Dec 08 17:42:10 crc systemd[1]: Starting Kubernetes Kubelet...
Dec 08 17:42:10 crc kubenswrapper[5116]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Dec 08 17:42:10 crc kubenswrapper[5116]: Flag --minimum-container-ttl-duration has been deprecated, Use --eviction-hard or --eviction-soft instead. Will be removed in a future version.
Dec 08 17:42:10 crc kubenswrapper[5116]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Dec 08 17:42:10 crc kubenswrapper[5116]: Flag --register-with-taints has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Dec 08 17:42:10 crc kubenswrapper[5116]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
Dec 08 17:42:10 crc kubenswrapper[5116]: Flag --system-reserved has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Dec 08 17:42:10 crc kubenswrapper[5116]: I1208 17:42:10.397684 5116 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Dec 08 17:42:10 crc kubenswrapper[5116]: W1208 17:42:10.401881 5116 feature_gate.go:328] unrecognized feature gate: RouteAdvertisements
Dec 08 17:42:10 crc kubenswrapper[5116]: W1208 17:42:10.401926 5116 feature_gate.go:328] unrecognized feature gate: BuildCSIVolumes
Dec 08 17:42:10 crc kubenswrapper[5116]: W1208 17:42:10.401937 5116 feature_gate.go:328] unrecognized feature gate: PinnedImages
Dec 08 17:42:10 crc kubenswrapper[5116]: W1208 17:42:10.401952 5116 feature_gate.go:328] unrecognized feature gate: GCPCustomAPIEndpoints
Dec 08 17:42:10 crc kubenswrapper[5116]: W1208 17:42:10.401960 5116 feature_gate.go:328] unrecognized feature gate: VSphereConfigurableMaxAllowedBlockVolumesPerNode
Dec 08 17:42:10 crc kubenswrapper[5116]: W1208 17:42:10.401965 5116 feature_gate.go:328] unrecognized feature gate: NewOLM
Dec 08 17:42:10 crc kubenswrapper[5116]: W1208 17:42:10.401972 5116 feature_gate.go:328] unrecognized feature gate: DNSNameResolver
Dec 08 17:42:10 crc kubenswrapper[5116]: W1208 17:42:10.401977 5116 feature_gate.go:328] unrecognized feature gate: InsightsConfig
Dec 08 17:42:10 crc kubenswrapper[5116]: W1208 17:42:10.401982 5116 feature_gate.go:328] unrecognized feature gate: ClusterVersionOperatorConfiguration
Dec 08 17:42:10 crc kubenswrapper[5116]: W1208 17:42:10.401986 5116 feature_gate.go:328] unrecognized feature gate: MachineAPIMigration
Dec 08 17:42:10 crc kubenswrapper[5116]: W1208 17:42:10.401991 5116 feature_gate.go:328] unrecognized feature gate: DualReplica
Dec 08 17:42:10 crc kubenswrapper[5116]: W1208 17:42:10.401995 5116 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesAWS
Dec 08 17:42:10 crc kubenswrapper[5116]: W1208 17:42:10.402000 5116 feature_gate.go:328] unrecognized feature gate: MultiArchInstallAzure
Dec 08 17:42:10 crc kubenswrapper[5116]: W1208 17:42:10.402004 5116 feature_gate.go:328] unrecognized feature gate: ExternalSnapshotMetadata
Dec 08 17:42:10 crc kubenswrapper[5116]: W1208 17:42:10.402007 5116 feature_gate.go:328] unrecognized feature gate: GCPCustomAPIEndpointsInstall
Dec 08 17:42:10 crc kubenswrapper[5116]: W1208 17:42:10.402011 5116 feature_gate.go:328] unrecognized feature gate: PreconfiguredUDNAddresses
Dec 08 17:42:10 crc kubenswrapper[5116]: W1208 17:42:10.402018 5116 feature_gate.go:328] unrecognized feature gate: MixedCPUsAllocation
Dec 08 17:42:10 crc kubenswrapper[5116]: W1208 17:42:10.402022 5116 feature_gate.go:328] unrecognized feature gate: AzureMultiDisk
Dec 08 17:42:10 crc kubenswrapper[5116]: W1208 17:42:10.402026 5116 feature_gate.go:328] unrecognized feature gate: MultiDiskSetup
Dec 08 17:42:10 crc kubenswrapper[5116]: W1208 17:42:10.402030 5116 feature_gate.go:328] unrecognized feature gate: NetworkSegmentation
Dec 08 17:42:10 crc kubenswrapper[5116]: W1208 17:42:10.402034 5116 feature_gate.go:328] unrecognized feature gate: GCPClusterHostedDNSInstall
Dec 08 17:42:10 crc kubenswrapper[5116]: W1208 17:42:10.402038 5116 feature_gate.go:328] unrecognized feature gate: ImageStreamImportMode
Dec 08 17:42:10 crc kubenswrapper[5116]: W1208 17:42:10.402042 5116 feature_gate.go:328] unrecognized feature gate: OVNObservability
Dec 08 17:42:10 crc kubenswrapper[5116]: W1208 17:42:10.402047 5116 feature_gate.go:328] unrecognized feature gate: NewOLMPreflightPermissionChecks
Dec 08 17:42:10 crc kubenswrapper[5116]: W1208 17:42:10.402051 5116 feature_gate.go:328] unrecognized feature gate: SignatureStores
Dec 08 17:42:10 crc kubenswrapper[5116]: W1208 17:42:10.402057 5116 feature_gate.go:328] unrecognized feature gate: IrreconcilableMachineConfig
Dec 08 17:42:10 crc kubenswrapper[5116]: W1208 17:42:10.402062 5116 feature_gate.go:328] unrecognized feature gate: VSphereHostVMGroupZonal
Dec 08 17:42:10 crc kubenswrapper[5116]: W1208 17:42:10.402067 5116 feature_gate.go:328] unrecognized feature gate: NewOLMOwnSingleNamespace
Dec 08 17:42:10 crc kubenswrapper[5116]: W1208 17:42:10.402077 5116 feature_gate.go:328] unrecognized feature gate: NetworkLiveMigration
Dec 08 17:42:10 crc kubenswrapper[5116]: W1208 17:42:10.402083 5116 feature_gate.go:328] unrecognized feature gate: AdditionalRoutingCapabilities
Dec 08 17:42:10 crc kubenswrapper[5116]: W1208 17:42:10.402089 5116 feature_gate.go:328] unrecognized feature gate: ManagedBootImages
Dec 08 17:42:10 crc kubenswrapper[5116]: W1208 17:42:10.402332 5116 feature_gate.go:328] unrecognized feature gate: NewOLMWebhookProviderOpenshiftServiceCA
Dec 08 17:42:10 crc kubenswrapper[5116]: W1208 17:42:10.402341 5116 feature_gate.go:328] unrecognized feature gate: ExternalOIDC
Dec 08 17:42:10 crc kubenswrapper[5116]: W1208 17:42:10.402346 5116 feature_gate.go:328] unrecognized feature gate: ShortCertRotation
Dec 08 17:42:10 crc kubenswrapper[5116]: W1208 17:42:10.402351 5116 feature_gate.go:328] unrecognized feature gate: BootImageSkewEnforcement
Dec 08 17:42:10 crc kubenswrapper[5116]: W1208 17:42:10.402354 5116 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesAzure
Dec 08 17:42:10 crc kubenswrapper[5116]: W1208 17:42:10.402359 5116 feature_gate.go:328] unrecognized feature gate: BootcNodeManagement
Dec 08 17:42:10 crc kubenswrapper[5116]: W1208 17:42:10.402363 5116 feature_gate.go:328] unrecognized feature gate: AlibabaPlatform
Dec 08 17:42:10 crc kubenswrapper[5116]: W1208 17:42:10.402367 5116 feature_gate.go:328] unrecognized feature gate: AutomatedEtcdBackup
Dec 08 17:42:10 crc kubenswrapper[5116]: W1208 17:42:10.402371 5116 feature_gate.go:328] unrecognized feature gate: OpenShiftPodSecurityAdmission
Dec 08 17:42:10 crc kubenswrapper[5116]: W1208 17:42:10.402375 5116 feature_gate.go:328] unrecognized feature gate: AWSServiceLBNetworkSecurityGroup
Dec 08 17:42:10 crc kubenswrapper[5116]: W1208 17:42:10.402383 5116 feature_gate.go:328] unrecognized feature gate: NewOLMCatalogdAPIV1Metas
Dec 08 17:42:10 crc kubenswrapper[5116]: W1208 17:42:10.402387 5116 feature_gate.go:328] unrecognized feature gate: AWSClusterHostedDNSInstall
Dec 08 17:42:10 crc kubenswrapper[5116]: W1208 17:42:10.402390 5116 feature_gate.go:328] unrecognized feature gate: CPMSMachineNamePrefix
Dec 08 17:42:10 crc kubenswrapper[5116]: W1208 17:42:10.402394 5116 feature_gate.go:328] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Dec 08 17:42:10 crc kubenswrapper[5116]: W1208 17:42:10.402398 5116 feature_gate.go:328] unrecognized feature gate: HighlyAvailableArbiter
Dec 08 17:42:10 crc kubenswrapper[5116]: W1208 17:42:10.402402 5116 feature_gate.go:328] unrecognized feature gate: ClusterAPIInstallIBMCloud
Dec 08 17:42:10 crc kubenswrapper[5116]: W1208 17:42:10.402406 5116 feature_gate.go:328] unrecognized feature gate: DyanmicServiceEndpointIBMCloud
Dec 08 17:42:10 crc kubenswrapper[5116]: W1208 17:42:10.402410 5116 feature_gate.go:328] unrecognized feature gate: NutanixMultiSubnets
Dec 08 17:42:10 crc kubenswrapper[5116]: W1208 17:42:10.402418 5116 feature_gate.go:351] Setting GA feature gate ServiceAccountTokenNodeBinding=true. It will be removed in a future release.
Dec 08 17:42:10 crc kubenswrapper[5116]: W1208 17:42:10.402426 5116 feature_gate.go:328] unrecognized feature gate: AWSClusterHostedDNS
Dec 08 17:42:10 crc kubenswrapper[5116]: W1208 17:42:10.402431 5116 feature_gate.go:328] unrecognized feature gate: Example
Dec 08 17:42:10 crc kubenswrapper[5116]: W1208 17:42:10.402436 5116 feature_gate.go:328] unrecognized feature gate: ClusterMonitoringConfig
Dec 08 17:42:10 crc kubenswrapper[5116]: W1208 17:42:10.402445 5116 feature_gate.go:328] unrecognized feature gate: AdminNetworkPolicy
Dec 08 17:42:10 crc kubenswrapper[5116]: W1208 17:42:10.402449 5116 feature_gate.go:328] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Dec 08 17:42:10 crc kubenswrapper[5116]: W1208 17:42:10.402455 5116 feature_gate.go:328] unrecognized feature gate: VSphereMultiDisk
Dec 08 17:42:10 crc kubenswrapper[5116]: W1208 17:42:10.402509 5116 feature_gate.go:328] unrecognized feature gate: NoRegistryClusterOperations
Dec 08 17:42:10 crc kubenswrapper[5116]: W1208 17:42:10.402513 5116 feature_gate.go:328] unrecognized feature gate: NetworkDiagnosticsConfig
Dec 08 17:42:10 crc kubenswrapper[5116]: W1208 17:42:10.402518 5116 feature_gate.go:328] unrecognized feature gate: ExternalOIDCWithUIDAndExtraClaimMappings
Dec 08 17:42:10 crc kubenswrapper[5116]: W1208 17:42:10.402522 5116 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesvSphere
Dec 08 17:42:10 crc kubenswrapper[5116]: W1208 17:42:10.402526 5116 feature_gate.go:328] unrecognized feature gate: AzureClusterHostedDNSInstall
Dec 08 17:42:10 crc kubenswrapper[5116]: W1208 17:42:10.402531 5116 feature_gate.go:328] unrecognized feature gate: UpgradeStatus
Dec 08 17:42:10 crc kubenswrapper[5116]: W1208 17:42:10.402538 5116 feature_gate.go:328] unrecognized feature gate: Example2
Dec 08 17:42:10 crc kubenswrapper[5116]: W1208 17:42:10.402545 5116 feature_gate.go:328] unrecognized feature gate: ImageModeStatusReporting
Dec 08 17:42:10 crc kubenswrapper[5116]: W1208 17:42:10.402551 5116 feature_gate.go:328] unrecognized feature gate: EtcdBackendQuota
Dec 08 17:42:10 crc kubenswrapper[5116]: W1208 17:42:10.402948 5116 feature_gate.go:328] unrecognized feature gate: ClusterAPIInstall
Dec 08 17:42:10 crc kubenswrapper[5116]: W1208 17:42:10.402965 5116 feature_gate.go:328] unrecognized feature gate: GatewayAPIController
Dec 08 17:42:10 crc kubenswrapper[5116]: W1208 17:42:10.402970 5116 feature_gate.go:328] unrecognized feature gate: KMSEncryptionProvider
Dec 08 17:42:10 crc kubenswrapper[5116]: W1208 17:42:10.402976 5116 feature_gate.go:328] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Dec 08 17:42:10 crc kubenswrapper[5116]: W1208 17:42:10.402980 5116 feature_gate.go:328] unrecognized feature gate: GatewayAPI
Dec 08 17:42:10 crc kubenswrapper[5116]: W1208 17:42:10.402987 5116 feature_gate.go:328] unrecognized feature gate: InsightsOnDemandDataGather
Dec 08 17:42:10 crc kubenswrapper[5116]: W1208 17:42:10.402992 5116 feature_gate.go:328] unrecognized feature gate: SigstoreImageVerificationPKI
Dec 08 17:42:10 crc kubenswrapper[5116]: W1208 17:42:10.402997 5116 feature_gate.go:328] unrecognized feature gate: AWSDedicatedHosts
Dec 08 17:42:10 crc kubenswrapper[5116]: W1208 17:42:10.403001 5116 feature_gate.go:328] unrecognized feature gate: VolumeGroupSnapshot
Dec 08 17:42:10 crc kubenswrapper[5116]: W1208 17:42:10.403005 5116 feature_gate.go:328] unrecognized feature gate: AzureWorkloadIdentity
Dec 08 17:42:10 crc kubenswrapper[5116]: W1208 17:42:10.403009 5116 feature_gate.go:328] unrecognized feature gate: VSphereMultiNetworks
Dec 08 17:42:10 crc kubenswrapper[5116]: W1208 17:42:10.403028 5116 feature_gate.go:328] unrecognized feature gate: MachineConfigNodes
Dec 08 17:42:10 crc kubenswrapper[5116]: W1208 17:42:10.403040 5116 feature_gate.go:328] unrecognized feature gate: GCPClusterHostedDNS
Dec 08 17:42:10 crc kubenswrapper[5116]: W1208 17:42:10.403044 5116 feature_gate.go:328] unrecognized feature gate: InsightsConfigAPI
Dec 08 17:42:10 crc kubenswrapper[5116]: W1208 17:42:10.403048 5116 feature_gate.go:328] unrecognized feature gate: VSphereMixedNodeEnv
Dec 08 17:42:10 crc kubenswrapper[5116]: W1208 17:42:10.403052 5116 feature_gate.go:328] unrecognized feature gate: SetEIPForNLBIngressController
Dec 08 17:42:10 crc kubenswrapper[5116]: W1208 17:42:10.403056 5116 feature_gate.go:328] unrecognized feature gate: IngressControllerLBSubnetsAWS
Dec 08 17:42:10 crc kubenswrapper[5116]: W1208 17:42:10.403059 5116 feature_gate.go:328] unrecognized feature gate: AzureDedicatedHosts
Dec 08 17:42:10 crc kubenswrapper[5116]: W1208 17:42:10.403063 5116 feature_gate.go:328] unrecognized feature gate: SigstoreImageVerification
Dec 08 17:42:10 crc kubenswrapper[5116]: W1208 17:42:10.403070 5116 feature_gate.go:349] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Dec 08 17:42:10 crc kubenswrapper[5116]: W1208 17:42:10.403079 5116 feature_gate.go:328] unrecognized feature gate: MetricsCollectionProfiles
Dec 08 17:42:10 crc kubenswrapper[5116]: W1208 17:42:10.403827 5116 feature_gate.go:328] unrecognized feature gate: AWSDedicatedHosts
Dec 08 17:42:10 crc kubenswrapper[5116]: W1208 17:42:10.403840 5116 feature_gate.go:328] unrecognized feature gate: NutanixMultiSubnets
Dec 08 17:42:10 crc kubenswrapper[5116]: W1208 17:42:10.403844 5116 feature_gate.go:328] unrecognized feature gate: VolumeGroupSnapshot
Dec 08 17:42:10 crc kubenswrapper[5116]: W1208 17:42:10.403848 5116 feature_gate.go:328] unrecognized feature gate: ClusterVersionOperatorConfiguration
Dec 08 17:42:10 crc kubenswrapper[5116]: W1208 17:42:10.403855 5116 feature_gate.go:328] unrecognized feature gate: GCPCustomAPIEndpoints
Dec 08 17:42:10 crc kubenswrapper[5116]: W1208 17:42:10.403859 5116 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesAWS
Dec 08 17:42:10 crc kubenswrapper[5116]: W1208 17:42:10.403863 5116 feature_gate.go:328] unrecognized feature gate: NetworkSegmentation
Dec 08 17:42:10 crc kubenswrapper[5116]: W1208 17:42:10.403866 5116 feature_gate.go:328] unrecognized feature gate: EtcdBackendQuota
Dec 08 17:42:10 crc kubenswrapper[5116]: W1208 17:42:10.403869 5116 feature_gate.go:328] unrecognized feature gate: GCPCustomAPIEndpointsInstall
Dec 08 17:42:10 crc kubenswrapper[5116]: W1208 17:42:10.403873 5116 feature_gate.go:328] unrecognized feature gate: MachineAPIMigration
Dec 08 17:42:10 crc kubenswrapper[5116]: W1208 17:42:10.403876 5116 feature_gate.go:328] unrecognized feature gate: BootImageSkewEnforcement
Dec 08 17:42:10 crc kubenswrapper[5116]: W1208 17:42:10.403881 5116 feature_gate.go:328] unrecognized feature gate: AzureWorkloadIdentity
Dec 08 17:42:10 crc kubenswrapper[5116]: W1208 17:42:10.403884 5116 feature_gate.go:328] unrecognized feature gate: ExternalOIDC
Dec 08 17:42:10 crc kubenswrapper[5116]: W1208 17:42:10.403888 5116 feature_gate.go:328] unrecognized feature gate: OpenShiftPodSecurityAdmission
Dec 08 17:42:10 crc kubenswrapper[5116]: W1208 17:42:10.403893 5116 feature_gate.go:328] unrecognized feature gate: ShortCertRotation
Dec 08 17:42:10 crc kubenswrapper[5116]: W1208 17:42:10.403897 5116 feature_gate.go:328] unrecognized feature gate: AzureMultiDisk
Dec 08 17:42:10 crc kubenswrapper[5116]: W1208 17:42:10.403901 5116 feature_gate.go:328] unrecognized feature gate: BootcNodeManagement
Dec 08 17:42:10 crc kubenswrapper[5116]: W1208 17:42:10.403904 5116 feature_gate.go:328] unrecognized feature gate: MachineConfigNodes
Dec 08 17:42:10 crc kubenswrapper[5116]: W1208 17:42:10.403907 5116 feature_gate.go:328] unrecognized feature gate: GCPClusterHostedDNS
Dec 08 17:42:10 crc kubenswrapper[5116]: W1208 17:42:10.403911 5116 feature_gate.go:328] unrecognized feature gate: AutomatedEtcdBackup
Dec 08 17:42:10 crc kubenswrapper[5116]: W1208 17:42:10.403915 5116 feature_gate.go:328] unrecognized feature gate: InsightsConfigAPI
Dec 08 17:42:10 crc kubenswrapper[5116]: W1208 17:42:10.403920 5116 feature_gate.go:328] unrecognized feature gate: DNSNameResolver
Dec 08 17:42:10 crc kubenswrapper[5116]: W1208 17:42:10.403924 5116 feature_gate.go:328] unrecognized feature gate: AWSClusterHostedDNS
Dec 08 17:42:10 crc kubenswrapper[5116]: W1208 17:42:10.403928 5116 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesvSphere
Dec 08 17:42:10 crc kubenswrapper[5116]: W1208 17:42:10.403934 5116 feature_gate.go:328] unrecognized feature gate: VSphereMultiNetworks
Dec 08 17:42:10 crc kubenswrapper[5116]: W1208 17:42:10.403938 5116 feature_gate.go:328] unrecognized feature gate: AzureDedicatedHosts
Dec 08 17:42:10 crc kubenswrapper[5116]: W1208 17:42:10.403942 5116 feature_gate.go:328] unrecognized feature gate: AWSClusterHostedDNSInstall
Dec 08 17:42:10 crc kubenswrapper[5116]: W1208 17:42:10.403947 5116 feature_gate.go:328] unrecognized feature gate: CPMSMachineNamePrefix
Dec 08 17:42:10 crc kubenswrapper[5116]: W1208 17:42:10.403951 5116 feature_gate.go:328] unrecognized feature gate: KMSEncryptionProvider
Dec 08 17:42:10 crc kubenswrapper[5116]: W1208 17:42:10.403955 5116 feature_gate.go:328] unrecognized feature gate: AlibabaPlatform
Dec 08 17:42:10 crc kubenswrapper[5116]: W1208 17:42:10.403958 5116 feature_gate.go:328] unrecognized feature gate: GatewayAPIController
Dec 08 17:42:10 crc kubenswrapper[5116]: W1208 17:42:10.403962 5116 feature_gate.go:328] unrecognized feature gate: Example2
Dec 08 17:42:10 crc kubenswrapper[5116]: W1208 17:42:10.403965 5116 feature_gate.go:328] unrecognized feature gate: Example
Dec 08 17:42:10 crc kubenswrapper[5116]: W1208 17:42:10.403969 5116 feature_gate.go:328] unrecognized feature gate: NewOLM
Dec 08 17:42:10 crc kubenswrapper[5116]: W1208 17:42:10.403972 5116 feature_gate.go:328] unrecognized feature gate: NewOLMCatalogdAPIV1Metas
Dec 08 17:42:10 crc kubenswrapper[5116]: W1208 17:42:10.403975 5116 feature_gate.go:328] unrecognized feature gate: DualReplica
Dec 08 17:42:10 crc kubenswrapper[5116]: W1208 17:42:10.403979 5116 feature_gate.go:328] unrecognized feature gate: VSphereMultiDisk
Dec 08 17:42:10 crc kubenswrapper[5116]: W1208 17:42:10.403984 5116 feature_gate.go:328] unrecognized feature gate: HighlyAvailableArbiter
Dec 08 17:42:10 crc kubenswrapper[5116]: W1208 17:42:10.403988 5116 feature_gate.go:328] unrecognized feature gate: InsightsConfig
Dec 08 17:42:10 crc kubenswrapper[5116]: W1208 17:42:10.403991 5116 feature_gate.go:328] unrecognized feature gate: NewOLMPreflightPermissionChecks
Dec 08 17:42:10 crc kubenswrapper[5116]: W1208 17:42:10.403995 5116 feature_gate.go:328] unrecognized feature gate: ExternalSnapshotMetadata
Dec 08 17:42:10 crc kubenswrapper[5116]: W1208 17:42:10.403999 5116 feature_gate.go:349] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Dec 08 17:42:10 crc kubenswrapper[5116]: W1208 17:42:10.404004 5116 feature_gate.go:328] unrecognized feature gate: NetworkDiagnosticsConfig
Dec 08 17:42:10 crc kubenswrapper[5116]: W1208 17:42:10.404013 5116 feature_gate.go:328] unrecognized feature gate: NetworkLiveMigration
Dec 08 17:42:10 crc kubenswrapper[5116]: W1208 17:42:10.404017 5116 feature_gate.go:328] unrecognized feature gate: UpgradeStatus
Dec 08 17:42:10 crc kubenswrapper[5116]: W1208 17:42:10.404020 5116 feature_gate.go:328] unrecognized feature gate: DyanmicServiceEndpointIBMCloud
Dec 08 17:42:10 crc kubenswrapper[5116]: W1208 17:42:10.404073 5116 feature_gate.go:328] unrecognized feature gate: MixedCPUsAllocation
Dec 08 17:42:10 crc kubenswrapper[5116]: W1208 17:42:10.404092 5116 feature_gate.go:328] unrecognized feature gate: SigstoreImageVerificationPKI
Dec 08 17:42:10 crc kubenswrapper[5116]: W1208 17:42:10.404096 5116 feature_gate.go:328] unrecognized feature gate: SignatureStores
Dec 08 17:42:10 crc kubenswrapper[5116]: W1208 17:42:10.404099 5116 feature_gate.go:328] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Dec 08 17:42:10 crc kubenswrapper[5116]: W1208 17:42:10.404103 5116 feature_gate.go:328] unrecognized feature gate: NoRegistryClusterOperations
Dec 08 17:42:10 crc kubenswrapper[5116]: W1208 17:42:10.404107 5116 feature_gate.go:328] unrecognized feature gate: VSphereConfigurableMaxAllowedBlockVolumesPerNode
Dec 08 17:42:10 crc kubenswrapper[5116]: W1208 17:42:10.404110 5116 feature_gate.go:328] unrecognized feature gate: InsightsOnDemandDataGather
Dec 08 17:42:10 crc kubenswrapper[5116]: W1208 17:42:10.404114 5116 feature_gate.go:328] unrecognized feature gate: AdminNetworkPolicy
Dec 08 17:42:10 crc kubenswrapper[5116]: W1208 17:42:10.404118 5116 feature_gate.go:328] unrecognized feature gate: AWSServiceLBNetworkSecurityGroup
Dec 08 17:42:10 crc kubenswrapper[5116]: W1208 17:42:10.404122 5116 feature_gate.go:328] unrecognized feature gate: ClusterAPIInstall
Dec 08 17:42:10 crc kubenswrapper[5116]: W1208 17:42:10.404125 5116 feature_gate.go:328] unrecognized feature gate: BuildCSIVolumes
Dec 08 17:42:10 crc kubenswrapper[5116]: W1208 17:42:10.404130 5116 feature_gate.go:328] unrecognized feature gate: ClusterAPIInstallIBMCloud
Dec 08 17:42:10 crc kubenswrapper[5116]: W1208 17:42:10.404133 5116 feature_gate.go:328] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Dec 08 17:42:10 crc kubenswrapper[5116]: W1208 17:42:10.404137 5116 feature_gate.go:328] unrecognized feature gate: ExternalOIDCWithUIDAndExtraClaimMappings
Dec 08 17:42:10 crc kubenswrapper[5116]: W1208 17:42:10.404142 5116 feature_gate.go:351] Setting GA feature gate ServiceAccountTokenNodeBinding=true. It will be removed in a future release.
Dec 08 17:42:10 crc kubenswrapper[5116]: W1208 17:42:10.404147 5116 feature_gate.go:328] unrecognized feature gate: AdditionalRoutingCapabilities
Dec 08 17:42:10 crc kubenswrapper[5116]: W1208 17:42:10.404151 5116 feature_gate.go:328] unrecognized feature gate: ManagedBootImages
Dec 08 17:42:10 crc kubenswrapper[5116]: W1208 17:42:10.404155 5116 feature_gate.go:328] unrecognized feature gate: GCPClusterHostedDNSInstall
Dec 08 17:42:10 crc kubenswrapper[5116]: W1208 17:42:10.404158 5116 feature_gate.go:328] unrecognized feature gate: NewOLMWebhookProviderOpenshiftServiceCA
Dec 08 17:42:10 crc kubenswrapper[5116]: W1208 17:42:10.404162 5116 feature_gate.go:328] unrecognized feature gate: GatewayAPI
Dec 08 17:42:10 crc kubenswrapper[5116]: W1208 17:42:10.404165 5116 feature_gate.go:328] unrecognized feature gate: ImageStreamImportMode
Dec 08 17:42:10 crc kubenswrapper[5116]: W1208 17:42:10.404168 5116 feature_gate.go:328] unrecognized feature gate: OVNObservability
Dec 08 17:42:10 crc kubenswrapper[5116]: W1208 17:42:10.404172 5116 feature_gate.go:328] unrecognized feature gate: ImageModeStatusReporting
Dec 08 17:42:10 crc kubenswrapper[5116]: W1208 17:42:10.404177 5116 feature_gate.go:328] unrecognized feature gate: PinnedImages
Dec 08 17:42:10 crc kubenswrapper[5116]: W1208 17:42:10.404181 5116 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesAzure
Dec 08 17:42:10 crc kubenswrapper[5116]: W1208 17:42:10.404184 5116 feature_gate.go:328] unrecognized feature gate: VSphereHostVMGroupZonal
Dec 08 17:42:10 crc kubenswrapper[5116]: W1208 17:42:10.404187 5116 feature_gate.go:328] unrecognized feature gate: RouteAdvertisements
Dec 08 17:42:10 crc kubenswrapper[5116]: W1208 17:42:10.404191 5116 feature_gate.go:328] unrecognized feature gate: VSphereMixedNodeEnv
Dec 08 17:42:10 crc kubenswrapper[5116]: W1208 17:42:10.404194 5116 feature_gate.go:328] unrecognized feature gate: IngressControllerLBSubnetsAWS
Dec 08 17:42:10 crc kubenswrapper[5116]: W1208 17:42:10.404221 5116 feature_gate.go:328] unrecognized feature gate: SigstoreImageVerification
Dec 08 17:42:10 crc kubenswrapper[5116]: W1208 17:42:10.404225 5116 feature_gate.go:328] unrecognized feature gate: MultiArchInstallAzure
Dec 08 17:42:10 crc kubenswrapper[5116]: W1208 17:42:10.404228 5116 feature_gate.go:328] unrecognized feature gate: MultiDiskSetup
Dec 08 17:42:10 crc kubenswrapper[5116]: W1208 17:42:10.404232 5116 feature_gate.go:328] unrecognized feature gate: MetricsCollectionProfiles
Dec 08 17:42:10 crc kubenswrapper[5116]: W1208 17:42:10.404235 5116 feature_gate.go:328] unrecognized feature gate: AzureClusterHostedDNSInstall
Dec 08 17:42:10 crc kubenswrapper[5116]: W1208 17:42:10.404261 5116 feature_gate.go:328] unrecognized feature gate: SetEIPForNLBIngressController
Dec 08 17:42:10 crc kubenswrapper[5116]: W1208 17:42:10.404265 5116 feature_gate.go:328] unrecognized feature gate: ClusterMonitoringConfig
Dec 08 17:42:10 crc kubenswrapper[5116]: W1208 17:42:10.404270 5116 feature_gate.go:328] unrecognized feature gate: IrreconcilableMachineConfig
Dec 08 17:42:10 crc kubenswrapper[5116]: W1208 17:42:10.404274 5116 feature_gate.go:328] unrecognized feature gate: NewOLMOwnSingleNamespace
Dec 08 17:42:10 crc kubenswrapper[5116]: W1208 17:42:10.404278 5116 feature_gate.go:328] unrecognized feature gate: PreconfiguredUDNAddresses
Dec 08 17:42:10 crc kubenswrapper[5116]: W1208 17:42:10.404282 5116 feature_gate.go:328] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Dec 08 17:42:10 crc kubenswrapper[5116]: I1208 17:42:10.404565 5116 flags.go:64] FLAG: --address="0.0.0.0"
Dec 08 17:42:10 crc kubenswrapper[5116]: I1208 17:42:10.404583 5116 flags.go:64] FLAG: --allowed-unsafe-sysctls="[]"
Dec 08 17:42:10 crc kubenswrapper[5116]: I1208 17:42:10.404594 5116 flags.go:64] FLAG: --anonymous-auth="true"
Dec 08 17:42:10 crc kubenswrapper[5116]: I1208 17:42:10.404600 5116 flags.go:64] FLAG: --application-metrics-count-limit="100"
Dec 08 17:42:10 crc kubenswrapper[5116]: I1208 17:42:10.404615 5116 flags.go:64] FLAG: --authentication-token-webhook="false"
Dec 08 17:42:10 crc kubenswrapper[5116]: I1208 17:42:10.404619 5116 flags.go:64] FLAG: --authentication-token-webhook-cache-ttl="2m0s"
Dec 08 17:42:10 crc kubenswrapper[5116]: I1208 17:42:10.404625 5116 flags.go:64] FLAG: --authorization-mode="AlwaysAllow"
Dec 08 17:42:10 crc kubenswrapper[5116]: I1208 17:42:10.404632 5116 flags.go:64] FLAG: --authorization-webhook-cache-authorized-ttl="5m0s"
Dec 08 17:42:10 crc kubenswrapper[5116]: I1208 17:42:10.404636 5116 flags.go:64] FLAG: --authorization-webhook-cache-unauthorized-ttl="30s"
Dec 08 17:42:10 crc kubenswrapper[5116]: I1208 17:42:10.404640 5116 flags.go:64] FLAG: --boot-id-file="/proc/sys/kernel/random/boot_id"
Dec 08 17:42:10 crc kubenswrapper[5116]: I1208 17:42:10.404645 5116 flags.go:64] FLAG: --bootstrap-kubeconfig="/etc/kubernetes/kubeconfig"
Dec 08 17:42:10 crc kubenswrapper[5116]: I1208 17:42:10.404648 5116 flags.go:64] FLAG: --cert-dir="/var/lib/kubelet/pki"
Dec 08 17:42:10 crc kubenswrapper[5116]: I1208 17:42:10.404652 5116 flags.go:64] FLAG: --cgroup-driver="cgroupfs"
Dec 08 17:42:10 crc kubenswrapper[5116]: I1208 17:42:10.404656 5116 flags.go:64] FLAG: --cgroup-root=""
Dec 08 17:42:10 crc kubenswrapper[5116]: I1208 17:42:10.404660 5116 flags.go:64] FLAG: --cgroups-per-qos="true"
Dec 08 17:42:10 crc kubenswrapper[5116]: I1208 17:42:10.404663 5116 flags.go:64] FLAG: --client-ca-file=""
Dec 08 17:42:10 crc kubenswrapper[5116]: I1208 17:42:10.404667 5116 flags.go:64] FLAG: --cloud-config=""
Dec 08 17:42:10 crc kubenswrapper[5116]: I1208 17:42:10.404670 5116 flags.go:64] FLAG: --cloud-provider=""
Dec 08 17:42:10 crc kubenswrapper[5116]: I1208 17:42:10.404674 5116 flags.go:64] FLAG: --cluster-dns="[]"
Dec 08 17:42:10 crc kubenswrapper[5116]: I1208 17:42:10.404680 5116 flags.go:64] FLAG: --cluster-domain=""
Dec 08 17:42:10 crc kubenswrapper[5116]: I1208 17:42:10.404685 5116 flags.go:64] FLAG: --config="/etc/kubernetes/kubelet.conf"
Dec 08 17:42:10 crc kubenswrapper[5116]: I1208 17:42:10.404692 5116 flags.go:64] FLAG: --config-dir=""
Dec 08 17:42:10 crc kubenswrapper[5116]: I1208 17:42:10.404697 5116 flags.go:64] FLAG: --container-hints="/etc/cadvisor/container_hints.json"
Dec 08 17:42:10 crc kubenswrapper[5116]: I1208 17:42:10.404702 5116 flags.go:64] FLAG: --container-log-max-files="5"
Dec 08 17:42:10 crc kubenswrapper[5116]: I1208 17:42:10.404709 5116 flags.go:64] FLAG: --container-log-max-size="10Mi"
Dec 08 17:42:10 crc kubenswrapper[5116]: I1208 17:42:10.404714 5116 flags.go:64] FLAG: --container-runtime-endpoint="/var/run/crio/crio.sock"
Dec 08 17:42:10 crc kubenswrapper[5116]: I1208 17:42:10.404718 5116 flags.go:64] FLAG: --containerd="/run/containerd/containerd.sock"
Dec 08 17:42:10 crc kubenswrapper[5116]: I1208 17:42:10.404722 5116 flags.go:64] FLAG: --containerd-namespace="k8s.io"
Dec 08 17:42:10 crc kubenswrapper[5116]: I1208 17:42:10.404726 5116 flags.go:64] FLAG: --contention-profiling="false"
Dec 08 17:42:10 crc kubenswrapper[5116]: I1208 17:42:10.404730 5116 flags.go:64] FLAG: --cpu-cfs-quota="true"
Dec 08 17:42:10 crc kubenswrapper[5116]: I1208 17:42:10.404734 5116 flags.go:64] FLAG: --cpu-cfs-quota-period="100ms"
Dec 08 17:42:10 crc kubenswrapper[5116]: I1208 17:42:10.404738 5116 flags.go:64] FLAG: --cpu-manager-policy="none"
Dec 08 17:42:10 crc kubenswrapper[5116]: I1208 17:42:10.404742 5116 flags.go:64] FLAG: --cpu-manager-policy-options=""
Dec 08 17:42:10 crc kubenswrapper[5116]: I1208 17:42:10.404827 5116 flags.go:64] FLAG: --cpu-manager-reconcile-period="10s"
Dec 08 17:42:10 crc kubenswrapper[5116]: I1208 17:42:10.404835 5116 flags.go:64] FLAG: --enable-controller-attach-detach="true"
Dec 08 17:42:10 crc kubenswrapper[5116]: I1208 17:42:10.404840 5116 flags.go:64] FLAG: --enable-debugging-handlers="true"
Dec 08 17:42:10 crc kubenswrapper[5116]: I1208 17:42:10.404844 5116 flags.go:64] FLAG: --enable-load-reader="false"
Dec 08 17:42:10 crc kubenswrapper[5116]: I1208 17:42:10.404851 5116 flags.go:64] FLAG: --enable-server="true"
Dec 08 17:42:10 crc kubenswrapper[5116]: I1208 17:42:10.404855 5116 flags.go:64] FLAG: --enforce-node-allocatable="[pods]"
Dec 08 17:42:10 crc kubenswrapper[5116]: I1208 17:42:10.404862 5116 flags.go:64] FLAG: --event-burst="100"
Dec 08 17:42:10 crc kubenswrapper[5116]: I1208 17:42:10.404867 5116 flags.go:64] FLAG: --event-qps="50"
Dec 08 17:42:10 crc kubenswrapper[5116]: I1208 17:42:10.404871 5116 flags.go:64] FLAG: --event-storage-age-limit="default=0"
Dec 08 17:42:10 crc kubenswrapper[5116]: I1208 17:42:10.404875 5116 flags.go:64] FLAG: --event-storage-event-limit="default=0"
Dec 08 17:42:10 crc kubenswrapper[5116]: I1208 17:42:10.404879 5116 flags.go:64] FLAG: --eviction-hard=""
Dec 08 17:42:10 crc kubenswrapper[5116]: I1208 17:42:10.404885 5116 flags.go:64] FLAG: --eviction-max-pod-grace-period="0"
Dec 08 17:42:10 crc kubenswrapper[5116]: I1208 17:42:10.404889 5116 flags.go:64] FLAG: --eviction-minimum-reclaim=""
Dec 08 17:42:10 crc kubenswrapper[5116]: I1208 17:42:10.404894 5116 flags.go:64] FLAG: --eviction-pressure-transition-period="5m0s"
Dec 08 17:42:10 crc kubenswrapper[5116]: I1208 17:42:10.404898 5116 flags.go:64] FLAG: --eviction-soft=""
Dec 08 17:42:10 crc kubenswrapper[5116]: I1208 17:42:10.404901 5116 flags.go:64] FLAG: --eviction-soft-grace-period=""
Dec 08 17:42:10 crc kubenswrapper[5116]: I1208 17:42:10.404905 5116 flags.go:64] FLAG: --exit-on-lock-contention="false"
Dec 08 17:42:10 crc kubenswrapper[5116]: I1208 17:42:10.404909 5116 flags.go:64] FLAG: --experimental-allocatable-ignore-eviction="false"
Dec 08 17:42:10 crc kubenswrapper[5116]: I1208 17:42:10.404913 5116 flags.go:64] FLAG: --experimental-mounter-path=""
Dec 08 17:42:10 crc kubenswrapper[5116]: I1208 17:42:10.404916 5116 flags.go:64] FLAG: --fail-cgroupv1="false"
Dec 08 17:42:10 crc kubenswrapper[5116]: I1208 17:42:10.404969 5116 flags.go:64] FLAG: --fail-swap-on="true"
Dec 08 17:42:10 crc kubenswrapper[5116]: I1208 17:42:10.404974 5116 flags.go:64] FLAG: --feature-gates=""
Dec 08 17:42:10 crc kubenswrapper[5116]: I1208 17:42:10.404979 5116 flags.go:64] FLAG: --file-check-frequency="20s"
Dec 08 17:42:10 crc kubenswrapper[5116]: I1208 17:42:10.404986 5116 flags.go:64] FLAG: --global-housekeeping-interval="1m0s"
Dec 08 17:42:10 crc kubenswrapper[5116]: I1208 17:42:10.404991 5116 flags.go:64] FLAG: --hairpin-mode="promiscuous-bridge"
Dec 08 17:42:10 crc kubenswrapper[5116]: I1208 17:42:10.404994 5116 flags.go:64] FLAG: --healthz-bind-address="127.0.0.1"
Dec 08 17:42:10 crc kubenswrapper[5116]: I1208 17:42:10.404999 5116 flags.go:64] FLAG: --healthz-port="10248"
Dec 08 17:42:10 crc kubenswrapper[5116]: I1208 17:42:10.405003 5116 flags.go:64] FLAG: --help="false"
Dec 08 17:42:10 crc kubenswrapper[5116]: I1208 17:42:10.405007 5116 flags.go:64] FLAG: --hostname-override=""
Dec 08 17:42:10 crc kubenswrapper[5116]: I1208 17:42:10.405011 5116 flags.go:64] FLAG: --housekeeping-interval="10s"
Dec 08 17:42:10 crc kubenswrapper[5116]: I1208 17:42:10.405015 5116 flags.go:64] FLAG: --http-check-frequency="20s"
Dec 08 17:42:10 crc kubenswrapper[5116]: I1208 17:42:10.405019 5116 flags.go:64] FLAG: --image-credential-provider-bin-dir=""
Dec 08 17:42:10 crc kubenswrapper[5116]: I1208 17:42:10.405023 5116 flags.go:64] FLAG: --image-credential-provider-config=""
Dec 08 17:42:10 crc kubenswrapper[5116]: I1208 17:42:10.405027 5116 flags.go:64] FLAG: --image-gc-high-threshold="85"
Dec 08 17:42:10 crc kubenswrapper[5116]: I1208 17:42:10.405032 5116 flags.go:64] FLAG: --image-gc-low-threshold="80"
Dec 08 17:42:10 crc kubenswrapper[5116]: I1208 17:42:10.405035 5116 flags.go:64] FLAG: --image-service-endpoint=""
Dec 08 17:42:10 crc kubenswrapper[5116]: I1208 17:42:10.405039 5116 flags.go:64] FLAG: --kernel-memcg-notification="false"
Dec 08 17:42:10 crc kubenswrapper[5116]: I1208 17:42:10.405045 5116 flags.go:64] FLAG: --kube-api-burst="100"
Dec 08 17:42:10 crc kubenswrapper[5116]: I1208 17:42:10.405051 5116 flags.go:64] FLAG: --kube-api-content-type="application/vnd.kubernetes.protobuf"
Dec 08 17:42:10 crc kubenswrapper[5116]: I1208 17:42:10.405056 5116 flags.go:64] FLAG: --kube-api-qps="50"
Dec 08 17:42:10 crc kubenswrapper[5116]: I1208 17:42:10.405060 5116 flags.go:64] FLAG: --kube-reserved=""
Dec 08 17:42:10 crc kubenswrapper[5116]: I1208 17:42:10.405063 5116 flags.go:64] FLAG: --kube-reserved-cgroup=""
Dec 08 17:42:10 crc kubenswrapper[5116]: I1208 17:42:10.405067 5116 flags.go:64] FLAG: --kubeconfig="/var/lib/kubelet/kubeconfig"
Dec 08 17:42:10 crc kubenswrapper[5116]: I1208 17:42:10.405071 5116 flags.go:64] FLAG: --kubelet-cgroups=""
Dec 08 17:42:10 crc kubenswrapper[5116]: I1208 17:42:10.405076 5116 flags.go:64] FLAG: --local-storage-capacity-isolation="true"
Dec 08 17:42:10 crc kubenswrapper[5116]: I1208 17:42:10.405080 5116 flags.go:64] FLAG: --lock-file=""
Dec 08 17:42:10 crc kubenswrapper[5116]: I1208 17:42:10.405084 5116 flags.go:64] FLAG: --log-cadvisor-usage="false"
Dec 08 17:42:10 crc kubenswrapper[5116]: I1208 17:42:10.405088 5116 flags.go:64] FLAG: --log-flush-frequency="5s"
Dec 08 17:42:10 crc kubenswrapper[5116]: I1208 17:42:10.405092 5116 flags.go:64] FLAG: --log-json-info-buffer-size="0"
Dec 08 17:42:10 crc kubenswrapper[5116]: I1208 17:42:10.405110 5116 flags.go:64] FLAG: --log-json-split-stream="false"
Dec 08 17:42:10 crc kubenswrapper[5116]: I1208 17:42:10.405114 5116 flags.go:64] FLAG: --log-text-info-buffer-size="0"
Dec 08 17:42:10 crc kubenswrapper[5116]: I1208 17:42:10.405117 5116 flags.go:64] FLAG: --log-text-split-stream="false"
Dec 08 17:42:10 crc kubenswrapper[5116]: I1208 17:42:10.405125 5116 flags.go:64] FLAG: --logging-format="text"
Dec 08 17:42:10 crc kubenswrapper[5116]: I1208 17:42:10.405129 5116 flags.go:64] FLAG: --machine-id-file="/etc/machine-id,/var/lib/dbus/machine-id"
Dec 08 17:42:10 crc kubenswrapper[5116]: I1208 17:42:10.405134 5116 flags.go:64] FLAG:
--make-iptables-util-chains="true" Dec 08 17:42:10 crc kubenswrapper[5116]: I1208 17:42:10.405138 5116 flags.go:64] FLAG: --manifest-url="" Dec 08 17:42:10 crc kubenswrapper[5116]: I1208 17:42:10.405142 5116 flags.go:64] FLAG: --manifest-url-header="" Dec 08 17:42:10 crc kubenswrapper[5116]: I1208 17:42:10.405149 5116 flags.go:64] FLAG: --max-housekeeping-interval="15s" Dec 08 17:42:10 crc kubenswrapper[5116]: I1208 17:42:10.405153 5116 flags.go:64] FLAG: --max-open-files="1000000" Dec 08 17:42:10 crc kubenswrapper[5116]: I1208 17:42:10.405161 5116 flags.go:64] FLAG: --max-pods="110" Dec 08 17:42:10 crc kubenswrapper[5116]: I1208 17:42:10.405165 5116 flags.go:64] FLAG: --maximum-dead-containers="-1" Dec 08 17:42:10 crc kubenswrapper[5116]: I1208 17:42:10.405170 5116 flags.go:64] FLAG: --maximum-dead-containers-per-container="1" Dec 08 17:42:10 crc kubenswrapper[5116]: I1208 17:42:10.405174 5116 flags.go:64] FLAG: --memory-manager-policy="None" Dec 08 17:42:10 crc kubenswrapper[5116]: I1208 17:42:10.405178 5116 flags.go:64] FLAG: --minimum-container-ttl-duration="6m0s" Dec 08 17:42:10 crc kubenswrapper[5116]: I1208 17:42:10.405182 5116 flags.go:64] FLAG: --minimum-image-ttl-duration="2m0s" Dec 08 17:42:10 crc kubenswrapper[5116]: I1208 17:42:10.405187 5116 flags.go:64] FLAG: --node-ip="192.168.126.11" Dec 08 17:42:10 crc kubenswrapper[5116]: I1208 17:42:10.405191 5116 flags.go:64] FLAG: --node-labels="node-role.kubernetes.io/control-plane=,node-role.kubernetes.io/master=,node.openshift.io/os_id=rhel" Dec 08 17:42:10 crc kubenswrapper[5116]: I1208 17:42:10.405204 5116 flags.go:64] FLAG: --node-status-max-images="50" Dec 08 17:42:10 crc kubenswrapper[5116]: I1208 17:42:10.405208 5116 flags.go:64] FLAG: --node-status-update-frequency="10s" Dec 08 17:42:10 crc kubenswrapper[5116]: I1208 17:42:10.405213 5116 flags.go:64] FLAG: --oom-score-adj="-999" Dec 08 17:42:10 crc kubenswrapper[5116]: I1208 17:42:10.405219 5116 flags.go:64] FLAG: --pod-cidr="" Dec 08 17:42:10 crc 
kubenswrapper[5116]: I1208 17:42:10.405223 5116 flags.go:64] FLAG: --pod-infra-container-image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cc2b30e70040205c2536d01ae5c850be1ed2d775cf13249e50328e5085777977" Dec 08 17:42:10 crc kubenswrapper[5116]: I1208 17:42:10.405233 5116 flags.go:64] FLAG: --pod-manifest-path="" Dec 08 17:42:10 crc kubenswrapper[5116]: I1208 17:42:10.405260 5116 flags.go:64] FLAG: --pod-max-pids="-1" Dec 08 17:42:10 crc kubenswrapper[5116]: I1208 17:42:10.405268 5116 flags.go:64] FLAG: --pods-per-core="0" Dec 08 17:42:10 crc kubenswrapper[5116]: I1208 17:42:10.405275 5116 flags.go:64] FLAG: --port="10250" Dec 08 17:42:10 crc kubenswrapper[5116]: I1208 17:42:10.405280 5116 flags.go:64] FLAG: --protect-kernel-defaults="false" Dec 08 17:42:10 crc kubenswrapper[5116]: I1208 17:42:10.405285 5116 flags.go:64] FLAG: --provider-id="" Dec 08 17:42:10 crc kubenswrapper[5116]: I1208 17:42:10.405289 5116 flags.go:64] FLAG: --qos-reserved="" Dec 08 17:42:10 crc kubenswrapper[5116]: I1208 17:42:10.405294 5116 flags.go:64] FLAG: --read-only-port="10255" Dec 08 17:42:10 crc kubenswrapper[5116]: I1208 17:42:10.405299 5116 flags.go:64] FLAG: --register-node="true" Dec 08 17:42:10 crc kubenswrapper[5116]: I1208 17:42:10.405303 5116 flags.go:64] FLAG: --register-schedulable="true" Dec 08 17:42:10 crc kubenswrapper[5116]: I1208 17:42:10.405308 5116 flags.go:64] FLAG: --register-with-taints="node-role.kubernetes.io/master=:NoSchedule" Dec 08 17:42:10 crc kubenswrapper[5116]: I1208 17:42:10.405317 5116 flags.go:64] FLAG: --registry-burst="10" Dec 08 17:42:10 crc kubenswrapper[5116]: I1208 17:42:10.405321 5116 flags.go:64] FLAG: --registry-qps="5" Dec 08 17:42:10 crc kubenswrapper[5116]: I1208 17:42:10.405328 5116 flags.go:64] FLAG: --reserved-cpus="" Dec 08 17:42:10 crc kubenswrapper[5116]: I1208 17:42:10.405332 5116 flags.go:64] FLAG: --reserved-memory="" Dec 08 17:42:10 crc kubenswrapper[5116]: I1208 17:42:10.405338 5116 flags.go:64] FLAG: 
--resolv-conf="/etc/resolv.conf" Dec 08 17:42:10 crc kubenswrapper[5116]: I1208 17:42:10.405342 5116 flags.go:64] FLAG: --root-dir="/var/lib/kubelet" Dec 08 17:42:10 crc kubenswrapper[5116]: I1208 17:42:10.405346 5116 flags.go:64] FLAG: --rotate-certificates="false" Dec 08 17:42:10 crc kubenswrapper[5116]: I1208 17:42:10.405351 5116 flags.go:64] FLAG: --rotate-server-certificates="false" Dec 08 17:42:10 crc kubenswrapper[5116]: I1208 17:42:10.405355 5116 flags.go:64] FLAG: --runonce="false" Dec 08 17:42:10 crc kubenswrapper[5116]: I1208 17:42:10.405359 5116 flags.go:64] FLAG: --runtime-cgroups="/system.slice/crio.service" Dec 08 17:42:10 crc kubenswrapper[5116]: I1208 17:42:10.405363 5116 flags.go:64] FLAG: --runtime-request-timeout="2m0s" Dec 08 17:42:10 crc kubenswrapper[5116]: I1208 17:42:10.405368 5116 flags.go:64] FLAG: --seccomp-default="false" Dec 08 17:42:10 crc kubenswrapper[5116]: I1208 17:42:10.405371 5116 flags.go:64] FLAG: --serialize-image-pulls="true" Dec 08 17:42:10 crc kubenswrapper[5116]: I1208 17:42:10.405376 5116 flags.go:64] FLAG: --storage-driver-buffer-duration="1m0s" Dec 08 17:42:10 crc kubenswrapper[5116]: I1208 17:42:10.405381 5116 flags.go:64] FLAG: --storage-driver-db="cadvisor" Dec 08 17:42:10 crc kubenswrapper[5116]: I1208 17:42:10.405384 5116 flags.go:64] FLAG: --storage-driver-host="localhost:8086" Dec 08 17:42:10 crc kubenswrapper[5116]: I1208 17:42:10.405389 5116 flags.go:64] FLAG: --storage-driver-password="root" Dec 08 17:42:10 crc kubenswrapper[5116]: I1208 17:42:10.405394 5116 flags.go:64] FLAG: --storage-driver-secure="false" Dec 08 17:42:10 crc kubenswrapper[5116]: I1208 17:42:10.405399 5116 flags.go:64] FLAG: --storage-driver-table="stats" Dec 08 17:42:10 crc kubenswrapper[5116]: I1208 17:42:10.405404 5116 flags.go:64] FLAG: --storage-driver-user="root" Dec 08 17:42:10 crc kubenswrapper[5116]: I1208 17:42:10.405410 5116 flags.go:64] FLAG: --streaming-connection-idle-timeout="4h0m0s" Dec 08 17:42:10 crc kubenswrapper[5116]: 
I1208 17:42:10.405415 5116 flags.go:64] FLAG: --sync-frequency="1m0s" Dec 08 17:42:10 crc kubenswrapper[5116]: I1208 17:42:10.405421 5116 flags.go:64] FLAG: --system-cgroups="" Dec 08 17:42:10 crc kubenswrapper[5116]: I1208 17:42:10.405426 5116 flags.go:64] FLAG: --system-reserved="cpu=200m,ephemeral-storage=350Mi,memory=350Mi" Dec 08 17:42:10 crc kubenswrapper[5116]: I1208 17:42:10.405434 5116 flags.go:64] FLAG: --system-reserved-cgroup="" Dec 08 17:42:10 crc kubenswrapper[5116]: I1208 17:42:10.405439 5116 flags.go:64] FLAG: --tls-cert-file="" Dec 08 17:42:10 crc kubenswrapper[5116]: I1208 17:42:10.405444 5116 flags.go:64] FLAG: --tls-cipher-suites="[]" Dec 08 17:42:10 crc kubenswrapper[5116]: I1208 17:42:10.405492 5116 flags.go:64] FLAG: --tls-min-version="" Dec 08 17:42:10 crc kubenswrapper[5116]: I1208 17:42:10.405499 5116 flags.go:64] FLAG: --tls-private-key-file="" Dec 08 17:42:10 crc kubenswrapper[5116]: I1208 17:42:10.405503 5116 flags.go:64] FLAG: --topology-manager-policy="none" Dec 08 17:42:10 crc kubenswrapper[5116]: I1208 17:42:10.405507 5116 flags.go:64] FLAG: --topology-manager-policy-options="" Dec 08 17:42:10 crc kubenswrapper[5116]: I1208 17:42:10.405512 5116 flags.go:64] FLAG: --topology-manager-scope="container" Dec 08 17:42:10 crc kubenswrapper[5116]: I1208 17:42:10.405516 5116 flags.go:64] FLAG: --v="2" Dec 08 17:42:10 crc kubenswrapper[5116]: I1208 17:42:10.405523 5116 flags.go:64] FLAG: --version="false" Dec 08 17:42:10 crc kubenswrapper[5116]: I1208 17:42:10.405584 5116 flags.go:64] FLAG: --vmodule="" Dec 08 17:42:10 crc kubenswrapper[5116]: I1208 17:42:10.405591 5116 flags.go:64] FLAG: --volume-plugin-dir="/etc/kubernetes/kubelet-plugins/volume/exec" Dec 08 17:42:10 crc kubenswrapper[5116]: I1208 17:42:10.405596 5116 flags.go:64] FLAG: --volume-stats-agg-period="1m0s" Dec 08 17:42:10 crc kubenswrapper[5116]: W1208 17:42:10.405708 5116 feature_gate.go:328] unrecognized feature gate: ClusterVersionOperatorConfiguration Dec 08 17:42:10 crc 
kubenswrapper[5116]: W1208 17:42:10.405713 5116 feature_gate.go:328] unrecognized feature gate: BootcNodeManagement Dec 08 17:42:10 crc kubenswrapper[5116]: W1208 17:42:10.405749 5116 feature_gate.go:328] unrecognized feature gate: ExternalOIDC Dec 08 17:42:10 crc kubenswrapper[5116]: W1208 17:42:10.405754 5116 feature_gate.go:328] unrecognized feature gate: NutanixMultiSubnets Dec 08 17:42:10 crc kubenswrapper[5116]: W1208 17:42:10.405758 5116 feature_gate.go:328] unrecognized feature gate: ShortCertRotation Dec 08 17:42:10 crc kubenswrapper[5116]: W1208 17:42:10.405762 5116 feature_gate.go:328] unrecognized feature gate: NewOLMCatalogdAPIV1Metas Dec 08 17:42:10 crc kubenswrapper[5116]: W1208 17:42:10.405765 5116 feature_gate.go:328] unrecognized feature gate: EtcdBackendQuota Dec 08 17:42:10 crc kubenswrapper[5116]: W1208 17:42:10.405769 5116 feature_gate.go:328] unrecognized feature gate: BootImageSkewEnforcement Dec 08 17:42:10 crc kubenswrapper[5116]: W1208 17:42:10.405773 5116 feature_gate.go:328] unrecognized feature gate: AzureDedicatedHosts Dec 08 17:42:10 crc kubenswrapper[5116]: W1208 17:42:10.405776 5116 feature_gate.go:328] unrecognized feature gate: RouteAdvertisements Dec 08 17:42:10 crc kubenswrapper[5116]: W1208 17:42:10.405780 5116 feature_gate.go:328] unrecognized feature gate: GCPCustomAPIEndpointsInstall Dec 08 17:42:10 crc kubenswrapper[5116]: W1208 17:42:10.405784 5116 feature_gate.go:328] unrecognized feature gate: VSphereMixedNodeEnv Dec 08 17:42:10 crc kubenswrapper[5116]: W1208 17:42:10.405788 5116 feature_gate.go:328] unrecognized feature gate: DualReplica Dec 08 17:42:10 crc kubenswrapper[5116]: W1208 17:42:10.405791 5116 feature_gate.go:328] unrecognized feature gate: GatewayAPIController Dec 08 17:42:10 crc kubenswrapper[5116]: W1208 17:42:10.405795 5116 feature_gate.go:328] unrecognized feature gate: DyanmicServiceEndpointIBMCloud Dec 08 17:42:10 crc kubenswrapper[5116]: W1208 17:42:10.405798 5116 feature_gate.go:328] unrecognized 
feature gate: AutomatedEtcdBackup Dec 08 17:42:10 crc kubenswrapper[5116]: W1208 17:42:10.405805 5116 feature_gate.go:328] unrecognized feature gate: ImageModeStatusReporting Dec 08 17:42:10 crc kubenswrapper[5116]: W1208 17:42:10.405809 5116 feature_gate.go:328] unrecognized feature gate: NewOLMWebhookProviderOpenshiftServiceCA Dec 08 17:42:10 crc kubenswrapper[5116]: W1208 17:42:10.405812 5116 feature_gate.go:328] unrecognized feature gate: GCPCustomAPIEndpoints Dec 08 17:42:10 crc kubenswrapper[5116]: W1208 17:42:10.405816 5116 feature_gate.go:328] unrecognized feature gate: HighlyAvailableArbiter Dec 08 17:42:10 crc kubenswrapper[5116]: W1208 17:42:10.405819 5116 feature_gate.go:328] unrecognized feature gate: UpgradeStatus Dec 08 17:42:10 crc kubenswrapper[5116]: W1208 17:42:10.405823 5116 feature_gate.go:328] unrecognized feature gate: MachineAPIMigration Dec 08 17:42:10 crc kubenswrapper[5116]: W1208 17:42:10.405830 5116 feature_gate.go:328] unrecognized feature gate: GCPClusterHostedDNS Dec 08 17:42:10 crc kubenswrapper[5116]: W1208 17:42:10.405833 5116 feature_gate.go:328] unrecognized feature gate: ClusterAPIInstallIBMCloud Dec 08 17:42:10 crc kubenswrapper[5116]: W1208 17:42:10.405837 5116 feature_gate.go:328] unrecognized feature gate: OVNObservability Dec 08 17:42:10 crc kubenswrapper[5116]: W1208 17:42:10.405841 5116 feature_gate.go:328] unrecognized feature gate: AWSDedicatedHosts Dec 08 17:42:10 crc kubenswrapper[5116]: W1208 17:42:10.405844 5116 feature_gate.go:328] unrecognized feature gate: ExternalOIDCWithUIDAndExtraClaimMappings Dec 08 17:42:10 crc kubenswrapper[5116]: W1208 17:42:10.405848 5116 feature_gate.go:328] unrecognized feature gate: AWSClusterHostedDNSInstall Dec 08 17:42:10 crc kubenswrapper[5116]: W1208 17:42:10.405851 5116 feature_gate.go:328] unrecognized feature gate: VSphereMultiDisk Dec 08 17:42:10 crc kubenswrapper[5116]: W1208 17:42:10.405904 5116 feature_gate.go:328] unrecognized feature gate: VSphereMultiNetworks Dec 08 
17:42:10 crc kubenswrapper[5116]: W1208 17:42:10.405909 5116 feature_gate.go:328] unrecognized feature gate: GatewayAPI Dec 08 17:42:10 crc kubenswrapper[5116]: W1208 17:42:10.405913 5116 feature_gate.go:328] unrecognized feature gate: SignatureStores Dec 08 17:42:10 crc kubenswrapper[5116]: W1208 17:42:10.405917 5116 feature_gate.go:328] unrecognized feature gate: PinnedImages Dec 08 17:42:10 crc kubenswrapper[5116]: W1208 17:42:10.405921 5116 feature_gate.go:328] unrecognized feature gate: MetricsCollectionProfiles Dec 08 17:42:10 crc kubenswrapper[5116]: W1208 17:42:10.405925 5116 feature_gate.go:328] unrecognized feature gate: MachineConfigNodes Dec 08 17:42:10 crc kubenswrapper[5116]: W1208 17:42:10.405929 5116 feature_gate.go:328] unrecognized feature gate: DNSNameResolver Dec 08 17:42:10 crc kubenswrapper[5116]: W1208 17:42:10.405933 5116 feature_gate.go:328] unrecognized feature gate: InsightsConfig Dec 08 17:42:10 crc kubenswrapper[5116]: W1208 17:42:10.405937 5116 feature_gate.go:328] unrecognized feature gate: NetworkLiveMigration Dec 08 17:42:10 crc kubenswrapper[5116]: W1208 17:42:10.405940 5116 feature_gate.go:328] unrecognized feature gate: NetworkSegmentation Dec 08 17:42:10 crc kubenswrapper[5116]: W1208 17:42:10.405944 5116 feature_gate.go:328] unrecognized feature gate: ClusterAPIInstall Dec 08 17:42:10 crc kubenswrapper[5116]: W1208 17:42:10.405947 5116 feature_gate.go:328] unrecognized feature gate: ImageStreamImportMode Dec 08 17:42:10 crc kubenswrapper[5116]: W1208 17:42:10.405951 5116 feature_gate.go:328] unrecognized feature gate: IngressControllerLBSubnetsAWS Dec 08 17:42:10 crc kubenswrapper[5116]: W1208 17:42:10.405954 5116 feature_gate.go:328] unrecognized feature gate: AdminNetworkPolicy Dec 08 17:42:10 crc kubenswrapper[5116]: W1208 17:42:10.405958 5116 feature_gate.go:328] unrecognized feature gate: AWSServiceLBNetworkSecurityGroup Dec 08 17:42:10 crc kubenswrapper[5116]: W1208 17:42:10.405961 5116 feature_gate.go:328] unrecognized 
feature gate: Example Dec 08 17:42:10 crc kubenswrapper[5116]: W1208 17:42:10.405966 5116 feature_gate.go:351] Setting GA feature gate ServiceAccountTokenNodeBinding=true. It will be removed in a future release. Dec 08 17:42:10 crc kubenswrapper[5116]: W1208 17:42:10.405970 5116 feature_gate.go:328] unrecognized feature gate: ClusterMonitoringConfig Dec 08 17:42:10 crc kubenswrapper[5116]: W1208 17:42:10.405974 5116 feature_gate.go:328] unrecognized feature gate: OpenShiftPodSecurityAdmission Dec 08 17:42:10 crc kubenswrapper[5116]: W1208 17:42:10.405977 5116 feature_gate.go:328] unrecognized feature gate: NewOLMOwnSingleNamespace Dec 08 17:42:10 crc kubenswrapper[5116]: W1208 17:42:10.405982 5116 feature_gate.go:328] unrecognized feature gate: InsightsConfigAPI Dec 08 17:42:10 crc kubenswrapper[5116]: W1208 17:42:10.405986 5116 feature_gate.go:328] unrecognized feature gate: MultiDiskSetup Dec 08 17:42:10 crc kubenswrapper[5116]: W1208 17:42:10.405990 5116 feature_gate.go:328] unrecognized feature gate: NoRegistryClusterOperations Dec 08 17:42:10 crc kubenswrapper[5116]: W1208 17:42:10.405994 5116 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesvSphere Dec 08 17:42:10 crc kubenswrapper[5116]: W1208 17:42:10.405997 5116 feature_gate.go:328] unrecognized feature gate: KMSEncryptionProvider Dec 08 17:42:10 crc kubenswrapper[5116]: W1208 17:42:10.406003 5116 feature_gate.go:328] unrecognized feature gate: AlibabaPlatform Dec 08 17:42:10 crc kubenswrapper[5116]: W1208 17:42:10.406007 5116 feature_gate.go:328] unrecognized feature gate: AzureWorkloadIdentity Dec 08 17:42:10 crc kubenswrapper[5116]: W1208 17:42:10.406010 5116 feature_gate.go:328] unrecognized feature gate: AzureMultiDisk Dec 08 17:42:10 crc kubenswrapper[5116]: W1208 17:42:10.406015 5116 feature_gate.go:328] unrecognized feature gate: AWSClusterHostedDNS Dec 08 17:42:10 crc kubenswrapper[5116]: W1208 17:42:10.406019 5116 feature_gate.go:328] unrecognized feature gate: 
ExternalSnapshotMetadata Dec 08 17:42:10 crc kubenswrapper[5116]: W1208 17:42:10.406023 5116 feature_gate.go:328] unrecognized feature gate: IngressControllerDynamicConfigurationManager Dec 08 17:42:10 crc kubenswrapper[5116]: W1208 17:42:10.406026 5116 feature_gate.go:328] unrecognized feature gate: CPMSMachineNamePrefix Dec 08 17:42:10 crc kubenswrapper[5116]: W1208 17:42:10.406082 5116 feature_gate.go:328] unrecognized feature gate: SigstoreImageVerification Dec 08 17:42:10 crc kubenswrapper[5116]: W1208 17:42:10.406086 5116 feature_gate.go:328] unrecognized feature gate: SigstoreImageVerificationPKI Dec 08 17:42:10 crc kubenswrapper[5116]: W1208 17:42:10.406090 5116 feature_gate.go:328] unrecognized feature gate: VolumeGroupSnapshot Dec 08 17:42:10 crc kubenswrapper[5116]: W1208 17:42:10.406094 5116 feature_gate.go:328] unrecognized feature gate: InsightsOnDemandDataGather Dec 08 17:42:10 crc kubenswrapper[5116]: W1208 17:42:10.406097 5116 feature_gate.go:328] unrecognized feature gate: MultiArchInstallAzure Dec 08 17:42:10 crc kubenswrapper[5116]: W1208 17:42:10.406100 5116 feature_gate.go:328] unrecognized feature gate: VSphereConfigurableMaxAllowedBlockVolumesPerNode Dec 08 17:42:10 crc kubenswrapper[5116]: W1208 17:42:10.406104 5116 feature_gate.go:328] unrecognized feature gate: VSphereHostVMGroupZonal Dec 08 17:42:10 crc kubenswrapper[5116]: W1208 17:42:10.406107 5116 feature_gate.go:328] unrecognized feature gate: ManagedBootImages Dec 08 17:42:10 crc kubenswrapper[5116]: W1208 17:42:10.406111 5116 feature_gate.go:328] unrecognized feature gate: NetworkDiagnosticsConfig Dec 08 17:42:10 crc kubenswrapper[5116]: W1208 17:42:10.406114 5116 feature_gate.go:328] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Dec 08 17:42:10 crc kubenswrapper[5116]: W1208 17:42:10.406117 5116 feature_gate.go:328] unrecognized feature gate: PreconfiguredUDNAddresses Dec 08 17:42:10 crc kubenswrapper[5116]: W1208 17:42:10.406120 5116 
feature_gate.go:328] unrecognized feature gate: Example2 Dec 08 17:42:10 crc kubenswrapper[5116]: W1208 17:42:10.406124 5116 feature_gate.go:328] unrecognized feature gate: AzureClusterHostedDNSInstall Dec 08 17:42:10 crc kubenswrapper[5116]: W1208 17:42:10.406127 5116 feature_gate.go:328] unrecognized feature gate: MixedCPUsAllocation Dec 08 17:42:10 crc kubenswrapper[5116]: W1208 17:42:10.406131 5116 feature_gate.go:328] unrecognized feature gate: IrreconcilableMachineConfig Dec 08 17:42:10 crc kubenswrapper[5116]: W1208 17:42:10.406134 5116 feature_gate.go:328] unrecognized feature gate: NewOLM Dec 08 17:42:10 crc kubenswrapper[5116]: W1208 17:42:10.406138 5116 feature_gate.go:328] unrecognized feature gate: ConsolePluginContentSecurityPolicy Dec 08 17:42:10 crc kubenswrapper[5116]: W1208 17:42:10.406142 5116 feature_gate.go:328] unrecognized feature gate: AdditionalRoutingCapabilities Dec 08 17:42:10 crc kubenswrapper[5116]: W1208 17:42:10.406145 5116 feature_gate.go:328] unrecognized feature gate: GCPClusterHostedDNSInstall Dec 08 17:42:10 crc kubenswrapper[5116]: W1208 17:42:10.406148 5116 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesAWS Dec 08 17:42:10 crc kubenswrapper[5116]: W1208 17:42:10.406152 5116 feature_gate.go:328] unrecognized feature gate: NewOLMPreflightPermissionChecks Dec 08 17:42:10 crc kubenswrapper[5116]: W1208 17:42:10.406156 5116 feature_gate.go:328] unrecognized feature gate: BuildCSIVolumes Dec 08 17:42:10 crc kubenswrapper[5116]: W1208 17:42:10.406161 5116 feature_gate.go:349] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. 
Dec 08 17:42:10 crc kubenswrapper[5116]: W1208 17:42:10.406165 5116 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesAzure
Dec 08 17:42:10 crc kubenswrapper[5116]: W1208 17:42:10.406169 5116 feature_gate.go:328] unrecognized feature gate: SetEIPForNLBIngressController
Dec 08 17:42:10 crc kubenswrapper[5116]: I1208 17:42:10.406366 5116 feature_gate.go:384] feature gates: {map[DynamicResourceAllocation:false EventedPLEG:false ImageVolume:true KMSv1:true MaxUnavailableStatefulSet:false MinimumKubeletVersion:false MutatingAdmissionPolicy:false NodeSwap:false ProcMountType:true RouteExternalCertificate:true SELinuxMount:false ServiceAccountTokenNodeBinding:true StoragePerformantSecurityPolicy:true TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:true UserNamespacesSupport:true VolumeAttributesClass:false]}
Dec 08 17:42:10 crc kubenswrapper[5116]: I1208 17:42:10.417701 5116 server.go:530] "Kubelet version" kubeletVersion="v1.33.5"
Dec 08 17:42:10 crc kubenswrapper[5116]: I1208 17:42:10.417796 5116 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Dec 08 17:42:10 crc kubenswrapper[5116]: W1208 17:42:10.417863 5116 feature_gate.go:328] unrecognized feature gate: NewOLMCatalogdAPIV1Metas
Dec 08 17:42:10 crc kubenswrapper[5116]: W1208 17:42:10.417874 5116 feature_gate.go:328] unrecognized feature gate: VSphereMixedNodeEnv
Dec 08 17:42:10 crc kubenswrapper[5116]: W1208 17:42:10.417879 5116 feature_gate.go:328] unrecognized feature gate: IrreconcilableMachineConfig
Dec 08 17:42:10 crc kubenswrapper[5116]: W1208 17:42:10.417884 5116 feature_gate.go:328] unrecognized feature gate: AzureWorkloadIdentity
Dec 08 17:42:10 crc kubenswrapper[5116]: W1208 17:42:10.417889 5116 feature_gate.go:328] unrecognized feature gate: ExternalSnapshotMetadata
Dec 08 17:42:10 crc kubenswrapper[5116]: W1208 17:42:10.417894 5116 feature_gate.go:328] unrecognized feature gate: ImageModeStatusReporting
Dec 08 17:42:10 crc kubenswrapper[5116]: W1208 17:42:10.417900 5116 feature_gate.go:328] unrecognized feature gate: AlibabaPlatform
Dec 08 17:42:10 crc kubenswrapper[5116]: W1208 17:42:10.417905 5116 feature_gate.go:328] unrecognized feature gate: Example2
Dec 08 17:42:10 crc kubenswrapper[5116]: W1208 17:42:10.417909 5116 feature_gate.go:328] unrecognized feature gate: InsightsConfig
Dec 08 17:42:10 crc kubenswrapper[5116]: W1208 17:42:10.417913 5116 feature_gate.go:328] unrecognized feature gate: VSphereConfigurableMaxAllowedBlockVolumesPerNode
Dec 08 17:42:10 crc kubenswrapper[5116]: W1208 17:42:10.417918 5116 feature_gate.go:328] unrecognized feature gate: ImageStreamImportMode
Dec 08 17:42:10 crc kubenswrapper[5116]: W1208 17:42:10.417922 5116 feature_gate.go:328] unrecognized feature gate: OVNObservability
Dec 08 17:42:10 crc kubenswrapper[5116]: W1208 17:42:10.417927 5116 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesvSphere
Dec 08 17:42:10 crc kubenswrapper[5116]: W1208 17:42:10.417930 5116 feature_gate.go:328] unrecognized feature gate: BootcNodeManagement
Dec 08 17:42:10 crc kubenswrapper[5116]: W1208 17:42:10.417933 5116 feature_gate.go:328] unrecognized feature gate: CPMSMachineNamePrefix
Dec 08 17:42:10 crc kubenswrapper[5116]: W1208 17:42:10.417938 5116 feature_gate.go:349] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Dec 08 17:42:10 crc kubenswrapper[5116]: W1208 17:42:10.417947 5116 feature_gate.go:328] unrecognized feature gate: DyanmicServiceEndpointIBMCloud
Dec 08 17:42:10 crc kubenswrapper[5116]: W1208 17:42:10.417952 5116 feature_gate.go:328] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Dec 08 17:42:10 crc kubenswrapper[5116]: W1208 17:42:10.417957 5116 feature_gate.go:328] unrecognized feature gate: NoRegistryClusterOperations
Dec 08 17:42:10 crc kubenswrapper[5116]: W1208 17:42:10.417964 5116 feature_gate.go:351] Setting GA feature gate ServiceAccountTokenNodeBinding=true. It will be removed in a future release.
Dec 08 17:42:10 crc kubenswrapper[5116]: W1208 17:42:10.417971 5116 feature_gate.go:328] unrecognized feature gate: VSphereMultiDisk
Dec 08 17:42:10 crc kubenswrapper[5116]: W1208 17:42:10.417977 5116 feature_gate.go:328] unrecognized feature gate: ExternalOIDCWithUIDAndExtraClaimMappings
Dec 08 17:42:10 crc kubenswrapper[5116]: W1208 17:42:10.417983 5116 feature_gate.go:328] unrecognized feature gate: GatewayAPI
Dec 08 17:42:10 crc kubenswrapper[5116]: W1208 17:42:10.417990 5116 feature_gate.go:328] unrecognized feature gate: IngressControllerLBSubnetsAWS
Dec 08 17:42:10 crc kubenswrapper[5116]: W1208 17:42:10.417995 5116 feature_gate.go:328] unrecognized feature gate: MultiDiskSetup
Dec 08 17:42:10 crc kubenswrapper[5116]: W1208 17:42:10.418000 5116 feature_gate.go:328] unrecognized feature gate: DNSNameResolver
Dec 08 17:42:10 crc kubenswrapper[5116]: W1208 17:42:10.418006 5116 feature_gate.go:328] unrecognized feature gate: Example
Dec 08 17:42:10 crc kubenswrapper[5116]: W1208 17:42:10.418010 5116 feature_gate.go:328] unrecognized feature gate: HighlyAvailableArbiter
Dec 08 17:42:10 crc kubenswrapper[5116]: W1208 17:42:10.418015 5116 feature_gate.go:328] unrecognized feature gate: NewOLMOwnSingleNamespace
Dec 08 17:42:10 crc kubenswrapper[5116]: W1208 17:42:10.418020 5116 feature_gate.go:328] unrecognized feature gate: GCPCustomAPIEndpoints
Dec 08 17:42:10 crc kubenswrapper[5116]: W1208 17:42:10.418025 5116 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesAzure
Dec 08 17:42:10 crc kubenswrapper[5116]: W1208 17:42:10.418031 5116 feature_gate.go:328] unrecognized feature gate: VSphereHostVMGroupZonal
Dec 08 17:42:10 crc kubenswrapper[5116]: W1208 17:42:10.418035 5116 feature_gate.go:328] unrecognized feature gate: PreconfiguredUDNAddresses
Dec 08 17:42:10 crc kubenswrapper[5116]: W1208 17:42:10.418041 5116 feature_gate.go:328] unrecognized feature gate: BootImageSkewEnforcement
Dec 08 17:42:10 crc kubenswrapper[5116]: W1208 17:42:10.418045 5116 feature_gate.go:328] unrecognized feature gate: ClusterAPIInstall
Dec 08 17:42:10 crc kubenswrapper[5116]: W1208 17:42:10.418049 5116 feature_gate.go:328] unrecognized feature gate: RouteAdvertisements
Dec 08 17:42:10 crc kubenswrapper[5116]: W1208 17:42:10.418053 5116 feature_gate.go:328] unrecognized feature gate: GCPCustomAPIEndpointsInstall
Dec 08 17:42:10 crc kubenswrapper[5116]: W1208 17:42:10.418058 5116 feature_gate.go:328] unrecognized feature gate: InsightsConfigAPI
Dec 08 17:42:10 crc kubenswrapper[5116]: W1208 17:42:10.418063 5116 feature_gate.go:328] unrecognized feature gate: NetworkDiagnosticsConfig
Dec 08 17:42:10 crc kubenswrapper[5116]: W1208 17:42:10.418067 5116 feature_gate.go:328] unrecognized feature gate: InsightsOnDemandDataGather
Dec 08 17:42:10 crc kubenswrapper[5116]: W1208 17:42:10.418071 5116 feature_gate.go:328] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Dec 08 17:42:10 crc kubenswrapper[5116]: W1208 17:42:10.418076 5116 feature_gate.go:328] unrecognized feature gate: AdditionalRoutingCapabilities
Dec 08 17:42:10 crc kubenswrapper[5116]: W1208 17:42:10.418080 5116 feature_gate.go:328] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Dec 08 17:42:10 crc kubenswrapper[5116]: W1208 17:42:10.418084 5116 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesAWS
Dec 08 17:42:10 crc kubenswrapper[5116]: W1208 17:42:10.418089 5116 feature_gate.go:328] unrecognized feature gate: VSphereMultiNetworks
Dec 08 17:42:10 crc kubenswrapper[5116]: W1208 17:42:10.418093 5116 feature_gate.go:328] unrecognized feature gate: AutomatedEtcdBackup
Dec 08 17:42:10 crc kubenswrapper[5116]: W1208 17:42:10.418097 5116 feature_gate.go:328] unrecognized feature gate: AWSClusterHostedDNS
Dec 08 17:42:10 crc kubenswrapper[5116]: W1208 17:42:10.418101 5116 feature_gate.go:328] unrecognized feature gate: GatewayAPIController
Dec 08 17:42:10 crc kubenswrapper[5116]: W1208 17:42:10.418106 5116 feature_gate.go:328] unrecognized feature gate: UpgradeStatus
Dec 08 17:42:10 crc kubenswrapper[5116]: W1208 17:42:10.418110 5116 feature_gate.go:328] unrecognized feature gate: MixedCPUsAllocation
Dec 08 17:42:10 crc kubenswrapper[5116]: W1208 17:42:10.418115 5116 feature_gate.go:328] unrecognized feature gate: MetricsCollectionProfiles
Dec 08 17:42:10 crc kubenswrapper[5116]: W1208 17:42:10.418119 5116 feature_gate.go:328] unrecognized feature gate: ShortCertRotation
Dec 08 17:42:10 crc kubenswrapper[5116]: W1208 17:42:10.418130 5116 feature_gate.go:328] unrecognized feature gate: AzureDedicatedHosts
Dec 08 17:42:10 crc kubenswrapper[5116]: W1208 17:42:10.418136 5116 feature_gate.go:328] unrecognized feature gate: AdminNetworkPolicy
Dec 08 17:42:10 crc kubenswrapper[5116]: W1208 17:42:10.418141 5116 feature_gate.go:328] unrecognized feature gate: BuildCSIVolumes
Dec 08 17:42:10 crc kubenswrapper[5116]: W1208 17:42:10.418146 5116 feature_gate.go:328] unrecognized feature gate: ManagedBootImages
Dec 08 17:42:10 crc kubenswrapper[5116]: W1208 17:42:10.418150 5116 feature_gate.go:328] unrecognized feature gate: GCPClusterHostedDNSInstall
Dec 08 17:42:10 crc kubenswrapper[5116]: W1208 17:42:10.418155 5116 feature_gate.go:328] unrecognized feature gate: NewOLMWebhookProviderOpenshiftServiceCA
Dec 08 17:42:10 crc kubenswrapper[5116]: W1208 17:42:10.418159 5116 feature_gate.go:328] unrecognized feature gate: ExternalOIDC
Dec 08 17:42:10 crc kubenswrapper[5116]: W1208 17:42:10.418165 5116 feature_gate.go:328] unrecognized feature gate: NewOLM
Dec 08 17:42:10 crc kubenswrapper[5116]: W1208 17:42:10.418169 5116 feature_gate.go:328] unrecognized feature gate: AzureMultiDisk
Dec 08 17:42:10 crc kubenswrapper[5116]: W1208 17:42:10.418174 5116 feature_gate.go:328] unrecognized feature gate: GCPClusterHostedDNS
Dec 08 17:42:10 crc kubenswrapper[5116]: W1208 17:42:10.418179 5116 feature_gate.go:328] unrecognized feature gate: SigstoreImageVerificationPKI
Dec 08 17:42:10 crc kubenswrapper[5116]: W1208 17:42:10.418183 5116 feature_gate.go:328] unrecognized feature gate: AWSServiceLBNetworkSecurityGroup
Dec 08 17:42:10 crc kubenswrapper[5116]: W1208 17:42:10.418189 5116 feature_gate.go:328] unrecognized feature gate: MultiArchInstallAzure
Dec 08 17:42:10 crc kubenswrapper[5116]: W1208 17:42:10.418194 5116 feature_gate.go:328] unrecognized feature gate: AWSDedicatedHosts
Dec 08 17:42:10 crc kubenswrapper[5116]: W1208 17:42:10.418198 5116 feature_gate.go:328] unrecognized feature gate: DualReplica
Dec 08 17:42:10 crc kubenswrapper[5116]: W1208 17:42:10.418203 5116 feature_gate.go:328] unrecognized feature gate: SetEIPForNLBIngressController
Dec 08 17:42:10 crc kubenswrapper[5116]: W1208 17:42:10.418207 5116 feature_gate.go:328] unrecognized feature gate: NutanixMultiSubnets
Dec 08 17:42:10 crc kubenswrapper[5116]: W1208 17:42:10.418212 5116 feature_gate.go:328] unrecognized feature gate: ClusterVersionOperatorConfiguration
Dec 08 17:42:10 crc kubenswrapper[5116]: W1208 17:42:10.418216 5116 feature_gate.go:328] unrecognized feature gate: EtcdBackendQuota
Dec 08 17:42:10 crc kubenswrapper[5116]: W1208 17:42:10.418221 5116 feature_gate.go:328] unrecognized feature gate: NewOLMPreflightPermissionChecks
Dec 08 17:42:10 crc kubenswrapper[5116]: W1208 17:42:10.418225 5116 feature_gate.go:328] unrecognized feature gate: NetworkSegmentation
Dec 08 17:42:10 crc kubenswrapper[5116]: W1208 17:42:10.418230 5116 feature_gate.go:328] unrecognized feature gate: MachineAPIMigration
Dec 08 17:42:10 crc kubenswrapper[5116]: W1208 17:42:10.418234 5116 feature_gate.go:328] unrecognized feature gate: OpenShiftPodSecurityAdmission
Dec 08 17:42:10 crc kubenswrapper[5116]: W1208 17:42:10.418238 5116 feature_gate.go:328] unrecognized feature gate: KMSEncryptionProvider
Dec 08 17:42:10 crc kubenswrapper[5116]: W1208 17:42:10.418268 5116 feature_gate.go:328] unrecognized feature gate: NetworkLiveMigration
Dec 08 17:42:10 crc kubenswrapper[5116]: W1208 17:42:10.418274 5116 feature_gate.go:328] unrecognized feature gate: SignatureStores
Dec 08 17:42:10 crc kubenswrapper[5116]: W1208 17:42:10.418278 5116 feature_gate.go:328] unrecognized feature gate: ClusterMonitoringConfig
Dec 08 17:42:10 crc kubenswrapper[5116]: W1208 17:42:10.418284 5116 feature_gate.go:328] unrecognized feature gate: VolumeGroupSnapshot
Dec 08 17:42:10 crc kubenswrapper[5116]: W1208 17:42:10.418289 5116 feature_gate.go:328] unrecognized feature gate: MachineConfigNodes
Dec 08 17:42:10 crc kubenswrapper[5116]: W1208 17:42:10.418294 5116 feature_gate.go:328] unrecognized feature gate: AWSClusterHostedDNSInstall
Dec 08 17:42:10 crc kubenswrapper[5116]: W1208 17:42:10.418298 5116 feature_gate.go:328] unrecognized feature gate: ClusterAPIInstallIBMCloud
Dec 08 17:42:10 crc kubenswrapper[5116]: W1208 17:42:10.418302 5116 feature_gate.go:328] unrecognized feature gate: PinnedImages
Dec 08 17:42:10 crc kubenswrapper[5116]: W1208 17:42:10.418308 5116 feature_gate.go:328] unrecognized feature gate: SigstoreImageVerification
Dec 08 17:42:10 crc kubenswrapper[5116]: W1208 17:42:10.418313 5116 feature_gate.go:328] unrecognized feature gate: AzureClusterHostedDNSInstall
Dec 08 17:42:10 crc kubenswrapper[5116]: I1208 17:42:10.418324 5116 feature_gate.go:384] feature gates: {map[DynamicResourceAllocation:false EventedPLEG:false ImageVolume:true
KMSv1:true MaxUnavailableStatefulSet:false MinimumKubeletVersion:false MutatingAdmissionPolicy:false NodeSwap:false ProcMountType:true RouteExternalCertificate:true SELinuxMount:false ServiceAccountTokenNodeBinding:true StoragePerformantSecurityPolicy:true TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:true UserNamespacesSupport:true VolumeAttributesClass:false]} Dec 08 17:42:10 crc kubenswrapper[5116]: W1208 17:42:10.418475 5116 feature_gate.go:351] Setting GA feature gate ServiceAccountTokenNodeBinding=true. It will be removed in a future release. Dec 08 17:42:10 crc kubenswrapper[5116]: W1208 17:42:10.418487 5116 feature_gate.go:328] unrecognized feature gate: NetworkLiveMigration Dec 08 17:42:10 crc kubenswrapper[5116]: W1208 17:42:10.418492 5116 feature_gate.go:328] unrecognized feature gate: AutomatedEtcdBackup Dec 08 17:42:10 crc kubenswrapper[5116]: W1208 17:42:10.418497 5116 feature_gate.go:328] unrecognized feature gate: BootcNodeManagement Dec 08 17:42:10 crc kubenswrapper[5116]: W1208 17:42:10.418502 5116 feature_gate.go:328] unrecognized feature gate: DNSNameResolver Dec 08 17:42:10 crc kubenswrapper[5116]: W1208 17:42:10.418506 5116 feature_gate.go:328] unrecognized feature gate: VSphereMultiDisk Dec 08 17:42:10 crc kubenswrapper[5116]: W1208 17:42:10.418511 5116 feature_gate.go:328] unrecognized feature gate: MachineConfigNodes Dec 08 17:42:10 crc kubenswrapper[5116]: W1208 17:42:10.418516 5116 feature_gate.go:328] unrecognized feature gate: VolumeGroupSnapshot Dec 08 17:42:10 crc kubenswrapper[5116]: W1208 17:42:10.418521 5116 feature_gate.go:328] unrecognized feature gate: HighlyAvailableArbiter Dec 08 17:42:10 crc kubenswrapper[5116]: W1208 17:42:10.418525 5116 feature_gate.go:328] unrecognized feature gate: AWSServiceLBNetworkSecurityGroup Dec 08 17:42:10 crc kubenswrapper[5116]: W1208 17:42:10.418531 5116 feature_gate.go:328] unrecognized feature gate: BootImageSkewEnforcement Dec 08 17:42:10 crc 
kubenswrapper[5116]: W1208 17:42:10.418535 5116 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesAWS Dec 08 17:42:10 crc kubenswrapper[5116]: W1208 17:42:10.418540 5116 feature_gate.go:328] unrecognized feature gate: ShortCertRotation Dec 08 17:42:10 crc kubenswrapper[5116]: W1208 17:42:10.418544 5116 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesvSphere Dec 08 17:42:10 crc kubenswrapper[5116]: W1208 17:42:10.418548 5116 feature_gate.go:328] unrecognized feature gate: AWSDedicatedHosts Dec 08 17:42:10 crc kubenswrapper[5116]: W1208 17:42:10.418553 5116 feature_gate.go:328] unrecognized feature gate: NetworkSegmentation Dec 08 17:42:10 crc kubenswrapper[5116]: W1208 17:42:10.418557 5116 feature_gate.go:328] unrecognized feature gate: VSphereMixedNodeEnv Dec 08 17:42:10 crc kubenswrapper[5116]: W1208 17:42:10.418561 5116 feature_gate.go:328] unrecognized feature gate: IngressControllerLBSubnetsAWS Dec 08 17:42:10 crc kubenswrapper[5116]: W1208 17:42:10.418567 5116 feature_gate.go:328] unrecognized feature gate: ImageStreamImportMode Dec 08 17:42:10 crc kubenswrapper[5116]: W1208 17:42:10.418572 5116 feature_gate.go:328] unrecognized feature gate: ClusterVersionOperatorConfiguration Dec 08 17:42:10 crc kubenswrapper[5116]: W1208 17:42:10.418578 5116 feature_gate.go:328] unrecognized feature gate: AzureWorkloadIdentity Dec 08 17:42:10 crc kubenswrapper[5116]: W1208 17:42:10.418584 5116 feature_gate.go:328] unrecognized feature gate: GatewayAPI Dec 08 17:42:10 crc kubenswrapper[5116]: W1208 17:42:10.418589 5116 feature_gate.go:328] unrecognized feature gate: ClusterAPIInstall Dec 08 17:42:10 crc kubenswrapper[5116]: W1208 17:42:10.418595 5116 feature_gate.go:328] unrecognized feature gate: ClusterAPIInstallIBMCloud Dec 08 17:42:10 crc kubenswrapper[5116]: W1208 17:42:10.418601 5116 feature_gate.go:328] unrecognized feature gate: ManagedBootImages Dec 08 17:42:10 crc kubenswrapper[5116]: W1208 17:42:10.418606 5116 feature_gate.go:328] 
unrecognized feature gate: PinnedImages Dec 08 17:42:10 crc kubenswrapper[5116]: W1208 17:42:10.418611 5116 feature_gate.go:328] unrecognized feature gate: DyanmicServiceEndpointIBMCloud Dec 08 17:42:10 crc kubenswrapper[5116]: W1208 17:42:10.418615 5116 feature_gate.go:328] unrecognized feature gate: MixedCPUsAllocation Dec 08 17:42:10 crc kubenswrapper[5116]: W1208 17:42:10.418620 5116 feature_gate.go:328] unrecognized feature gate: ImageModeStatusReporting Dec 08 17:42:10 crc kubenswrapper[5116]: W1208 17:42:10.418625 5116 feature_gate.go:328] unrecognized feature gate: NewOLM Dec 08 17:42:10 crc kubenswrapper[5116]: W1208 17:42:10.418630 5116 feature_gate.go:328] unrecognized feature gate: MetricsCollectionProfiles Dec 08 17:42:10 crc kubenswrapper[5116]: W1208 17:42:10.418635 5116 feature_gate.go:328] unrecognized feature gate: UpgradeStatus Dec 08 17:42:10 crc kubenswrapper[5116]: W1208 17:42:10.418640 5116 feature_gate.go:328] unrecognized feature gate: SigstoreImageVerificationPKI Dec 08 17:42:10 crc kubenswrapper[5116]: W1208 17:42:10.418644 5116 feature_gate.go:328] unrecognized feature gate: AWSClusterHostedDNSInstall Dec 08 17:42:10 crc kubenswrapper[5116]: W1208 17:42:10.418649 5116 feature_gate.go:328] unrecognized feature gate: AzureClusterHostedDNSInstall Dec 08 17:42:10 crc kubenswrapper[5116]: W1208 17:42:10.418653 5116 feature_gate.go:328] unrecognized feature gate: IrreconcilableMachineConfig Dec 08 17:42:10 crc kubenswrapper[5116]: W1208 17:42:10.418657 5116 feature_gate.go:328] unrecognized feature gate: AdditionalRoutingCapabilities Dec 08 17:42:10 crc kubenswrapper[5116]: W1208 17:42:10.418662 5116 feature_gate.go:328] unrecognized feature gate: NewOLMCatalogdAPIV1Metas Dec 08 17:42:10 crc kubenswrapper[5116]: W1208 17:42:10.418666 5116 feature_gate.go:328] unrecognized feature gate: ExternalOIDC Dec 08 17:42:10 crc kubenswrapper[5116]: W1208 17:42:10.418670 5116 feature_gate.go:328] unrecognized feature gate: BuildCSIVolumes Dec 08 17:42:10 
crc kubenswrapper[5116]: W1208 17:42:10.418674 5116 feature_gate.go:328] unrecognized feature gate: NutanixMultiSubnets Dec 08 17:42:10 crc kubenswrapper[5116]: W1208 17:42:10.418678 5116 feature_gate.go:328] unrecognized feature gate: NewOLMPreflightPermissionChecks Dec 08 17:42:10 crc kubenswrapper[5116]: W1208 17:42:10.418682 5116 feature_gate.go:328] unrecognized feature gate: SignatureStores Dec 08 17:42:10 crc kubenswrapper[5116]: W1208 17:42:10.418697 5116 feature_gate.go:328] unrecognized feature gate: ExternalSnapshotMetadata Dec 08 17:42:10 crc kubenswrapper[5116]: W1208 17:42:10.418702 5116 feature_gate.go:328] unrecognized feature gate: KMSEncryptionProvider Dec 08 17:42:10 crc kubenswrapper[5116]: W1208 17:42:10.418706 5116 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesAzure Dec 08 17:42:10 crc kubenswrapper[5116]: W1208 17:42:10.418710 5116 feature_gate.go:328] unrecognized feature gate: NewOLMOwnSingleNamespace Dec 08 17:42:10 crc kubenswrapper[5116]: W1208 17:42:10.418714 5116 feature_gate.go:328] unrecognized feature gate: AzureMultiDisk Dec 08 17:42:10 crc kubenswrapper[5116]: W1208 17:42:10.418719 5116 feature_gate.go:328] unrecognized feature gate: VSphereConfigurableMaxAllowedBlockVolumesPerNode Dec 08 17:42:10 crc kubenswrapper[5116]: W1208 17:42:10.418723 5116 feature_gate.go:328] unrecognized feature gate: ExternalOIDCWithUIDAndExtraClaimMappings Dec 08 17:42:10 crc kubenswrapper[5116]: W1208 17:42:10.418728 5116 feature_gate.go:328] unrecognized feature gate: InsightsConfigAPI Dec 08 17:42:10 crc kubenswrapper[5116]: W1208 17:42:10.418732 5116 feature_gate.go:328] unrecognized feature gate: RouteAdvertisements Dec 08 17:42:10 crc kubenswrapper[5116]: W1208 17:42:10.418738 5116 feature_gate.go:349] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. 
Dec 08 17:42:10 crc kubenswrapper[5116]: W1208 17:42:10.418744 5116 feature_gate.go:328] unrecognized feature gate: Example Dec 08 17:42:10 crc kubenswrapper[5116]: W1208 17:42:10.418749 5116 feature_gate.go:328] unrecognized feature gate: ConsolePluginContentSecurityPolicy Dec 08 17:42:10 crc kubenswrapper[5116]: W1208 17:42:10.418753 5116 feature_gate.go:328] unrecognized feature gate: SetEIPForNLBIngressController Dec 08 17:42:10 crc kubenswrapper[5116]: W1208 17:42:10.418757 5116 feature_gate.go:328] unrecognized feature gate: Example2 Dec 08 17:42:10 crc kubenswrapper[5116]: W1208 17:42:10.418762 5116 feature_gate.go:328] unrecognized feature gate: CPMSMachineNamePrefix Dec 08 17:42:10 crc kubenswrapper[5116]: W1208 17:42:10.418766 5116 feature_gate.go:328] unrecognized feature gate: NoRegistryClusterOperations Dec 08 17:42:10 crc kubenswrapper[5116]: W1208 17:42:10.418770 5116 feature_gate.go:328] unrecognized feature gate: DualReplica Dec 08 17:42:10 crc kubenswrapper[5116]: W1208 17:42:10.418774 5116 feature_gate.go:328] unrecognized feature gate: GatewayAPIController Dec 08 17:42:10 crc kubenswrapper[5116]: W1208 17:42:10.418779 5116 feature_gate.go:328] unrecognized feature gate: GCPCustomAPIEndpoints Dec 08 17:42:10 crc kubenswrapper[5116]: W1208 17:42:10.418784 5116 feature_gate.go:328] unrecognized feature gate: SigstoreImageVerification Dec 08 17:42:10 crc kubenswrapper[5116]: W1208 17:42:10.418789 5116 feature_gate.go:328] unrecognized feature gate: ClusterMonitoringConfig Dec 08 17:42:10 crc kubenswrapper[5116]: W1208 17:42:10.418793 5116 feature_gate.go:328] unrecognized feature gate: IngressControllerDynamicConfigurationManager Dec 08 17:42:10 crc kubenswrapper[5116]: W1208 17:42:10.418798 5116 feature_gate.go:328] unrecognized feature gate: EtcdBackendQuota Dec 08 17:42:10 crc kubenswrapper[5116]: W1208 17:42:10.418802 5116 feature_gate.go:328] unrecognized feature gate: PreconfiguredUDNAddresses Dec 08 17:42:10 crc kubenswrapper[5116]: W1208 
17:42:10.418806 5116 feature_gate.go:328] unrecognized feature gate: MachineAPIMigration Dec 08 17:42:10 crc kubenswrapper[5116]: W1208 17:42:10.418810 5116 feature_gate.go:328] unrecognized feature gate: VSphereMultiNetworks Dec 08 17:42:10 crc kubenswrapper[5116]: W1208 17:42:10.418815 5116 feature_gate.go:328] unrecognized feature gate: MultiArchInstallAzure Dec 08 17:42:10 crc kubenswrapper[5116]: W1208 17:42:10.418819 5116 feature_gate.go:328] unrecognized feature gate: AzureDedicatedHosts Dec 08 17:42:10 crc kubenswrapper[5116]: W1208 17:42:10.418823 5116 feature_gate.go:328] unrecognized feature gate: GCPClusterHostedDNSInstall Dec 08 17:42:10 crc kubenswrapper[5116]: W1208 17:42:10.418828 5116 feature_gate.go:328] unrecognized feature gate: AWSClusterHostedDNS Dec 08 17:42:10 crc kubenswrapper[5116]: W1208 17:42:10.418832 5116 feature_gate.go:328] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Dec 08 17:42:10 crc kubenswrapper[5116]: W1208 17:42:10.418837 5116 feature_gate.go:328] unrecognized feature gate: InsightsOnDemandDataGather Dec 08 17:42:10 crc kubenswrapper[5116]: W1208 17:42:10.418841 5116 feature_gate.go:328] unrecognized feature gate: MultiDiskSetup Dec 08 17:42:10 crc kubenswrapper[5116]: W1208 17:42:10.418849 5116 feature_gate.go:328] unrecognized feature gate: OVNObservability Dec 08 17:42:10 crc kubenswrapper[5116]: W1208 17:42:10.418853 5116 feature_gate.go:328] unrecognized feature gate: NewOLMWebhookProviderOpenshiftServiceCA Dec 08 17:42:10 crc kubenswrapper[5116]: W1208 17:42:10.418857 5116 feature_gate.go:328] unrecognized feature gate: NetworkDiagnosticsConfig Dec 08 17:42:10 crc kubenswrapper[5116]: W1208 17:42:10.418862 5116 feature_gate.go:328] unrecognized feature gate: GCPCustomAPIEndpointsInstall Dec 08 17:42:10 crc kubenswrapper[5116]: W1208 17:42:10.418867 5116 feature_gate.go:328] unrecognized feature gate: GCPClusterHostedDNS Dec 08 17:42:10 crc kubenswrapper[5116]: W1208 17:42:10.418871 
5116 feature_gate.go:328] unrecognized feature gate: AlibabaPlatform Dec 08 17:42:10 crc kubenswrapper[5116]: W1208 17:42:10.418875 5116 feature_gate.go:328] unrecognized feature gate: VSphereHostVMGroupZonal Dec 08 17:42:10 crc kubenswrapper[5116]: W1208 17:42:10.418881 5116 feature_gate.go:328] unrecognized feature gate: OpenShiftPodSecurityAdmission Dec 08 17:42:10 crc kubenswrapper[5116]: W1208 17:42:10.418885 5116 feature_gate.go:328] unrecognized feature gate: InsightsConfig Dec 08 17:42:10 crc kubenswrapper[5116]: W1208 17:42:10.418891 5116 feature_gate.go:328] unrecognized feature gate: AdminNetworkPolicy Dec 08 17:42:10 crc kubenswrapper[5116]: I1208 17:42:10.418900 5116 feature_gate.go:384] feature gates: {map[DynamicResourceAllocation:false EventedPLEG:false ImageVolume:true KMSv1:true MaxUnavailableStatefulSet:false MinimumKubeletVersion:false MutatingAdmissionPolicy:false NodeSwap:false ProcMountType:true RouteExternalCertificate:true SELinuxMount:false ServiceAccountTokenNodeBinding:true StoragePerformantSecurityPolicy:true TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:true UserNamespacesSupport:true VolumeAttributesClass:false]} Dec 08 17:42:10 crc kubenswrapper[5116]: I1208 17:42:10.419394 5116 server.go:962] "Client rotation is on, will bootstrap in background" Dec 08 17:42:10 crc kubenswrapper[5116]: E1208 17:42:10.422674 5116 bootstrap.go:266] "Unhandled Error" err="part of the existing bootstrap client certificate in /var/lib/kubelet/kubeconfig is expired: 2025-12-03 08:27:53 +0000 UTC" logger="UnhandledError" Dec 08 17:42:10 crc kubenswrapper[5116]: I1208 17:42:10.426329 5116 bootstrap.go:101] "Use the bootstrap credentials to request a cert, and set kubeconfig to point to the certificate dir" Dec 08 17:42:10 crc kubenswrapper[5116]: I1208 17:42:10.426514 5116 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Dec 08 17:42:10 crc 
kubenswrapper[5116]: I1208 17:42:10.427168 5116 server.go:1019] "Starting client certificate rotation" Dec 08 17:42:10 crc kubenswrapper[5116]: I1208 17:42:10.427437 5116 certificate_manager.go:422] "Certificate rotation is enabled" logger="kubernetes.io/kube-apiserver-client-kubelet" Dec 08 17:42:10 crc kubenswrapper[5116]: I1208 17:42:10.427543 5116 certificate_manager.go:566] "Rotating certificates" logger="kubernetes.io/kube-apiserver-client-kubelet" Dec 08 17:42:10 crc kubenswrapper[5116]: I1208 17:42:10.445228 5116 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Dec 08 17:42:10 crc kubenswrapper[5116]: E1208 17:42:10.446852 5116 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 38.102.83.128:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Dec 08 17:42:10 crc kubenswrapper[5116]: I1208 17:42:10.447234 5116 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Dec 08 17:42:10 crc kubenswrapper[5116]: I1208 17:42:10.470162 5116 log.go:25] "Validated CRI v1 runtime API" Dec 08 17:42:10 crc kubenswrapper[5116]: I1208 17:42:10.495796 5116 log.go:25] "Validated CRI v1 image API" Dec 08 17:42:10 crc kubenswrapper[5116]: I1208 17:42:10.498897 5116 server.go:1452] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Dec 08 17:42:10 crc kubenswrapper[5116]: I1208 17:42:10.503988 5116 fs.go:135] Filesystem UUIDs: map[19e76f87-96b8-4794-9744-0b33dca22d5b:/dev/vda3 2025-12-08-17-36-01-00:/dev/sr0 5eb7c122-420e-4494-80ec-41664070d7b6:/dev/vda4 7B77-95E7:/dev/vda2] Dec 08 17:42:10 crc kubenswrapper[5116]: I1208 17:42:10.504115 5116 fs.go:136] Filesystem 
partitions: map[/dev/shm:{mountpoint:/dev/shm major:0 minor:22 fsType:tmpfs blockSize:0} /dev/vda3:{mountpoint:/boot major:252 minor:3 fsType:ext4 blockSize:0} /dev/vda4:{mountpoint:/var major:252 minor:4 fsType:xfs blockSize:0} /run:{mountpoint:/run major:0 minor:24 fsType:tmpfs blockSize:0} /run/user/1000:{mountpoint:/run/user/1000 major:0 minor:45 fsType:tmpfs blockSize:0} /tmp:{mountpoint:/tmp major:0 minor:31 fsType:tmpfs blockSize:0} /var/lib/etcd:{mountpoint:/var/lib/etcd major:0 minor:44 fsType:tmpfs blockSize:0} composefs_0-33:{mountpoint:/ major:0 minor:33 fsType:overlay blockSize:0}] Dec 08 17:42:10 crc kubenswrapper[5116]: I1208 17:42:10.533347 5116 manager.go:217] Machine: {Timestamp:2025-12-08 17:42:10.531936528 +0000 UTC m=+0.329059782 CPUVendorID:AuthenticAMD NumCores:12 NumPhysicalCores:1 NumSockets:12 CpuFrequency:2799998 MemoryCapacity:33649926144 SwapCapacity:0 MemoryByType:map[] NVMInfo:{MemoryModeCapacity:0 AppDirectModeCapacity:0 AvgPowerBudget:0} HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] MachineID:80bc4fba336e4ca1bc9d28a8be52a356 SystemUUID:c73531f8-e6a8-4b5d-ad6c-6fcb41671629 BootID:6c0be0cd-5862-4033-9087-93597edbc8cd Filesystems:[{Device:/run/user/1000 DeviceMajor:0 DeviceMinor:45 Capacity:3364990976 Type:vfs Inodes:821531 HasInodes:true} {Device:/var/lib/etcd DeviceMajor:0 DeviceMinor:44 Capacity:1073741824 Type:vfs Inodes:4107657 HasInodes:true} {Device:/dev/vda4 DeviceMajor:252 DeviceMinor:4 Capacity:85292941312 Type:vfs Inodes:41679680 HasInodes:true} {Device:composefs_0-33 DeviceMajor:0 DeviceMinor:33 Capacity:6545408 Type:vfs Inodes:18446744073709551615 HasInodes:true} {Device:/dev/shm DeviceMajor:0 DeviceMinor:22 Capacity:16824963072 Type:vfs Inodes:4107657 HasInodes:true} {Device:/run DeviceMajor:0 DeviceMinor:24 Capacity:6729986048 Type:vfs Inodes:819200 HasInodes:true} {Device:/tmp DeviceMajor:0 DeviceMinor:31 Capacity:16824963072 Type:vfs Inodes:1048576 HasInodes:true} {Device:/dev/vda3 
DeviceMajor:252 DeviceMinor:3 Capacity:366869504 Type:vfs Inodes:98304 HasInodes:true}] DiskMap:map[252:0:{Name:vda Major:252 Minor:0 Size:214748364800 Scheduler:none}] NetworkDevices:[{Name:br-ex MacAddress:fa:16:3e:fe:06:92 Speed:0 Mtu:1500} {Name:br-int MacAddress:b2:a9:9f:57:07:84 Speed:0 Mtu:1400} {Name:ens3 MacAddress:fa:16:3e:fe:06:92 Speed:-1 Mtu:1500} {Name:ens7 MacAddress:fa:16:3e:4c:a2:22 Speed:-1 Mtu:1500} {Name:ens7.20 MacAddress:52:54:00:d3:c0:9e Speed:-1 Mtu:1496} {Name:ens7.21 MacAddress:52:54:00:2a:ce:9c Speed:-1 Mtu:1496} {Name:ens7.22 MacAddress:52:54:00:1d:78:5d Speed:-1 Mtu:1496} {Name:eth10 MacAddress:2a:0b:d8:c5:e6:d7 Speed:0 Mtu:1500} {Name:ovn-k8s-mp0 MacAddress:0a:58:0a:d9:00:02 Speed:0 Mtu:1400} {Name:ovs-system MacAddress:da:26:42:b5:83:39 Speed:0 Mtu:1500} {Name:tap0 MacAddress:5a:94:ef:e4:0c:ee Speed:10 Mtu:1500}] Topology:[{Id:0 Memory:33649926144 HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] Cores:[{Id:0 Threads:[0] Caches:[{Id:0 Size:32768 Type:Data Level:1} {Id:0 Size:32768 Type:Instruction Level:1} {Id:0 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:0 Size:16777216 Type:Unified Level:3}] SocketID:0 BookID: DrawerID:} {Id:0 Threads:[1] Caches:[{Id:1 Size:32768 Type:Data Level:1} {Id:1 Size:32768 Type:Instruction Level:1} {Id:1 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:1 Size:16777216 Type:Unified Level:3}] SocketID:1 BookID: DrawerID:} {Id:0 Threads:[10] Caches:[{Id:10 Size:32768 Type:Data Level:1} {Id:10 Size:32768 Type:Instruction Level:1} {Id:10 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:10 Size:16777216 Type:Unified Level:3}] SocketID:10 BookID: DrawerID:} {Id:0 Threads:[11] Caches:[{Id:11 Size:32768 Type:Data Level:1} {Id:11 Size:32768 Type:Instruction Level:1} {Id:11 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:11 Size:16777216 Type:Unified Level:3}] SocketID:11 BookID: DrawerID:} {Id:0 Threads:[2] Caches:[{Id:2 Size:32768 Type:Data Level:1} {Id:2 Size:32768 
Type:Instruction Level:1} {Id:2 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:2 Size:16777216 Type:Unified Level:3}] SocketID:2 BookID: DrawerID:} {Id:0 Threads:[3] Caches:[{Id:3 Size:32768 Type:Data Level:1} {Id:3 Size:32768 Type:Instruction Level:1} {Id:3 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:3 Size:16777216 Type:Unified Level:3}] SocketID:3 BookID: DrawerID:} {Id:0 Threads:[4] Caches:[{Id:4 Size:32768 Type:Data Level:1} {Id:4 Size:32768 Type:Instruction Level:1} {Id:4 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:4 Size:16777216 Type:Unified Level:3}] SocketID:4 BookID: DrawerID:} {Id:0 Threads:[5] Caches:[{Id:5 Size:32768 Type:Data Level:1} {Id:5 Size:32768 Type:Instruction Level:1} {Id:5 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:5 Size:16777216 Type:Unified Level:3}] SocketID:5 BookID: DrawerID:} {Id:0 Threads:[6] Caches:[{Id:6 Size:32768 Type:Data Level:1} {Id:6 Size:32768 Type:Instruction Level:1} {Id:6 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:6 Size:16777216 Type:Unified Level:3}] SocketID:6 BookID: DrawerID:} {Id:0 Threads:[7] Caches:[{Id:7 Size:32768 Type:Data Level:1} {Id:7 Size:32768 Type:Instruction Level:1} {Id:7 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:7 Size:16777216 Type:Unified Level:3}] SocketID:7 BookID: DrawerID:} {Id:0 Threads:[8] Caches:[{Id:8 Size:32768 Type:Data Level:1} {Id:8 Size:32768 Type:Instruction Level:1} {Id:8 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:8 Size:16777216 Type:Unified Level:3}] SocketID:8 BookID: DrawerID:} {Id:0 Threads:[9] Caches:[{Id:9 Size:32768 Type:Data Level:1} {Id:9 Size:32768 Type:Instruction Level:1} {Id:9 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:9 Size:16777216 Type:Unified Level:3}] SocketID:9 BookID: DrawerID:}] Caches:[] Distances:[10]}] CloudProvider:Unknown InstanceType:Unknown InstanceID:None} Dec 08 17:42:10 crc kubenswrapper[5116]: I1208 17:42:10.533580 5116 manager_no_libpfm.go:29] cAdvisor is build without cgo 
and/or libpfm support. Perf event counters are not available. Dec 08 17:42:10 crc kubenswrapper[5116]: I1208 17:42:10.533782 5116 manager.go:233] Version: {KernelVersion:5.14.0-570.57.1.el9_6.x86_64 ContainerOsVersion:Red Hat Enterprise Linux CoreOS 9.6.20251021-0 (Plow) DockerVersion: DockerAPIVersion: CadvisorVersion: CadvisorRevision:} Dec 08 17:42:10 crc kubenswrapper[5116]: I1208 17:42:10.534767 5116 container_manager_linux.go:270] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Dec 08 17:42:10 crc kubenswrapper[5116]: I1208 17:42:10.534815 5116 container_manager_linux.go:275] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"crc","RuntimeCgroupsName":"/system.slice/crio.service","SystemCgroupsName":"/system.slice","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":true,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":{"cpu":"200m","ephemeral-storage":"350Mi","memory":"350Mi"},"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUM
anagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":4096,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Dec 08 17:42:10 crc kubenswrapper[5116]: I1208 17:42:10.535002 5116 topology_manager.go:138] "Creating topology manager with none policy" Dec 08 17:42:10 crc kubenswrapper[5116]: I1208 17:42:10.535013 5116 container_manager_linux.go:306] "Creating device plugin manager" Dec 08 17:42:10 crc kubenswrapper[5116]: I1208 17:42:10.535034 5116 manager.go:141] "Creating Device Plugin manager" path="/var/lib/kubelet/device-plugins/kubelet.sock" Dec 08 17:42:10 crc kubenswrapper[5116]: I1208 17:42:10.535411 5116 server.go:72] "Creating device plugin registration server" version="v1beta1" socket="/var/lib/kubelet/device-plugins/kubelet.sock" Dec 08 17:42:10 crc kubenswrapper[5116]: I1208 17:42:10.535819 5116 state_mem.go:36] "Initialized new in-memory state store" Dec 08 17:42:10 crc kubenswrapper[5116]: I1208 17:42:10.536332 5116 server.go:1267] "Using root directory" path="/var/lib/kubelet" Dec 08 17:42:10 crc kubenswrapper[5116]: I1208 17:42:10.537077 5116 kubelet.go:491] "Attempting to sync node with API server" Dec 08 17:42:10 crc kubenswrapper[5116]: I1208 17:42:10.537101 5116 kubelet.go:386] "Adding static pod path" path="/etc/kubernetes/manifests" Dec 08 17:42:10 crc kubenswrapper[5116]: I1208 17:42:10.537117 5116 file.go:69] "Watching path" path="/etc/kubernetes/manifests" Dec 08 17:42:10 crc kubenswrapper[5116]: I1208 17:42:10.537131 5116 kubelet.go:397] "Adding apiserver pod source" Dec 08 17:42:10 crc kubenswrapper[5116]: I1208 17:42:10.537146 5116 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Dec 08 17:42:10 crc kubenswrapper[5116]: I1208 17:42:10.539202 5116 state_checkpoint.go:81] "State checkpoint: restored pod resource state from checkpoint" Dec 08 17:42:10 crc 
kubenswrapper[5116]: I1208 17:42:10.539228 5116 state_mem.go:40] "Initialized new in-memory state store for pod resource information tracking" Dec 08 17:42:10 crc kubenswrapper[5116]: E1208 17:42:10.539703 5116 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.128:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Dec 08 17:42:10 crc kubenswrapper[5116]: E1208 17:42:10.539713 5116 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.128:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Dec 08 17:42:10 crc kubenswrapper[5116]: I1208 17:42:10.540691 5116 state_checkpoint.go:81] "State checkpoint: restored pod resource state from checkpoint" Dec 08 17:42:10 crc kubenswrapper[5116]: I1208 17:42:10.540717 5116 state_mem.go:40] "Initialized new in-memory state store for pod resource information tracking" Dec 08 17:42:10 crc kubenswrapper[5116]: I1208 17:42:10.542618 5116 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="cri-o" version="1.33.5-3.rhaos4.20.gitd0ea985.el9" apiVersion="v1" Dec 08 17:42:10 crc kubenswrapper[5116]: I1208 17:42:10.542938 5116 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-server-current.pem" Dec 08 17:42:10 crc kubenswrapper[5116]: I1208 17:42:10.543433 5116 kubelet.go:953] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Dec 08 17:42:10 crc kubenswrapper[5116]: I1208 17:42:10.543869 5116 plugins.go:616] 
"Loaded volume plugin" pluginName="kubernetes.io/portworx-volume" Dec 08 17:42:10 crc kubenswrapper[5116]: I1208 17:42:10.543913 5116 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/empty-dir" Dec 08 17:42:10 crc kubenswrapper[5116]: I1208 17:42:10.543929 5116 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/git-repo" Dec 08 17:42:10 crc kubenswrapper[5116]: I1208 17:42:10.543936 5116 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/host-path" Dec 08 17:42:10 crc kubenswrapper[5116]: I1208 17:42:10.543943 5116 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/nfs" Dec 08 17:42:10 crc kubenswrapper[5116]: I1208 17:42:10.543950 5116 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/secret" Dec 08 17:42:10 crc kubenswrapper[5116]: I1208 17:42:10.543960 5116 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/iscsi" Dec 08 17:42:10 crc kubenswrapper[5116]: I1208 17:42:10.543967 5116 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/downward-api" Dec 08 17:42:10 crc kubenswrapper[5116]: I1208 17:42:10.543979 5116 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/fc" Dec 08 17:42:10 crc kubenswrapper[5116]: I1208 17:42:10.543993 5116 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/configmap" Dec 08 17:42:10 crc kubenswrapper[5116]: I1208 17:42:10.544003 5116 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/projected" Dec 08 17:42:10 crc kubenswrapper[5116]: I1208 17:42:10.544130 5116 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/local-volume" Dec 08 17:42:10 crc kubenswrapper[5116]: I1208 17:42:10.544384 5116 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/csi" Dec 08 17:42:10 crc kubenswrapper[5116]: I1208 17:42:10.544396 5116 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/image" Dec 08 17:42:10 crc kubenswrapper[5116]: I1208 17:42:10.545176 5116 csi_plugin.go:988] 
Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.128:6443: connect: connection refused Dec 08 17:42:10 crc kubenswrapper[5116]: I1208 17:42:10.561527 5116 watchdog_linux.go:99] "Systemd watchdog is not enabled" Dec 08 17:42:10 crc kubenswrapper[5116]: I1208 17:42:10.561648 5116 server.go:1295] "Started kubelet" Dec 08 17:42:10 crc kubenswrapper[5116]: I1208 17:42:10.562043 5116 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Dec 08 17:42:10 crc kubenswrapper[5116]: I1208 17:42:10.562233 5116 server_v1.go:47] "podresources" method="list" useActivePods=true Dec 08 17:42:10 crc kubenswrapper[5116]: I1208 17:42:10.562076 5116 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Dec 08 17:42:10 crc kubenswrapper[5116]: I1208 17:42:10.562931 5116 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Dec 08 17:42:10 crc systemd[1]: Started Kubernetes Kubelet. 
Dec 08 17:42:10 crc kubenswrapper[5116]: I1208 17:42:10.567319 5116 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Dec 08 17:42:10 crc kubenswrapper[5116]: I1208 17:42:10.567886 5116 certificate_manager.go:422] "Certificate rotation is enabled" logger="kubernetes.io/kubelet-serving" Dec 08 17:42:10 crc kubenswrapper[5116]: E1208 17:42:10.568551 5116 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 17:42:10 crc kubenswrapper[5116]: E1208 17:42:10.568582 5116 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.128:6443: connect: connection refused" interval="200ms" Dec 08 17:42:10 crc kubenswrapper[5116]: I1208 17:42:10.568727 5116 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Dec 08 17:42:10 crc kubenswrapper[5116]: E1208 17:42:10.568001 5116 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": dial tcp 38.102.83.128:6443: connect: connection refused" event="&Event{ObjectMeta:{crc.187f4e5abc0de53b default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 17:42:10.561574203 +0000 UTC m=+0.358697437,LastTimestamp:2025-12-08 17:42:10.561574203 +0000 UTC m=+0.358697437,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 17:42:10 crc kubenswrapper[5116]: E1208 17:42:10.569141 5116 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get 
\"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.128:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Dec 08 17:42:10 crc kubenswrapper[5116]: I1208 17:42:10.568741 5116 volume_manager.go:295] "The desired_state_of_world populator starts" Dec 08 17:42:10 crc kubenswrapper[5116]: I1208 17:42:10.569196 5116 volume_manager.go:297] "Starting Kubelet Volume Manager" Dec 08 17:42:10 crc kubenswrapper[5116]: I1208 17:42:10.572358 5116 factory.go:221] Registration of the containerd container factory failed: unable to create containerd client: containerd: cannot unix dial containerd api service: dial unix /run/containerd/containerd.sock: connect: no such file or directory Dec 08 17:42:10 crc kubenswrapper[5116]: I1208 17:42:10.572402 5116 factory.go:55] Registering systemd factory Dec 08 17:42:10 crc kubenswrapper[5116]: I1208 17:42:10.572417 5116 factory.go:223] Registration of the systemd container factory successfully Dec 08 17:42:10 crc kubenswrapper[5116]: I1208 17:42:10.573404 5116 factory.go:153] Registering CRI-O factory Dec 08 17:42:10 crc kubenswrapper[5116]: I1208 17:42:10.573445 5116 factory.go:223] Registration of the crio container factory successfully Dec 08 17:42:10 crc kubenswrapper[5116]: I1208 17:42:10.573477 5116 factory.go:103] Registering Raw factory Dec 08 17:42:10 crc kubenswrapper[5116]: I1208 17:42:10.573497 5116 manager.go:1196] Started watching for new ooms in manager Dec 08 17:42:10 crc kubenswrapper[5116]: I1208 17:42:10.575109 5116 manager.go:319] Starting recovery of all containers Dec 08 17:42:10 crc kubenswrapper[5116]: I1208 17:42:10.576922 5116 server.go:317] "Adding debug handlers to kubelet server" Dec 08 17:42:10 crc kubenswrapper[5116]: I1208 17:42:10.649008 5116 manager.go:324] Recovery completed Dec 08 17:42:10 crc kubenswrapper[5116]: I1208 17:42:10.658945 5116 
reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fc8db2c7-859d-47b3-a900-2bd0c0b2973b" volumeName="kubernetes.io/projected/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-kube-api-access-hckvg" seLinuxMountContext="" Dec 08 17:42:10 crc kubenswrapper[5116]: I1208 17:42:10.659026 5116 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="92dfbade-90b6-4169-8c07-72cff7f2c82b" volumeName="kubernetes.io/empty-dir/92dfbade-90b6-4169-8c07-72cff7f2c82b-tmp-dir" seLinuxMountContext="" Dec 08 17:42:10 crc kubenswrapper[5116]: I1208 17:42:10.659041 5116 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b4750666-1362-4001-abd0-6f89964cc621" volumeName="kubernetes.io/projected/b4750666-1362-4001-abd0-6f89964cc621-kube-api-access-twvbl" seLinuxMountContext="" Dec 08 17:42:10 crc kubenswrapper[5116]: I1208 17:42:10.659051 5116 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d19cb085-0c5b-4810-b654-ce7923221d90" volumeName="kubernetes.io/secret/d19cb085-0c5b-4810-b654-ce7923221d90-etcd-client" seLinuxMountContext="" Dec 08 17:42:10 crc kubenswrapper[5116]: I1208 17:42:10.659061 5116 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d7e8f42f-dc0e-424b-bb56-5ec849834888" volumeName="kubernetes.io/projected/d7e8f42f-dc0e-424b-bb56-5ec849834888-kube-api-access" seLinuxMountContext="" Dec 08 17:42:10 crc kubenswrapper[5116]: I1208 17:42:10.659073 5116 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e093be35-bb62-4843-b2e8-094545761610" volumeName="kubernetes.io/projected/e093be35-bb62-4843-b2e8-094545761610-kube-api-access-pddnv" seLinuxMountContext="" Dec 08 17:42:10 crc kubenswrapper[5116]: I1208 17:42:10.659084 5116 reconstruct.go:130] "Volume is marked as uncertain and added into the actual 
state" pod="" podName="6edfcf45-925b-4eff-b940-95b6fc0b85d4" volumeName="kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-trusted-ca-bundle" seLinuxMountContext="" Dec 08 17:42:10 crc kubenswrapper[5116]: I1208 17:42:10.659093 5116 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9e9b5059-1b3e-4067-a63d-2952cbe863af" volumeName="kubernetes.io/configmap/9e9b5059-1b3e-4067-a63d-2952cbe863af-registry-certificates" seLinuxMountContext="" Dec 08 17:42:10 crc kubenswrapper[5116]: I1208 17:42:10.659106 5116 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01080b46-74f1-4191-8755-5152a57b3b25" volumeName="kubernetes.io/secret/01080b46-74f1-4191-8755-5152a57b3b25-serving-cert" seLinuxMountContext="" Dec 08 17:42:10 crc kubenswrapper[5116]: I1208 17:42:10.659118 5116 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09cfa50b-4138-4585-a53e-64dd3ab73335" volumeName="kubernetes.io/projected/09cfa50b-4138-4585-a53e-64dd3ab73335-kube-api-access-zsb9b" seLinuxMountContext="" Dec 08 17:42:10 crc kubenswrapper[5116]: I1208 17:42:10.659127 5116 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="428b39f5-eb1c-4f65-b7a4-eeb6e84860cc" volumeName="kubernetes.io/projected/428b39f5-eb1c-4f65-b7a4-eeb6e84860cc-kube-api-access-dsgwk" seLinuxMountContext="" Dec 08 17:42:10 crc kubenswrapper[5116]: I1208 17:42:10.659138 5116 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6edfcf45-925b-4eff-b940-95b6fc0b85d4" volumeName="kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-router-certs" seLinuxMountContext="" Dec 08 17:42:10 crc kubenswrapper[5116]: I1208 17:42:10.659147 5116 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="81e39f7b-62e4-4fc9-992a-6535ce127a02" volumeName="kubernetes.io/projected/81e39f7b-62e4-4fc9-992a-6535ce127a02-kube-api-access-pllx6" seLinuxMountContext="" Dec 08 17:42:10 crc kubenswrapper[5116]: I1208 17:42:10.659157 5116 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d565531a-ff86-4608-9d19-767de01ac31b" volumeName="kubernetes.io/projected/d565531a-ff86-4608-9d19-767de01ac31b-kube-api-access-99zj9" seLinuxMountContext="" Dec 08 17:42:10 crc kubenswrapper[5116]: I1208 17:42:10.659168 5116 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e1d2a42d-af1d-4054-9618-ab545e0ed8b7" volumeName="kubernetes.io/configmap/e1d2a42d-af1d-4054-9618-ab545e0ed8b7-mcd-auth-proxy-config" seLinuxMountContext="" Dec 08 17:42:10 crc kubenswrapper[5116]: I1208 17:42:10.659181 5116 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca" volumeName="kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-trusted-ca-bundle" seLinuxMountContext="" Dec 08 17:42:10 crc kubenswrapper[5116]: I1208 17:42:10.659190 5116 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6edfcf45-925b-4eff-b940-95b6fc0b85d4" volumeName="kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-service-ca" seLinuxMountContext="" Dec 08 17:42:10 crc kubenswrapper[5116]: I1208 17:42:10.659200 5116 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a" volumeName="kubernetes.io/projected/71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a-kube-api-access-ks6v2" seLinuxMountContext="" Dec 08 17:42:10 crc kubenswrapper[5116]: I1208 17:42:10.659211 5116 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="b4750666-1362-4001-abd0-6f89964cc621" volumeName="kubernetes.io/configmap/b4750666-1362-4001-abd0-6f89964cc621-mcc-auth-proxy-config" seLinuxMountContext="" Dec 08 17:42:10 crc kubenswrapper[5116]: I1208 17:42:10.659221 5116 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b605f283-6f2e-42da-a838-54421690f7d0" volumeName="kubernetes.io/empty-dir/b605f283-6f2e-42da-a838-54421690f7d0-catalog-content" seLinuxMountContext="" Dec 08 17:42:10 crc kubenswrapper[5116]: I1208 17:42:10.659230 5116 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6edfcf45-925b-4eff-b940-95b6fc0b85d4" volumeName="kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-serving-cert" seLinuxMountContext="" Dec 08 17:42:10 crc kubenswrapper[5116]: I1208 17:42:10.659264 5116 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7afa918d-be67-40a6-803c-d3b0ae99d815" volumeName="kubernetes.io/projected/7afa918d-be67-40a6-803c-d3b0ae99d815-kube-api-access" seLinuxMountContext="" Dec 08 17:42:10 crc kubenswrapper[5116]: I1208 17:42:10.659275 5116 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7df94c10-441d-4386-93a6-6730fb7bcde0" volumeName="kubernetes.io/secret/7df94c10-441d-4386-93a6-6730fb7bcde0-ovn-control-plane-metrics-cert" seLinuxMountContext="" Dec 08 17:42:10 crc kubenswrapper[5116]: I1208 17:42:10.659292 5116 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f863fff9-286a-45fa-b8f0-8a86994b8440" volumeName="kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75" seLinuxMountContext="" Dec 08 17:42:10 crc kubenswrapper[5116]: I1208 17:42:10.659312 5116 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="18f80adb-c1c3-49ba-8ee4-932c851d3897" 
volumeName="kubernetes.io/projected/18f80adb-c1c3-49ba-8ee4-932c851d3897-kube-api-access-wbmqg" seLinuxMountContext="" Dec 08 17:42:10 crc kubenswrapper[5116]: I1208 17:42:10.659328 5116 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20ce4d18-fe25-4696-ad7c-1bd2d6200a3e" volumeName="kubernetes.io/projected/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-bound-sa-token" seLinuxMountContext="" Dec 08 17:42:10 crc kubenswrapper[5116]: I1208 17:42:10.659343 5116 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7df94c10-441d-4386-93a6-6730fb7bcde0" volumeName="kubernetes.io/configmap/7df94c10-441d-4386-93a6-6730fb7bcde0-ovnkube-config" seLinuxMountContext="" Dec 08 17:42:10 crc kubenswrapper[5116]: I1208 17:42:10.659358 5116 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c491984c-7d4b-44aa-8c1e-d7974424fa47" volumeName="kubernetes.io/projected/c491984c-7d4b-44aa-8c1e-d7974424fa47-kube-api-access-9vsz9" seLinuxMountContext="" Dec 08 17:42:10 crc kubenswrapper[5116]: I1208 17:42:10.659379 5116 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="94a6e063-3d1a-4d44-875d-185291448c31" volumeName="kubernetes.io/empty-dir/94a6e063-3d1a-4d44-875d-185291448c31-catalog-content" seLinuxMountContext="" Dec 08 17:42:10 crc kubenswrapper[5116]: I1208 17:42:10.659389 5116 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f559dfa3-3917-43a2-97f6-61ddfda10e93" volumeName="kubernetes.io/secret/f559dfa3-3917-43a2-97f6-61ddfda10e93-encryption-config" seLinuxMountContext="" Dec 08 17:42:10 crc kubenswrapper[5116]: I1208 17:42:10.659399 5116 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="301e1965-1754-483d-b6cc-bfae7038bbca" 
volumeName="kubernetes.io/projected/301e1965-1754-483d-b6cc-bfae7038bbca-kube-api-access-7jjkz" seLinuxMountContext="" Dec 08 17:42:10 crc kubenswrapper[5116]: I1208 17:42:10.659412 5116 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7afa918d-be67-40a6-803c-d3b0ae99d815" volumeName="kubernetes.io/configmap/7afa918d-be67-40a6-803c-d3b0ae99d815-config" seLinuxMountContext="" Dec 08 17:42:10 crc kubenswrapper[5116]: I1208 17:42:10.659421 5116 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a7a88189-c967-4640-879e-27665747f20c" volumeName="kubernetes.io/secret/a7a88189-c967-4640-879e-27665747f20c-webhook-cert" seLinuxMountContext="" Dec 08 17:42:10 crc kubenswrapper[5116]: I1208 17:42:10.659432 5116 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ce090a97-9ab6-4c40-a719-64ff2acd9778" volumeName="kubernetes.io/configmap/ce090a97-9ab6-4c40-a719-64ff2acd9778-signing-cabundle" seLinuxMountContext="" Dec 08 17:42:10 crc kubenswrapper[5116]: I1208 17:42:10.659443 5116 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f559dfa3-3917-43a2-97f6-61ddfda10e93" volumeName="kubernetes.io/configmap/f559dfa3-3917-43a2-97f6-61ddfda10e93-audit-policies" seLinuxMountContext="" Dec 08 17:42:10 crc kubenswrapper[5116]: I1208 17:42:10.659452 5116 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="149b3c48-e17c-4a66-a835-d86dabf6ff13" volumeName="kubernetes.io/empty-dir/149b3c48-e17c-4a66-a835-d86dabf6ff13-catalog-content" seLinuxMountContext="" Dec 08 17:42:10 crc kubenswrapper[5116]: I1208 17:42:10.659462 5116 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca" volumeName="kubernetes.io/secret/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-console-oauth-config" 
seLinuxMountContext="" Dec 08 17:42:10 crc kubenswrapper[5116]: I1208 17:42:10.659473 5116 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6edfcf45-925b-4eff-b940-95b6fc0b85d4" volumeName="kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-session" seLinuxMountContext="" Dec 08 17:42:10 crc kubenswrapper[5116]: I1208 17:42:10.659481 5116 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="567683bd-0efc-4f21-b076-e28559628404" volumeName="kubernetes.io/secret/567683bd-0efc-4f21-b076-e28559628404-etcd-client" seLinuxMountContext="" Dec 08 17:42:10 crc kubenswrapper[5116]: I1208 17:42:10.659491 5116 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20c5c5b4bed930554494851fe3cb2b2a" volumeName="kubernetes.io/empty-dir/20c5c5b4bed930554494851fe3cb2b2a-tmp-dir" seLinuxMountContext="" Dec 08 17:42:10 crc kubenswrapper[5116]: I1208 17:42:10.659500 5116 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6edfcf45-925b-4eff-b940-95b6fc0b85d4" volumeName="kubernetes.io/projected/6edfcf45-925b-4eff-b940-95b6fc0b85d4-kube-api-access-8nb9c" seLinuxMountContext="" Dec 08 17:42:10 crc kubenswrapper[5116]: I1208 17:42:10.659512 5116 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a" volumeName="kubernetes.io/empty-dir/71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a-utilities" seLinuxMountContext="" Dec 08 17:42:10 crc kubenswrapper[5116]: I1208 17:42:10.659524 5116 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7fcc6409-8a0f-44c3-89e7-5aecd7610f8a" volumeName="kubernetes.io/projected/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-kube-api-access-ddlk9" seLinuxMountContext="" Dec 08 17:42:10 crc kubenswrapper[5116]: I1208 17:42:10.659533 5116 
reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9e9b5059-1b3e-4067-a63d-2952cbe863af" volumeName="kubernetes.io/projected/9e9b5059-1b3e-4067-a63d-2952cbe863af-registry-tls" seLinuxMountContext="" Dec 08 17:42:10 crc kubenswrapper[5116]: I1208 17:42:10.659542 5116 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f559dfa3-3917-43a2-97f6-61ddfda10e93" volumeName="kubernetes.io/projected/f559dfa3-3917-43a2-97f6-61ddfda10e93-kube-api-access-hm9x7" seLinuxMountContext="" Dec 08 17:42:10 crc kubenswrapper[5116]: I1208 17:42:10.659550 5116 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fc8db2c7-859d-47b3-a900-2bd0c0b2973b" volumeName="kubernetes.io/configmap/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-auth-proxy-config" seLinuxMountContext="" Dec 08 17:42:10 crc kubenswrapper[5116]: I1208 17:42:10.659561 5116 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31fa8943-81cc-4750-a0b7-0fa9ab5af883" volumeName="kubernetes.io/empty-dir/31fa8943-81cc-4750-a0b7-0fa9ab5af883-utilities" seLinuxMountContext="" Dec 08 17:42:10 crc kubenswrapper[5116]: I1208 17:42:10.659570 5116 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="736c54fe-349c-4bb9-870a-d1c1d1c03831" volumeName="kubernetes.io/projected/736c54fe-349c-4bb9-870a-d1c1d1c03831-kube-api-access-6dmhf" seLinuxMountContext="" Dec 08 17:42:10 crc kubenswrapper[5116]: I1208 17:42:10.659686 5116 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141" volumeName="kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf" seLinuxMountContext="" Dec 08 17:42:10 crc kubenswrapper[5116]: I1208 17:42:10.659738 5116 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" 
pod="" podName="af33e427-6803-48c2-a76a-dd9deb7cbf9a" volumeName="kubernetes.io/secret/af33e427-6803-48c2-a76a-dd9deb7cbf9a-ovn-node-metrics-cert" seLinuxMountContext="" Dec 08 17:42:10 crc kubenswrapper[5116]: I1208 17:42:10.659759 5116 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b4750666-1362-4001-abd0-6f89964cc621" volumeName="kubernetes.io/secret/b4750666-1362-4001-abd0-6f89964cc621-proxy-tls" seLinuxMountContext="" Dec 08 17:42:10 crc kubenswrapper[5116]: I1208 17:42:10.659768 5116 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="cc85e424-18b2-4924-920b-bd291a8c4b01" volumeName="kubernetes.io/empty-dir/cc85e424-18b2-4924-920b-bd291a8c4b01-utilities" seLinuxMountContext="" Dec 08 17:42:10 crc kubenswrapper[5116]: I1208 17:42:10.659777 5116 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6edfcf45-925b-4eff-b940-95b6fc0b85d4" volumeName="kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-audit-policies" seLinuxMountContext="" Dec 08 17:42:10 crc kubenswrapper[5116]: I1208 17:42:10.659787 5116 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7599e0b6-bddf-4def-b7f2-0b32206e8651" volumeName="kubernetes.io/configmap/7599e0b6-bddf-4def-b7f2-0b32206e8651-config" seLinuxMountContext="" Dec 08 17:42:10 crc kubenswrapper[5116]: I1208 17:42:10.659797 5116 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="92dfbade-90b6-4169-8c07-72cff7f2c82b" volumeName="kubernetes.io/configmap/92dfbade-90b6-4169-8c07-72cff7f2c82b-config-volume" seLinuxMountContext="" Dec 08 17:42:10 crc kubenswrapper[5116]: I1208 17:42:10.659807 5116 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9" 
volumeName="kubernetes.io/configmap/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-config" seLinuxMountContext="" Dec 08 17:42:10 crc kubenswrapper[5116]: I1208 17:42:10.659824 5116 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9" volumeName="kubernetes.io/configmap/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-trusted-ca-bundle" seLinuxMountContext="" Dec 08 17:42:10 crc kubenswrapper[5116]: I1208 17:42:10.659834 5116 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0dd0fbac-8c0d-4228-8faa-abbeedabf7db" volumeName="kubernetes.io/projected/0dd0fbac-8c0d-4228-8faa-abbeedabf7db-kube-api-access-q4smf" seLinuxMountContext="" Dec 08 17:42:10 crc kubenswrapper[5116]: I1208 17:42:10.659844 5116 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="593a3561-7760-45c5-8f91-5aaef7475d0f" volumeName="kubernetes.io/secret/593a3561-7760-45c5-8f91-5aaef7475d0f-node-bootstrap-token" seLinuxMountContext="" Dec 08 17:42:10 crc kubenswrapper[5116]: I1208 17:42:10.659854 5116 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="736c54fe-349c-4bb9-870a-d1c1d1c03831" volumeName="kubernetes.io/empty-dir/736c54fe-349c-4bb9-870a-d1c1d1c03831-tmp" seLinuxMountContext="" Dec 08 17:42:10 crc kubenswrapper[5116]: I1208 17:42:10.659866 5116 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9f0bc7fcb0822a2c13eb2d22cd8c0641" volumeName="kubernetes.io/empty-dir/9f0bc7fcb0822a2c13eb2d22cd8c0641-var-run-kubernetes" seLinuxMountContext="" Dec 08 17:42:10 crc kubenswrapper[5116]: I1208 17:42:10.659880 5116 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a" volumeName="kubernetes.io/empty-dir/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-tmp" seLinuxMountContext="" 
Dec 08 17:42:10 crc kubenswrapper[5116]: I1208 17:42:10.659891 5116 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a" volumeName="kubernetes.io/projected/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-kube-api-access-qqbfk" seLinuxMountContext="" Dec 08 17:42:10 crc kubenswrapper[5116]: I1208 17:42:10.659905 5116 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fc4541ce-7789-4670-bc75-5c2868e52ce0" volumeName="kubernetes.io/projected/fc4541ce-7789-4670-bc75-5c2868e52ce0-kube-api-access-8nt2j" seLinuxMountContext="" Dec 08 17:42:10 crc kubenswrapper[5116]: I1208 17:42:10.659917 5116 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca" volumeName="kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-console-config" seLinuxMountContext="" Dec 08 17:42:10 crc kubenswrapper[5116]: I1208 17:42:10.659929 5116 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="81e39f7b-62e4-4fc9-992a-6535ce127a02" volumeName="kubernetes.io/configmap/81e39f7b-62e4-4fc9-992a-6535ce127a02-cni-binary-copy" seLinuxMountContext="" Dec 08 17:42:10 crc kubenswrapper[5116]: I1208 17:42:10.659939 5116 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="301e1965-1754-483d-b6cc-bfae7038bbca" volumeName="kubernetes.io/secret/301e1965-1754-483d-b6cc-bfae7038bbca-srv-cert" seLinuxMountContext="" Dec 08 17:42:10 crc kubenswrapper[5116]: I1208 17:42:10.659948 5116 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31fa8943-81cc-4750-a0b7-0fa9ab5af883" volumeName="kubernetes.io/empty-dir/31fa8943-81cc-4750-a0b7-0fa9ab5af883-catalog-content" seLinuxMountContext="" Dec 08 17:42:10 crc kubenswrapper[5116]: I1208 17:42:10.659958 5116 
reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6077b63e-53a2-4f96-9d56-1ce0324e4913" volumeName="kubernetes.io/empty-dir/6077b63e-53a2-4f96-9d56-1ce0324e4913-tmp-dir" seLinuxMountContext="" Dec 08 17:42:10 crc kubenswrapper[5116]: I1208 17:42:10.659982 5116 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a" volumeName="kubernetes.io/empty-dir/71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a-catalog-content" seLinuxMountContext="" Dec 08 17:42:10 crc kubenswrapper[5116]: I1208 17:42:10.660015 5116 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ce090a97-9ab6-4c40-a719-64ff2acd9778" volumeName="kubernetes.io/secret/ce090a97-9ab6-4c40-a719-64ff2acd9778-signing-key" seLinuxMountContext="" Dec 08 17:42:10 crc kubenswrapper[5116]: I1208 17:42:10.660032 5116 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="736c54fe-349c-4bb9-870a-d1c1d1c03831" volumeName="kubernetes.io/configmap/736c54fe-349c-4bb9-870a-d1c1d1c03831-client-ca" seLinuxMountContext="" Dec 08 17:42:10 crc kubenswrapper[5116]: I1208 17:42:10.660059 5116 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff" volumeName="kubernetes.io/empty-dir/9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff-catalog-content" seLinuxMountContext="" Dec 08 17:42:10 crc kubenswrapper[5116]: I1208 17:42:10.660087 5116 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4" volumeName="kubernetes.io/projected/f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4-kube-api-access-pgx6b" seLinuxMountContext="" Dec 08 17:42:10 crc kubenswrapper[5116]: I1208 17:42:10.660102 5116 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="16bdd140-dce1-464c-ab47-dd5798d1d256" volumeName="kubernetes.io/projected/16bdd140-dce1-464c-ab47-dd5798d1d256-kube-api-access-94l9h" seLinuxMountContext="" Dec 08 17:42:10 crc kubenswrapper[5116]: I1208 17:42:10.660126 5116 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7599e0b6-bddf-4def-b7f2-0b32206e8651" volumeName="kubernetes.io/projected/7599e0b6-bddf-4def-b7f2-0b32206e8651-kube-api-access-ptkcf" seLinuxMountContext="" Dec 08 17:42:10 crc kubenswrapper[5116]: I1208 17:42:10.660153 5116 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9e9b5059-1b3e-4067-a63d-2952cbe863af" volumeName="kubernetes.io/secret/9e9b5059-1b3e-4067-a63d-2952cbe863af-installation-pull-secrets" seLinuxMountContext="" Dec 08 17:42:10 crc kubenswrapper[5116]: I1208 17:42:10.660166 5116 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e1d2a42d-af1d-4054-9618-ab545e0ed8b7" volumeName="kubernetes.io/projected/e1d2a42d-af1d-4054-9618-ab545e0ed8b7-kube-api-access-9z4sw" seLinuxMountContext="" Dec 08 17:42:10 crc kubenswrapper[5116]: I1208 17:42:10.660177 5116 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9f71a554-e414-4bc3-96d2-674060397afe" volumeName="kubernetes.io/secret/9f71a554-e414-4bc3-96d2-674060397afe-metrics-tls" seLinuxMountContext="" Dec 08 17:42:10 crc kubenswrapper[5116]: I1208 17:42:10.660188 5116 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a555ff2e-0be6-46d5-897d-863bb92ae2b3" volumeName="kubernetes.io/configmap/a555ff2e-0be6-46d5-897d-863bb92ae2b3-client-ca" seLinuxMountContext="" Dec 08 17:42:10 crc kubenswrapper[5116]: I1208 17:42:10.660200 5116 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b605f283-6f2e-42da-a838-54421690f7d0" 
volumeName="kubernetes.io/empty-dir/b605f283-6f2e-42da-a838-54421690f7d0-utilities" seLinuxMountContext="" Dec 08 17:42:10 crc kubenswrapper[5116]: I1208 17:42:10.660212 5116 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0dd0fbac-8c0d-4228-8faa-abbeedabf7db" volumeName="kubernetes.io/secret/0dd0fbac-8c0d-4228-8faa-abbeedabf7db-webhook-certs" seLinuxMountContext="" Dec 08 17:42:10 crc kubenswrapper[5116]: I1208 17:42:10.660224 5116 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d19cb085-0c5b-4810-b654-ce7923221d90" volumeName="kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-audit" seLinuxMountContext="" Dec 08 17:42:10 crc kubenswrapper[5116]: I1208 17:42:10.660266 5116 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="149b3c48-e17c-4a66-a835-d86dabf6ff13" volumeName="kubernetes.io/empty-dir/149b3c48-e17c-4a66-a835-d86dabf6ff13-utilities" seLinuxMountContext="" Dec 08 17:42:10 crc kubenswrapper[5116]: I1208 17:42:10.660276 5116 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3a14caf222afb62aaabdc47808b6f944" volumeName="kubernetes.io/empty-dir/3a14caf222afb62aaabdc47808b6f944-tmp-dir" seLinuxMountContext="" Dec 08 17:42:10 crc kubenswrapper[5116]: I1208 17:42:10.660285 5116 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7afa918d-be67-40a6-803c-d3b0ae99d815" volumeName="kubernetes.io/empty-dir/7afa918d-be67-40a6-803c-d3b0ae99d815-tmp" seLinuxMountContext="" Dec 08 17:42:10 crc kubenswrapper[5116]: I1208 17:42:10.660293 5116 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d45be74c-0d98-4d18-90e4-f7ef1b6daaf7" volumeName="kubernetes.io/projected/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-kube-api-access-dztfv" seLinuxMountContext="" Dec 08 17:42:10 crc 
kubenswrapper[5116]: I1208 17:42:10.660302 5116 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d7e8f42f-dc0e-424b-bb56-5ec849834888" volumeName="kubernetes.io/secret/d7e8f42f-dc0e-424b-bb56-5ec849834888-serving-cert" seLinuxMountContext="" Dec 08 17:42:10 crc kubenswrapper[5116]: I1208 17:42:10.660312 5116 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4" volumeName="kubernetes.io/secret/f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4-metrics-certs" seLinuxMountContext="" Dec 08 17:42:10 crc kubenswrapper[5116]: I1208 17:42:10.660322 5116 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20ce4d18-fe25-4696-ad7c-1bd2d6200a3e" volumeName="kubernetes.io/configmap/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-trusted-ca" seLinuxMountContext="" Dec 08 17:42:10 crc kubenswrapper[5116]: I1208 17:42:10.660331 5116 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f7e2c886-118e-43bb-bef1-c78134de392b" volumeName="kubernetes.io/projected/f7e2c886-118e-43bb-bef1-c78134de392b-kube-api-access-6g4lr" seLinuxMountContext="" Dec 08 17:42:10 crc kubenswrapper[5116]: I1208 17:42:10.660342 5116 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c491984c-7d4b-44aa-8c1e-d7974424fa47" volumeName="kubernetes.io/configmap/c491984c-7d4b-44aa-8c1e-d7974424fa47-images" seLinuxMountContext="" Dec 08 17:42:10 crc kubenswrapper[5116]: I1208 17:42:10.660352 5116 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f65c0ac1-8bca-454d-a2e6-e35cb418beac" volumeName="kubernetes.io/projected/f65c0ac1-8bca-454d-a2e6-e35cb418beac-kube-api-access" seLinuxMountContext="" Dec 08 17:42:10 crc kubenswrapper[5116]: I1208 17:42:10.660361 5116 reconstruct.go:130] "Volume is marked as uncertain and 
added into the actual state" pod="" podName="fc8db2c7-859d-47b3-a900-2bd0c0b2973b" volumeName="kubernetes.io/secret/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-machine-approver-tls" seLinuxMountContext="" Dec 08 17:42:10 crc kubenswrapper[5116]: I1208 17:42:10.660372 5116 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d19cb085-0c5b-4810-b654-ce7923221d90" volumeName="kubernetes.io/secret/d19cb085-0c5b-4810-b654-ce7923221d90-serving-cert" seLinuxMountContext="" Dec 08 17:42:10 crc kubenswrapper[5116]: I1208 17:42:10.660381 5116 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d565531a-ff86-4608-9d19-767de01ac31b" volumeName="kubernetes.io/secret/d565531a-ff86-4608-9d19-767de01ac31b-proxy-tls" seLinuxMountContext="" Dec 08 17:42:10 crc kubenswrapper[5116]: I1208 17:42:10.660391 5116 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="567683bd-0efc-4f21-b076-e28559628404" volumeName="kubernetes.io/configmap/567683bd-0efc-4f21-b076-e28559628404-config" seLinuxMountContext="" Dec 08 17:42:10 crc kubenswrapper[5116]: I1208 17:42:10.660401 5116 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6edfcf45-925b-4eff-b940-95b6fc0b85d4" volumeName="kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-template-provider-selection" seLinuxMountContext="" Dec 08 17:42:10 crc kubenswrapper[5116]: I1208 17:42:10.660414 5116 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9f71a554-e414-4bc3-96d2-674060397afe" volumeName="kubernetes.io/configmap/9f71a554-e414-4bc3-96d2-674060397afe-trusted-ca" seLinuxMountContext="" Dec 08 17:42:10 crc kubenswrapper[5116]: I1208 17:42:10.660423 5116 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c491984c-7d4b-44aa-8c1e-d7974424fa47" 
volumeName="kubernetes.io/configmap/c491984c-7d4b-44aa-8c1e-d7974424fa47-config" seLinuxMountContext="" Dec 08 17:42:10 crc kubenswrapper[5116]: I1208 17:42:10.660432 5116 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f559dfa3-3917-43a2-97f6-61ddfda10e93" volumeName="kubernetes.io/configmap/f559dfa3-3917-43a2-97f6-61ddfda10e93-etcd-serving-ca" seLinuxMountContext="" Dec 08 17:42:10 crc kubenswrapper[5116]: I1208 17:42:10.660442 5116 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a555ff2e-0be6-46d5-897d-863bb92ae2b3" volumeName="kubernetes.io/empty-dir/a555ff2e-0be6-46d5-897d-863bb92ae2b3-tmp" seLinuxMountContext="" Dec 08 17:42:10 crc kubenswrapper[5116]: I1208 17:42:10.660452 5116 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a555ff2e-0be6-46d5-897d-863bb92ae2b3" volumeName="kubernetes.io/projected/a555ff2e-0be6-46d5-897d-863bb92ae2b3-kube-api-access-8pskd" seLinuxMountContext="" Dec 08 17:42:10 crc kubenswrapper[5116]: I1208 17:42:10.660462 5116 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="cc85e424-18b2-4924-920b-bd291a8c4b01" volumeName="kubernetes.io/projected/cc85e424-18b2-4924-920b-bd291a8c4b01-kube-api-access-xfp5s" seLinuxMountContext="" Dec 08 17:42:10 crc kubenswrapper[5116]: I1208 17:42:10.660471 5116 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9" volumeName="kubernetes.io/configmap/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-service-ca-bundle" seLinuxMountContext="" Dec 08 17:42:10 crc kubenswrapper[5116]: I1208 17:42:10.660481 5116 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="584e1f4a-8205-47d7-8efb-3afc6017c4c9" volumeName="kubernetes.io/empty-dir/584e1f4a-8205-47d7-8efb-3afc6017c4c9-utilities" 
seLinuxMountContext="" Dec 08 17:42:10 crc kubenswrapper[5116]: I1208 17:42:10.660490 5116 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d565531a-ff86-4608-9d19-767de01ac31b" volumeName="kubernetes.io/configmap/d565531a-ff86-4608-9d19-767de01ac31b-auth-proxy-config" seLinuxMountContext="" Dec 08 17:42:10 crc kubenswrapper[5116]: I1208 17:42:10.660507 5116 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="34177974-8d82-49d2-a763-391d0df3bbd8" volumeName="kubernetes.io/projected/34177974-8d82-49d2-a763-391d0df3bbd8-kube-api-access-m7xz2" seLinuxMountContext="" Dec 08 17:42:10 crc kubenswrapper[5116]: I1208 17:42:10.660518 5116 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0effdbcf-dd7d-404d-9d48-77536d665a5d" volumeName="kubernetes.io/projected/0effdbcf-dd7d-404d-9d48-77536d665a5d-kube-api-access-mfzkj" seLinuxMountContext="" Dec 08 17:42:10 crc kubenswrapper[5116]: I1208 17:42:10.660528 5116 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="593a3561-7760-45c5-8f91-5aaef7475d0f" volumeName="kubernetes.io/secret/593a3561-7760-45c5-8f91-5aaef7475d0f-certs" seLinuxMountContext="" Dec 08 17:42:10 crc kubenswrapper[5116]: I1208 17:42:10.660538 5116 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6077b63e-53a2-4f96-9d56-1ce0324e4913" volumeName="kubernetes.io/projected/6077b63e-53a2-4f96-9d56-1ce0324e4913-kube-api-access-zth6t" seLinuxMountContext="" Dec 08 17:42:10 crc kubenswrapper[5116]: I1208 17:42:10.660548 5116 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a7a88189-c967-4640-879e-27665747f20c" volumeName="kubernetes.io/empty-dir/a7a88189-c967-4640-879e-27665747f20c-tmpfs" seLinuxMountContext="" Dec 08 17:42:10 crc kubenswrapper[5116]: I1208 17:42:10.660570 
5116 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7df94c10-441d-4386-93a6-6730fb7bcde0" volumeName="kubernetes.io/configmap/7df94c10-441d-4386-93a6-6730fb7bcde0-env-overrides" seLinuxMountContext="" Dec 08 17:42:10 crc kubenswrapper[5116]: I1208 17:42:10.660579 5116 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a52afe44-fb37-46ed-a1f8-bf39727a3cbe" volumeName="kubernetes.io/secret/a52afe44-fb37-46ed-a1f8-bf39727a3cbe-cert" seLinuxMountContext="" Dec 08 17:42:10 crc kubenswrapper[5116]: I1208 17:42:10.660590 5116 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a555ff2e-0be6-46d5-897d-863bb92ae2b3" volumeName="kubernetes.io/configmap/a555ff2e-0be6-46d5-897d-863bb92ae2b3-config" seLinuxMountContext="" Dec 08 17:42:10 crc kubenswrapper[5116]: I1208 17:42:10.660601 5116 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="cc85e424-18b2-4924-920b-bd291a8c4b01" volumeName="kubernetes.io/empty-dir/cc85e424-18b2-4924-920b-bd291a8c4b01-catalog-content" seLinuxMountContext="" Dec 08 17:42:10 crc kubenswrapper[5116]: I1208 17:42:10.660610 5116 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d7e8f42f-dc0e-424b-bb56-5ec849834888" volumeName="kubernetes.io/configmap/d7e8f42f-dc0e-424b-bb56-5ec849834888-service-ca" seLinuxMountContext="" Dec 08 17:42:10 crc kubenswrapper[5116]: I1208 17:42:10.660621 5116 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6edfcf45-925b-4eff-b940-95b6fc0b85d4" volumeName="kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-cliconfig" seLinuxMountContext="" Dec 08 17:42:10 crc kubenswrapper[5116]: I1208 17:42:10.660631 5116 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="7fcc6409-8a0f-44c3-89e7-5aecd7610f8a" volumeName="kubernetes.io/empty-dir/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-tmpfs" seLinuxMountContext="" Dec 08 17:42:10 crc kubenswrapper[5116]: I1208 17:42:10.660640 5116 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="869851b9-7ffb-4af0-b166-1d8aa40a5f80" volumeName="kubernetes.io/configmap/869851b9-7ffb-4af0-b166-1d8aa40a5f80-cni-binary-copy" seLinuxMountContext="" Dec 08 17:42:10 crc kubenswrapper[5116]: I1208 17:42:10.660650 5116 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="92dfbade-90b6-4169-8c07-72cff7f2c82b" volumeName="kubernetes.io/projected/92dfbade-90b6-4169-8c07-72cff7f2c82b-kube-api-access-4g8ts" seLinuxMountContext="" Dec 08 17:42:10 crc kubenswrapper[5116]: I1208 17:42:10.660661 5116 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="42a11a02-47e1-488f-b270-2679d3298b0e" volumeName="kubernetes.io/projected/42a11a02-47e1-488f-b270-2679d3298b0e-kube-api-access-qgrkj" seLinuxMountContext="" Dec 08 17:42:10 crc kubenswrapper[5116]: I1208 17:42:10.660670 5116 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a" volumeName="kubernetes.io/configmap/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-marketplace-trusted-ca" seLinuxMountContext="" Dec 08 17:42:10 crc kubenswrapper[5116]: I1208 17:42:10.660681 5116 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141" volumeName="kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert" seLinuxMountContext="" Dec 08 17:42:10 crc kubenswrapper[5116]: I1208 17:42:10.660690 5116 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7599e0b6-bddf-4def-b7f2-0b32206e8651" 
volumeName="kubernetes.io/secret/7599e0b6-bddf-4def-b7f2-0b32206e8651-serving-cert" seLinuxMountContext="" Dec 08 17:42:10 crc kubenswrapper[5116]: I1208 17:42:10.660701 5116 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff" volumeName="kubernetes.io/projected/9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff-kube-api-access-xxfcv" seLinuxMountContext="" Dec 08 17:42:10 crc kubenswrapper[5116]: I1208 17:42:10.660711 5116 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a7a88189-c967-4640-879e-27665747f20c" volumeName="kubernetes.io/secret/a7a88189-c967-4640-879e-27665747f20c-apiservice-cert" seLinuxMountContext="" Dec 08 17:42:10 crc kubenswrapper[5116]: I1208 17:42:10.660719 5116 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f65c0ac1-8bca-454d-a2e6-e35cb418beac" volumeName="kubernetes.io/configmap/f65c0ac1-8bca-454d-a2e6-e35cb418beac-config" seLinuxMountContext="" Dec 08 17:42:10 crc kubenswrapper[5116]: I1208 17:42:10.660730 5116 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="92dfbade-90b6-4169-8c07-72cff7f2c82b" volumeName="kubernetes.io/secret/92dfbade-90b6-4169-8c07-72cff7f2c82b-metrics-tls" seLinuxMountContext="" Dec 08 17:42:10 crc kubenswrapper[5116]: I1208 17:42:10.660739 5116 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d19cb085-0c5b-4810-b654-ce7923221d90" volumeName="kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-trusted-ca-bundle" seLinuxMountContext="" Dec 08 17:42:10 crc kubenswrapper[5116]: I1208 17:42:10.660751 5116 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5ebfebf6-3ecd-458e-943f-bb25b52e2718" volumeName="kubernetes.io/projected/5ebfebf6-3ecd-458e-943f-bb25b52e2718-kube-api-access-l87hs" 
seLinuxMountContext="" Dec 08 17:42:10 crc kubenswrapper[5116]: I1208 17:42:10.660761 5116 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20ce4d18-fe25-4696-ad7c-1bd2d6200a3e" volumeName="kubernetes.io/empty-dir/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-ca-trust-extracted-pem" seLinuxMountContext="" Dec 08 17:42:10 crc kubenswrapper[5116]: I1208 17:42:10.660770 5116 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a208c9c2-333b-4b4a-be0d-bc32ec38a821" volumeName="kubernetes.io/secret/a208c9c2-333b-4b4a-be0d-bc32ec38a821-package-server-manager-serving-cert" seLinuxMountContext="" Dec 08 17:42:10 crc kubenswrapper[5116]: I1208 17:42:10.660781 5116 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fc4541ce-7789-4670-bc75-5c2868e52ce0" volumeName="kubernetes.io/configmap/fc4541ce-7789-4670-bc75-5c2868e52ce0-ovnkube-identity-cm" seLinuxMountContext="" Dec 08 17:42:10 crc kubenswrapper[5116]: I1208 17:42:10.660791 5116 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="16bdd140-dce1-464c-ab47-dd5798d1d256" volumeName="kubernetes.io/secret/16bdd140-dce1-464c-ab47-dd5798d1d256-serving-cert" seLinuxMountContext="" Dec 08 17:42:10 crc kubenswrapper[5116]: I1208 17:42:10.660800 5116 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7afa918d-be67-40a6-803c-d3b0ae99d815" volumeName="kubernetes.io/secret/7afa918d-be67-40a6-803c-d3b0ae99d815-serving-cert" seLinuxMountContext="" Dec 08 17:42:10 crc kubenswrapper[5116]: I1208 17:42:10.660810 5116 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="af33e427-6803-48c2-a76a-dd9deb7cbf9a" volumeName="kubernetes.io/configmap/af33e427-6803-48c2-a76a-dd9deb7cbf9a-ovnkube-config" seLinuxMountContext="" Dec 08 17:42:10 crc kubenswrapper[5116]: I1208 
17:42:10.660820 5116 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09cfa50b-4138-4585-a53e-64dd3ab73335" volumeName="kubernetes.io/secret/09cfa50b-4138-4585-a53e-64dd3ab73335-serving-cert" seLinuxMountContext="" Dec 08 17:42:10 crc kubenswrapper[5116]: I1208 17:42:10.660829 5116 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="567683bd-0efc-4f21-b076-e28559628404" volumeName="kubernetes.io/empty-dir/567683bd-0efc-4f21-b076-e28559628404-tmp-dir" seLinuxMountContext="" Dec 08 17:42:10 crc kubenswrapper[5116]: I1208 17:42:10.660840 5116 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5ebfebf6-3ecd-458e-943f-bb25b52e2718" volumeName="kubernetes.io/configmap/5ebfebf6-3ecd-458e-943f-bb25b52e2718-serviceca" seLinuxMountContext="" Dec 08 17:42:10 crc kubenswrapper[5116]: I1208 17:42:10.660850 5116 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9e9b5059-1b3e-4067-a63d-2952cbe863af" volumeName="kubernetes.io/projected/9e9b5059-1b3e-4067-a63d-2952cbe863af-kube-api-access-ws8zz" seLinuxMountContext="" Dec 08 17:42:10 crc kubenswrapper[5116]: I1208 17:42:10.660861 5116 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9f0bc7fcb0822a2c13eb2d22cd8c0641" volumeName="kubernetes.io/empty-dir/9f0bc7fcb0822a2c13eb2d22cd8c0641-ca-trust-dir" seLinuxMountContext="" Dec 08 17:42:10 crc kubenswrapper[5116]: I1208 17:42:10.660870 5116 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f65c0ac1-8bca-454d-a2e6-e35cb418beac" volumeName="kubernetes.io/secret/f65c0ac1-8bca-454d-a2e6-e35cb418beac-serving-cert" seLinuxMountContext="" Dec 08 17:42:10 crc kubenswrapper[5116]: I1208 17:42:10.660879 5116 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="fc4541ce-7789-4670-bc75-5c2868e52ce0" volumeName="kubernetes.io/configmap/fc4541ce-7789-4670-bc75-5c2868e52ce0-env-overrides" seLinuxMountContext="" Dec 08 17:42:10 crc kubenswrapper[5116]: I1208 17:42:10.660888 5116 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6edfcf45-925b-4eff-b940-95b6fc0b85d4" volumeName="kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-idp-0-file-data" seLinuxMountContext="" Dec 08 17:42:10 crc kubenswrapper[5116]: I1208 17:42:10.660900 5116 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="869851b9-7ffb-4af0-b166-1d8aa40a5f80" volumeName="kubernetes.io/projected/869851b9-7ffb-4af0-b166-1d8aa40a5f80-kube-api-access-mjwtd" seLinuxMountContext="" Dec 08 17:42:10 crc kubenswrapper[5116]: I1208 17:42:10.660911 5116 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="94a6e063-3d1a-4d44-875d-185291448c31" volumeName="kubernetes.io/projected/94a6e063-3d1a-4d44-875d-185291448c31-kube-api-access-4hb7m" seLinuxMountContext="" Dec 08 17:42:10 crc kubenswrapper[5116]: I1208 17:42:10.660920 5116 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c5f2bfad-70f6-4185-a3d9-81ce12720767" volumeName="kubernetes.io/empty-dir/c5f2bfad-70f6-4185-a3d9-81ce12720767-tmp-dir" seLinuxMountContext="" Dec 08 17:42:10 crc kubenswrapper[5116]: I1208 17:42:10.660930 5116 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c5f2bfad-70f6-4185-a3d9-81ce12720767" volumeName="kubernetes.io/secret/c5f2bfad-70f6-4185-a3d9-81ce12720767-serving-cert" seLinuxMountContext="" Dec 08 17:42:10 crc kubenswrapper[5116]: I1208 17:42:10.660941 5116 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="301e1965-1754-483d-b6cc-bfae7038bbca" 
volumeName="kubernetes.io/secret/301e1965-1754-483d-b6cc-bfae7038bbca-profile-collector-cert" seLinuxMountContext="" Dec 08 17:42:10 crc kubenswrapper[5116]: I1208 17:42:10.660951 5116 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="567683bd-0efc-4f21-b076-e28559628404" volumeName="kubernetes.io/secret/567683bd-0efc-4f21-b076-e28559628404-serving-cert" seLinuxMountContext="" Dec 08 17:42:10 crc kubenswrapper[5116]: I1208 17:42:10.660961 5116 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a555ff2e-0be6-46d5-897d-863bb92ae2b3" volumeName="kubernetes.io/configmap/a555ff2e-0be6-46d5-897d-863bb92ae2b3-proxy-ca-bundles" seLinuxMountContext="" Dec 08 17:42:10 crc kubenswrapper[5116]: I1208 17:42:10.660972 5116 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="736c54fe-349c-4bb9-870a-d1c1d1c03831" volumeName="kubernetes.io/secret/736c54fe-349c-4bb9-870a-d1c1d1c03831-serving-cert" seLinuxMountContext="" Dec 08 17:42:10 crc kubenswrapper[5116]: I1208 17:42:10.660986 5116 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a555ff2e-0be6-46d5-897d-863bb92ae2b3" volumeName="kubernetes.io/secret/a555ff2e-0be6-46d5-897d-863bb92ae2b3-serving-cert" seLinuxMountContext="" Dec 08 17:42:10 crc kubenswrapper[5116]: I1208 17:42:10.660998 5116 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6077b63e-53a2-4f96-9d56-1ce0324e4913" volumeName="kubernetes.io/secret/6077b63e-53a2-4f96-9d56-1ce0324e4913-metrics-tls" seLinuxMountContext="" Dec 08 17:42:10 crc kubenswrapper[5116]: I1208 17:42:10.661008 5116 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca" volumeName="kubernetes.io/secret/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-console-serving-cert" 
seLinuxMountContext="" Dec 08 17:42:10 crc kubenswrapper[5116]: I1208 17:42:10.661017 5116 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d19cb085-0c5b-4810-b654-ce7923221d90" volumeName="kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-etcd-serving-ca" seLinuxMountContext="" Dec 08 17:42:10 crc kubenswrapper[5116]: I1208 17:42:10.661026 5116 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01080b46-74f1-4191-8755-5152a57b3b25" volumeName="kubernetes.io/projected/01080b46-74f1-4191-8755-5152a57b3b25-kube-api-access-w94wk" seLinuxMountContext="" Dec 08 17:42:10 crc kubenswrapper[5116]: I1208 17:42:10.661036 5116 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca" volumeName="kubernetes.io/projected/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-kube-api-access-l9stx" seLinuxMountContext="" Dec 08 17:42:10 crc kubenswrapper[5116]: I1208 17:42:10.661045 5116 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6edfcf45-925b-4eff-b940-95b6fc0b85d4" volumeName="kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-template-error" seLinuxMountContext="" Dec 08 17:42:10 crc kubenswrapper[5116]: I1208 17:42:10.661053 5116 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9f71a554-e414-4bc3-96d2-674060397afe" volumeName="kubernetes.io/projected/9f71a554-e414-4bc3-96d2-674060397afe-kube-api-access-ftwb6" seLinuxMountContext="" Dec 08 17:42:10 crc kubenswrapper[5116]: I1208 17:42:10.661066 5116 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="af33e427-6803-48c2-a76a-dd9deb7cbf9a" volumeName="kubernetes.io/configmap/af33e427-6803-48c2-a76a-dd9deb7cbf9a-env-overrides" seLinuxMountContext="" Dec 08 17:42:10 crc 
kubenswrapper[5116]: I1208 17:42:10.661076 5116 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f65c0ac1-8bca-454d-a2e6-e35cb418beac" volumeName="kubernetes.io/empty-dir/f65c0ac1-8bca-454d-a2e6-e35cb418beac-tmp-dir" seLinuxMountContext="" Dec 08 17:42:10 crc kubenswrapper[5116]: I1208 17:42:10.661087 5116 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20ce4d18-fe25-4696-ad7c-1bd2d6200a3e" volumeName="kubernetes.io/secret/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-image-registry-operator-tls" seLinuxMountContext="" Dec 08 17:42:10 crc kubenswrapper[5116]: I1208 17:42:10.661096 5116 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c491984c-7d4b-44aa-8c1e-d7974424fa47" volumeName="kubernetes.io/secret/c491984c-7d4b-44aa-8c1e-d7974424fa47-machine-api-operator-tls" seLinuxMountContext="" Dec 08 17:42:10 crc kubenswrapper[5116]: I1208 17:42:10.661107 5116 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="2325ffef-9d5b-447f-b00e-3efc429acefe" volumeName="kubernetes.io/configmap/2325ffef-9d5b-447f-b00e-3efc429acefe-trusted-ca" seLinuxMountContext="" Dec 08 17:42:10 crc kubenswrapper[5116]: I1208 17:42:10.661117 5116 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="593a3561-7760-45c5-8f91-5aaef7475d0f" volumeName="kubernetes.io/projected/593a3561-7760-45c5-8f91-5aaef7475d0f-kube-api-access-sbc2l" seLinuxMountContext="" Dec 08 17:42:10 crc kubenswrapper[5116]: I1208 17:42:10.661126 5116 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca" volumeName="kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-oauth-serving-cert" seLinuxMountContext="" Dec 08 17:42:10 crc kubenswrapper[5116]: I1208 17:42:10.661144 5116 reconstruct.go:130] 
"Volume is marked as uncertain and added into the actual state" pod="" podName="0b638b8f4bb0070e40528db779baf6a2" volumeName="kubernetes.io/empty-dir/0b638b8f4bb0070e40528db779baf6a2-tmp" seLinuxMountContext="" Dec 08 17:42:10 crc kubenswrapper[5116]: I1208 17:42:10.661191 5116 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ee8fbd3-1f81-4666-96da-5afc70819f1a" volumeName="kubernetes.io/projected/6ee8fbd3-1f81-4666-96da-5afc70819f1a-kube-api-access-d4tqq" seLinuxMountContext="" Dec 08 17:42:10 crc kubenswrapper[5116]: I1208 17:42:10.661202 5116 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6edfcf45-925b-4eff-b940-95b6fc0b85d4" volumeName="kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-ocp-branding-template" seLinuxMountContext="" Dec 08 17:42:10 crc kubenswrapper[5116]: I1208 17:42:10.661214 5116 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d45be74c-0d98-4d18-90e4-f7ef1b6daaf7" volumeName="kubernetes.io/empty-dir/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-tmp" seLinuxMountContext="" Dec 08 17:42:10 crc kubenswrapper[5116]: I1208 17:42:10.661224 5116 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="18f80adb-c1c3-49ba-8ee4-932c851d3897" volumeName="kubernetes.io/secret/18f80adb-c1c3-49ba-8ee4-932c851d3897-metrics-certs" seLinuxMountContext="" Dec 08 17:42:10 crc kubenswrapper[5116]: I1208 17:42:10.661234 5116 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff" volumeName="kubernetes.io/empty-dir/9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff-utilities" seLinuxMountContext="" Dec 08 17:42:10 crc kubenswrapper[5116]: I1208 17:42:10.661261 5116 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="af41de71-79cf-4590-bbe9-9e8b848862cb" volumeName="kubernetes.io/projected/af41de71-79cf-4590-bbe9-9e8b848862cb-kube-api-access-d7cps" seLinuxMountContext="" Dec 08 17:42:10 crc kubenswrapper[5116]: I1208 17:42:10.661270 5116 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fc8db2c7-859d-47b3-a900-2bd0c0b2973b" volumeName="kubernetes.io/configmap/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-config" seLinuxMountContext="" Dec 08 17:42:10 crc kubenswrapper[5116]: I1208 17:42:10.661281 5116 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="567683bd-0efc-4f21-b076-e28559628404" volumeName="kubernetes.io/projected/567683bd-0efc-4f21-b076-e28559628404-kube-api-access-m26jq" seLinuxMountContext="" Dec 08 17:42:10 crc kubenswrapper[5116]: I1208 17:42:10.661292 5116 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9f71a554-e414-4bc3-96d2-674060397afe" volumeName="kubernetes.io/projected/9f71a554-e414-4bc3-96d2-674060397afe-bound-sa-token" seLinuxMountContext="" Dec 08 17:42:10 crc kubenswrapper[5116]: I1208 17:42:10.661303 5116 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d19cb085-0c5b-4810-b654-ce7923221d90" volumeName="kubernetes.io/projected/d19cb085-0c5b-4810-b654-ce7923221d90-kube-api-access-m5lgh" seLinuxMountContext="" Dec 08 17:42:10 crc kubenswrapper[5116]: I1208 17:42:10.661314 5116 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="18f80adb-c1c3-49ba-8ee4-932c851d3897" volumeName="kubernetes.io/configmap/18f80adb-c1c3-49ba-8ee4-932c851d3897-service-ca-bundle" seLinuxMountContext="" Dec 08 17:42:10 crc kubenswrapper[5116]: I1208 17:42:10.661324 5116 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="34177974-8d82-49d2-a763-391d0df3bbd8" 
volumeName="kubernetes.io/secret/34177974-8d82-49d2-a763-391d0df3bbd8-metrics-tls" seLinuxMountContext=""
Dec 08 17:42:10 crc kubenswrapper[5116]: I1208 17:42:10.661334 5116 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d45be74c-0d98-4d18-90e4-f7ef1b6daaf7" volumeName="kubernetes.io/configmap/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-config" seLinuxMountContext=""
Dec 08 17:42:10 crc kubenswrapper[5116]: I1208 17:42:10.661347 5116 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20ce4d18-fe25-4696-ad7c-1bd2d6200a3e" volumeName="kubernetes.io/projected/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-kube-api-access-tkdh6" seLinuxMountContext=""
Dec 08 17:42:10 crc kubenswrapper[5116]: I1208 17:42:10.661357 5116 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9" volumeName="kubernetes.io/projected/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-kube-api-access-5lcfw" seLinuxMountContext=""
Dec 08 17:42:10 crc kubenswrapper[5116]: I1208 17:42:10.661367 5116 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9" volumeName="kubernetes.io/secret/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-serving-cert" seLinuxMountContext=""
Dec 08 17:42:10 crc kubenswrapper[5116]: I1208 17:42:10.661377 5116 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f559dfa3-3917-43a2-97f6-61ddfda10e93" volumeName="kubernetes.io/secret/f559dfa3-3917-43a2-97f6-61ddfda10e93-etcd-client" seLinuxMountContext=""
Dec 08 17:42:10 crc kubenswrapper[5116]: I1208 17:42:10.661387 5116 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f559dfa3-3917-43a2-97f6-61ddfda10e93" volumeName="kubernetes.io/secret/f559dfa3-3917-43a2-97f6-61ddfda10e93-serving-cert" seLinuxMountContext=""
Dec 08 17:42:10 crc kubenswrapper[5116]: I1208 17:42:10.661397 5116 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09cfa50b-4138-4585-a53e-64dd3ab73335" volumeName="kubernetes.io/configmap/09cfa50b-4138-4585-a53e-64dd3ab73335-config" seLinuxMountContext=""
Dec 08 17:42:10 crc kubenswrapper[5116]: I1208 17:42:10.661406 5116 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="149b3c48-e17c-4a66-a835-d86dabf6ff13" volumeName="kubernetes.io/projected/149b3c48-e17c-4a66-a835-d86dabf6ff13-kube-api-access-wj4qr" seLinuxMountContext=""
Dec 08 17:42:10 crc kubenswrapper[5116]: I1208 17:42:10.661416 5116 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="301e1965-1754-483d-b6cc-bfae7038bbca" volumeName="kubernetes.io/empty-dir/301e1965-1754-483d-b6cc-bfae7038bbca-tmpfs" seLinuxMountContext=""
Dec 08 17:42:10 crc kubenswrapper[5116]: I1208 17:42:10.661426 5116 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="736c54fe-349c-4bb9-870a-d1c1d1c03831" volumeName="kubernetes.io/configmap/736c54fe-349c-4bb9-870a-d1c1d1c03831-config" seLinuxMountContext=""
Dec 08 17:42:10 crc kubenswrapper[5116]: I1208 17:42:10.661437 5116 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a7a88189-c967-4640-879e-27665747f20c" volumeName="kubernetes.io/projected/a7a88189-c967-4640-879e-27665747f20c-kube-api-access-8nspp" seLinuxMountContext=""
Dec 08 17:42:10 crc kubenswrapper[5116]: I1208 17:42:10.661446 5116 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20ce4d18-fe25-4696-ad7c-1bd2d6200a3e" volumeName="kubernetes.io/empty-dir/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-tmp" seLinuxMountContext=""
Dec 08 17:42:10 crc kubenswrapper[5116]: I1208 17:42:10.661457 5116 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="2325ffef-9d5b-447f-b00e-3efc429acefe" volumeName="kubernetes.io/secret/2325ffef-9d5b-447f-b00e-3efc429acefe-serving-cert" seLinuxMountContext=""
Dec 08 17:42:10 crc kubenswrapper[5116]: I1208 17:42:10.661467 5116 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca" volumeName="kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-service-ca" seLinuxMountContext=""
Dec 08 17:42:10 crc kubenswrapper[5116]: I1208 17:42:10.661477 5116 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="af33e427-6803-48c2-a76a-dd9deb7cbf9a" volumeName="kubernetes.io/configmap/af33e427-6803-48c2-a76a-dd9deb7cbf9a-ovnkube-script-lib" seLinuxMountContext=""
Dec 08 17:42:10 crc kubenswrapper[5116]: I1208 17:42:10.661489 5116 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f559dfa3-3917-43a2-97f6-61ddfda10e93" volumeName="kubernetes.io/configmap/f559dfa3-3917-43a2-97f6-61ddfda10e93-trusted-ca-bundle" seLinuxMountContext=""
Dec 08 17:42:10 crc kubenswrapper[5116]: I1208 17:42:10.661500 5116 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9e9b5059-1b3e-4067-a63d-2952cbe863af" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" seLinuxMountContext=""
Dec 08 17:42:10 crc kubenswrapper[5116]: I1208 17:42:10.662571 5116 reconstruct.go:144] "Volume is marked device as uncertain and added into the actual state" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" deviceMountPath="/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/b1264ac67579ad07e7e9003054d44fe40dd55285a4b2f7dc74e48be1aee0868a/globalmount"
Dec 08 17:42:10 crc kubenswrapper[5116]: I1208 17:42:10.662597 5116 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a" volumeName="kubernetes.io/secret/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-marketplace-operator-metrics" seLinuxMountContext=""
Dec 08 17:42:10 crc kubenswrapper[5116]: I1208 17:42:10.662660 5116 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d565531a-ff86-4608-9d19-767de01ac31b" volumeName="kubernetes.io/configmap/d565531a-ff86-4608-9d19-767de01ac31b-images" seLinuxMountContext=""
Dec 08 17:42:10 crc kubenswrapper[5116]: I1208 17:42:10.662676 5116 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b605f283-6f2e-42da-a838-54421690f7d0" volumeName="kubernetes.io/projected/b605f283-6f2e-42da-a838-54421690f7d0-kube-api-access-6rmnv" seLinuxMountContext=""
Dec 08 17:42:10 crc kubenswrapper[5116]: I1208 17:42:10.662688 5116 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c5f2bfad-70f6-4185-a3d9-81ce12720767" volumeName="kubernetes.io/projected/c5f2bfad-70f6-4185-a3d9-81ce12720767-kube-api-access" seLinuxMountContext=""
Dec 08 17:42:10 crc kubenswrapper[5116]: I1208 17:42:10.662701 5116 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="16bdd140-dce1-464c-ab47-dd5798d1d256" volumeName="kubernetes.io/empty-dir/16bdd140-dce1-464c-ab47-dd5798d1d256-available-featuregates" seLinuxMountContext=""
Dec 08 17:42:10 crc kubenswrapper[5116]: I1208 17:42:10.662712 5116 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="584e1f4a-8205-47d7-8efb-3afc6017c4c9" volumeName="kubernetes.io/empty-dir/584e1f4a-8205-47d7-8efb-3afc6017c4c9-catalog-content" seLinuxMountContext=""
Dec 08 17:42:10 crc kubenswrapper[5116]: I1208 17:42:10.662724 5116 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="584e1f4a-8205-47d7-8efb-3afc6017c4c9" volumeName="kubernetes.io/projected/584e1f4a-8205-47d7-8efb-3afc6017c4c9-kube-api-access-tknt7" seLinuxMountContext=""
Dec 08 17:42:10 crc kubenswrapper[5116]: I1208 17:42:10.662737 5116 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7df94c10-441d-4386-93a6-6730fb7bcde0" volumeName="kubernetes.io/projected/7df94c10-441d-4386-93a6-6730fb7bcde0-kube-api-access-nmmzf" seLinuxMountContext=""
Dec 08 17:42:10 crc kubenswrapper[5116]: I1208 17:42:10.662749 5116 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="94a6e063-3d1a-4d44-875d-185291448c31" volumeName="kubernetes.io/empty-dir/94a6e063-3d1a-4d44-875d-185291448c31-utilities" seLinuxMountContext=""
Dec 08 17:42:10 crc kubenswrapper[5116]: I1208 17:42:10.662760 5116 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9e9b5059-1b3e-4067-a63d-2952cbe863af" volumeName="kubernetes.io/projected/9e9b5059-1b3e-4067-a63d-2952cbe863af-bound-sa-token" seLinuxMountContext=""
Dec 08 17:42:10 crc kubenswrapper[5116]: I1208 17:42:10.662773 5116 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d19cb085-0c5b-4810-b654-ce7923221d90" volumeName="kubernetes.io/secret/d19cb085-0c5b-4810-b654-ce7923221d90-encryption-config" seLinuxMountContext=""
Dec 08 17:42:10 crc kubenswrapper[5116]: I1208 17:42:10.662787 5116 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d19cb085-0c5b-4810-b654-ce7923221d90" volumeName="kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-config" seLinuxMountContext=""
Dec 08 17:42:10 crc kubenswrapper[5116]: I1208 17:42:10.662798 5116 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="18f80adb-c1c3-49ba-8ee4-932c851d3897" volumeName="kubernetes.io/secret/18f80adb-c1c3-49ba-8ee4-932c851d3897-stats-auth" seLinuxMountContext=""
Dec 08 17:42:10 crc kubenswrapper[5116]: I1208 17:42:10.662813 5116 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6edfcf45-925b-4eff-b940-95b6fc0b85d4" volumeName="kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-template-login" seLinuxMountContext=""
Dec 08 17:42:10 crc kubenswrapper[5116]: I1208 17:42:10.662824 5116 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7fcc6409-8a0f-44c3-89e7-5aecd7610f8a" volumeName="kubernetes.io/secret/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-srv-cert" seLinuxMountContext=""
Dec 08 17:42:10 crc kubenswrapper[5116]: I1208 17:42:10.662835 5116 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9f0bc7fcb0822a2c13eb2d22cd8c0641" volumeName="kubernetes.io/empty-dir/9f0bc7fcb0822a2c13eb2d22cd8c0641-tmp-dir" seLinuxMountContext=""
Dec 08 17:42:10 crc kubenswrapper[5116]: I1208 17:42:10.662847 5116 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a208c9c2-333b-4b4a-be0d-bc32ec38a821" volumeName="kubernetes.io/projected/a208c9c2-333b-4b4a-be0d-bc32ec38a821-kube-api-access-26xrl" seLinuxMountContext=""
Dec 08 17:42:10 crc kubenswrapper[5116]: I1208 17:42:10.662857 5116 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="af33e427-6803-48c2-a76a-dd9deb7cbf9a" volumeName="kubernetes.io/projected/af33e427-6803-48c2-a76a-dd9deb7cbf9a-kube-api-access-z5rsr" seLinuxMountContext=""
Dec 08 17:42:10 crc kubenswrapper[5116]: I1208 17:42:10.662868 5116 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e1d2a42d-af1d-4054-9618-ab545e0ed8b7" volumeName="kubernetes.io/secret/e1d2a42d-af1d-4054-9618-ab545e0ed8b7-proxy-tls" seLinuxMountContext=""
Dec 08 17:42:10 crc kubenswrapper[5116]: I1208 17:42:10.662879 5116 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="2325ffef-9d5b-447f-b00e-3efc429acefe" volumeName="kubernetes.io/configmap/2325ffef-9d5b-447f-b00e-3efc429acefe-config" seLinuxMountContext=""
Dec 08 17:42:10 crc kubenswrapper[5116]: I1208 17:42:10.662890 5116 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="2325ffef-9d5b-447f-b00e-3efc429acefe" volumeName="kubernetes.io/projected/2325ffef-9d5b-447f-b00e-3efc429acefe-kube-api-access-zg8nc" seLinuxMountContext=""
Dec 08 17:42:10 crc kubenswrapper[5116]: I1208 17:42:10.662906 5116 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3a14caf222afb62aaabdc47808b6f944" volumeName="kubernetes.io/empty-dir/3a14caf222afb62aaabdc47808b6f944-ca-bundle-dir" seLinuxMountContext=""
Dec 08 17:42:10 crc kubenswrapper[5116]: I1208 17:42:10.662917 5116 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="42a11a02-47e1-488f-b270-2679d3298b0e" volumeName="kubernetes.io/secret/42a11a02-47e1-488f-b270-2679d3298b0e-control-plane-machine-set-operator-tls" seLinuxMountContext=""
Dec 08 17:42:10 crc kubenswrapper[5116]: I1208 17:42:10.662928 5116 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a52afe44-fb37-46ed-a1f8-bf39727a3cbe" volumeName="kubernetes.io/projected/a52afe44-fb37-46ed-a1f8-bf39727a3cbe-kube-api-access-rzt4w" seLinuxMountContext=""
Dec 08 17:42:10 crc kubenswrapper[5116]: I1208 17:42:10.662938 5116 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c5f2bfad-70f6-4185-a3d9-81ce12720767" volumeName="kubernetes.io/configmap/c5f2bfad-70f6-4185-a3d9-81ce12720767-config" seLinuxMountContext=""
Dec 08 17:42:10 crc kubenswrapper[5116]: I1208 17:42:10.662949 5116 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7fcc6409-8a0f-44c3-89e7-5aecd7610f8a" volumeName="kubernetes.io/secret/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-profile-collector-cert" seLinuxMountContext=""
Dec 08 17:42:10 crc kubenswrapper[5116]: I1208 17:42:10.662982 5116 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="17b87002-b798-480a-8e17-83053d698239" volumeName="kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b" seLinuxMountContext=""
Dec 08 17:42:10 crc kubenswrapper[5116]: I1208 17:42:10.662994 5116 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9e9b5059-1b3e-4067-a63d-2952cbe863af" volumeName="kubernetes.io/configmap/9e9b5059-1b3e-4067-a63d-2952cbe863af-trusted-ca" seLinuxMountContext=""
Dec 08 17:42:10 crc kubenswrapper[5116]: I1208 17:42:10.663004 5116 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d19cb085-0c5b-4810-b654-ce7923221d90" volumeName="kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-image-import-ca" seLinuxMountContext=""
Dec 08 17:42:10 crc kubenswrapper[5116]: I1208 17:42:10.663014 5116 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fc4541ce-7789-4670-bc75-5c2868e52ce0" volumeName="kubernetes.io/secret/fc4541ce-7789-4670-bc75-5c2868e52ce0-webhook-cert" seLinuxMountContext=""
Dec 08 17:42:10 crc kubenswrapper[5116]: I1208 17:42:10.663024 5116 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31fa8943-81cc-4750-a0b7-0fa9ab5af883" volumeName="kubernetes.io/projected/31fa8943-81cc-4750-a0b7-0fa9ab5af883-kube-api-access-grwfz" seLinuxMountContext=""
Dec 08 17:42:10 crc kubenswrapper[5116]: I1208 17:42:10.663034 5116 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="869851b9-7ffb-4af0-b166-1d8aa40a5f80" volumeName="kubernetes.io/configmap/869851b9-7ffb-4af0-b166-1d8aa40a5f80-cni-sysctl-allowlist" seLinuxMountContext=""
Dec 08 17:42:10 crc kubenswrapper[5116]: I1208 17:42:10.663043 5116 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f7e2c886-118e-43bb-bef1-c78134de392b" volumeName="kubernetes.io/empty-dir/f7e2c886-118e-43bb-bef1-c78134de392b-tmp-dir" seLinuxMountContext=""
Dec 08 17:42:10 crc kubenswrapper[5116]: I1208 17:42:10.663053 5116 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ee8fbd3-1f81-4666-96da-5afc70819f1a" volumeName="kubernetes.io/secret/6ee8fbd3-1f81-4666-96da-5afc70819f1a-samples-operator-tls" seLinuxMountContext=""
Dec 08 17:42:10 crc kubenswrapper[5116]: I1208 17:42:10.663062 5116 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01080b46-74f1-4191-8755-5152a57b3b25" volumeName="kubernetes.io/configmap/01080b46-74f1-4191-8755-5152a57b3b25-config" seLinuxMountContext=""
Dec 08 17:42:10 crc kubenswrapper[5116]: I1208 17:42:10.663072 5116 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="567683bd-0efc-4f21-b076-e28559628404" volumeName="kubernetes.io/configmap/567683bd-0efc-4f21-b076-e28559628404-etcd-ca" seLinuxMountContext=""
Dec 08 17:42:10 crc kubenswrapper[5116]: I1208 17:42:10.663081 5116 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="567683bd-0efc-4f21-b076-e28559628404" volumeName="kubernetes.io/configmap/567683bd-0efc-4f21-b076-e28559628404-etcd-service-ca" seLinuxMountContext=""
Dec 08 17:42:10 crc kubenswrapper[5116]: I1208 17:42:10.663089 5116 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="869851b9-7ffb-4af0-b166-1d8aa40a5f80" volumeName="kubernetes.io/configmap/869851b9-7ffb-4af0-b166-1d8aa40a5f80-whereabouts-flatfile-configmap" seLinuxMountContext=""
Dec 08 17:42:10 crc kubenswrapper[5116]: I1208 17:42:10.663098 5116 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ce090a97-9ab6-4c40-a719-64ff2acd9778" volumeName="kubernetes.io/projected/ce090a97-9ab6-4c40-a719-64ff2acd9778-kube-api-access-xnxbn" seLinuxMountContext=""
Dec 08 17:42:10 crc kubenswrapper[5116]: I1208 17:42:10.663109 5116 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="18f80adb-c1c3-49ba-8ee4-932c851d3897" volumeName="kubernetes.io/secret/18f80adb-c1c3-49ba-8ee4-932c851d3897-default-certificate" seLinuxMountContext=""
Dec 08 17:42:10 crc kubenswrapper[5116]: I1208 17:42:10.663117 5116 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9e9b5059-1b3e-4067-a63d-2952cbe863af" volumeName="kubernetes.io/empty-dir/9e9b5059-1b3e-4067-a63d-2952cbe863af-ca-trust-extracted" seLinuxMountContext=""
Dec 08 17:42:10 crc kubenswrapper[5116]: I1208 17:42:10.663128 5116 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d45be74c-0d98-4d18-90e4-f7ef1b6daaf7" volumeName="kubernetes.io/secret/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-serving-cert" seLinuxMountContext=""
Dec 08 17:42:10 crc kubenswrapper[5116]: I1208 17:42:10.663138 5116 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="428b39f5-eb1c-4f65-b7a4-eeb6e84860cc" volumeName="kubernetes.io/configmap/428b39f5-eb1c-4f65-b7a4-eeb6e84860cc-iptables-alerter-script" seLinuxMountContext=""
Dec 08 17:42:10 crc kubenswrapper[5116]: I1208 17:42:10.663170 5116 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="81e39f7b-62e4-4fc9-992a-6535ce127a02" volumeName="kubernetes.io/configmap/81e39f7b-62e4-4fc9-992a-6535ce127a02-multus-daemon-config" seLinuxMountContext=""
Dec 08 17:42:10 crc kubenswrapper[5116]: I1208 17:42:10.663180 5116 reconstruct.go:97] "Volume reconstruction finished"
Dec 08 17:42:10 crc kubenswrapper[5116]: I1208 17:42:10.663188 5116 reconciler.go:26] "Reconciler: start to sync state"
Dec 08 17:42:10 crc kubenswrapper[5116]: E1208 17:42:10.669425 5116 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 08 17:42:10 crc kubenswrapper[5116]: I1208 17:42:10.676461 5116 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Dec 08 17:42:10 crc kubenswrapper[5116]: I1208 17:42:10.676711 5116 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4"
Dec 08 17:42:10 crc kubenswrapper[5116]: I1208 17:42:10.678478 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 08 17:42:10 crc kubenswrapper[5116]: I1208 17:42:10.678539 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 08 17:42:10 crc kubenswrapper[5116]: I1208 17:42:10.678554 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 08 17:42:10 crc kubenswrapper[5116]: I1208 17:42:10.678707 5116 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6"
Dec 08 17:42:10 crc kubenswrapper[5116]: I1208 17:42:10.678765 5116 status_manager.go:230] "Starting to sync pod status with apiserver"
Dec 08 17:42:10 crc kubenswrapper[5116]: I1208 17:42:10.678799 5116 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Dec 08 17:42:10 crc kubenswrapper[5116]: I1208 17:42:10.678815 5116 kubelet.go:2451] "Starting kubelet main sync loop"
Dec 08 17:42:10 crc kubenswrapper[5116]: E1208 17:42:10.678908 5116 kubelet.go:2475] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Dec 08 17:42:10 crc kubenswrapper[5116]: I1208 17:42:10.679882 5116 cpu_manager.go:222] "Starting CPU manager" policy="none"
Dec 08 17:42:10 crc kubenswrapper[5116]: I1208 17:42:10.679930 5116 cpu_manager.go:223] "Reconciling" reconcilePeriod="10s"
Dec 08 17:42:10 crc kubenswrapper[5116]: I1208 17:42:10.679970 5116 state_mem.go:36] "Initialized new in-memory state store"
Dec 08 17:42:10 crc kubenswrapper[5116]: E1208 17:42:10.680622 5116 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.128:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass"
Dec 08 17:42:10 crc kubenswrapper[5116]: I1208 17:42:10.688286 5116 policy_none.go:49] "None policy: Start"
Dec 08 17:42:10 crc kubenswrapper[5116]: I1208 17:42:10.688343 5116 memory_manager.go:186] "Starting memorymanager" policy="None"
Dec 08 17:42:10 crc kubenswrapper[5116]: I1208 17:42:10.688369 5116 state_mem.go:35] "Initializing new in-memory state store"
Dec 08 17:42:10 crc kubenswrapper[5116]: I1208 17:42:10.747288 5116 manager.go:341] "Starting Device Plugin manager"
Dec 08 17:42:10 crc kubenswrapper[5116]: E1208 17:42:10.747747 5116 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint"
Dec 08 17:42:10 crc kubenswrapper[5116]: I1208 17:42:10.747789 5116 server.go:85] "Starting device plugin registration server"
Dec 08 17:42:10 crc kubenswrapper[5116]: I1208 17:42:10.748548 5116 eviction_manager.go:189] "Eviction manager: starting control loop"
Dec 08 17:42:10 crc kubenswrapper[5116]: I1208 17:42:10.748575 5116 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Dec 08 17:42:10 crc kubenswrapper[5116]: I1208 17:42:10.748804 5116 plugin_watcher.go:51] "Plugin Watcher Start" path="/var/lib/kubelet/plugins_registry"
Dec 08 17:42:10 crc kubenswrapper[5116]: I1208 17:42:10.749064 5116 plugin_manager.go:116] "The desired_state_of_world populator (plugin watcher) starts"
Dec 08 17:42:10 crc kubenswrapper[5116]: I1208 17:42:10.749077 5116 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Dec 08 17:42:10 crc kubenswrapper[5116]: E1208 17:42:10.753026 5116 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="non-existent label \"crio-containers\""
Dec 08 17:42:10 crc kubenswrapper[5116]: E1208 17:42:10.753128 5116 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found"
Dec 08 17:42:10 crc kubenswrapper[5116]: E1208 17:42:10.770095 5116 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.128:6443: connect: connection refused" interval="400ms"
Dec 08 17:42:10 crc kubenswrapper[5116]: I1208 17:42:10.779390 5116 kubelet.go:2537] "SyncLoop ADD" source="file" pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-crc","openshift-etcd/etcd-crc","openshift-kube-apiserver/kube-apiserver-crc","openshift-kube-controller-manager/kube-controller-manager-crc","openshift-kube-scheduler/openshift-kube-scheduler-crc"]
Dec 08 17:42:10 crc kubenswrapper[5116]: I1208 17:42:10.779633 5116 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Dec 08 17:42:10 crc kubenswrapper[5116]: I1208 17:42:10.780767 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 08 17:42:10 crc kubenswrapper[5116]: I1208 17:42:10.780902 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 08 17:42:10 crc kubenswrapper[5116]: I1208 17:42:10.780970 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 08 17:42:10 crc kubenswrapper[5116]: I1208 17:42:10.781819 5116 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Dec 08 17:42:10 crc kubenswrapper[5116]: I1208 17:42:10.782029 5116 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc"
Dec 08 17:42:10 crc kubenswrapper[5116]: I1208 17:42:10.782070 5116 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Dec 08 17:42:10 crc kubenswrapper[5116]: I1208 17:42:10.782519 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 08 17:42:10 crc kubenswrapper[5116]: I1208 17:42:10.782540 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 08 17:42:10 crc kubenswrapper[5116]: I1208 17:42:10.782550 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 08 17:42:10 crc kubenswrapper[5116]: I1208 17:42:10.782602 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 08 17:42:10 crc kubenswrapper[5116]: I1208 17:42:10.782642 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 08 17:42:10 crc kubenswrapper[5116]: I1208 17:42:10.782654 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 08 17:42:10 crc kubenswrapper[5116]: I1208 17:42:10.783066 5116 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Dec 08 17:42:10 crc kubenswrapper[5116]: I1208 17:42:10.783435 5116 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-crc"
Dec 08 17:42:10 crc kubenswrapper[5116]: I1208 17:42:10.783520 5116 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Dec 08 17:42:10 crc kubenswrapper[5116]: I1208 17:42:10.783683 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 08 17:42:10 crc kubenswrapper[5116]: I1208 17:42:10.783793 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 08 17:42:10 crc kubenswrapper[5116]: I1208 17:42:10.783810 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 08 17:42:10 crc kubenswrapper[5116]: I1208 17:42:10.784400 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 08 17:42:10 crc kubenswrapper[5116]: I1208 17:42:10.784552 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 08 17:42:10 crc kubenswrapper[5116]: I1208 17:42:10.784633 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 08 17:42:10 crc kubenswrapper[5116]: I1208 17:42:10.784760 5116 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Dec 08 17:42:10 crc kubenswrapper[5116]: I1208 17:42:10.784815 5116 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc"
Dec 08 17:42:10 crc kubenswrapper[5116]: I1208 17:42:10.784845 5116 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Dec 08 17:42:10 crc kubenswrapper[5116]: I1208 17:42:10.785300 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 08 17:42:10 crc kubenswrapper[5116]: I1208 17:42:10.785319 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 08 17:42:10 crc kubenswrapper[5116]: I1208 17:42:10.785347 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 08 17:42:10 crc kubenswrapper[5116]: I1208 17:42:10.785392 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 08 17:42:10 crc kubenswrapper[5116]: I1208 17:42:10.785427 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 08 17:42:10 crc kubenswrapper[5116]: I1208 17:42:10.785438 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 08 17:42:10 crc kubenswrapper[5116]: I1208 17:42:10.786114 5116 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Dec 08 17:42:10 crc kubenswrapper[5116]: I1208 17:42:10.786360 5116 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Dec 08 17:42:10 crc kubenswrapper[5116]: I1208 17:42:10.786417 5116 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Dec 08 17:42:10 crc kubenswrapper[5116]: I1208 17:42:10.786965 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 08 17:42:10 crc kubenswrapper[5116]: I1208 17:42:10.786995 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 08 17:42:10 crc kubenswrapper[5116]: I1208 17:42:10.787014 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 08 17:42:10 crc kubenswrapper[5116]: I1208 17:42:10.787038 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 08 17:42:10 crc kubenswrapper[5116]: I1208 17:42:10.787016 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 08 17:42:10 crc kubenswrapper[5116]: I1208 17:42:10.787131 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 08 17:42:10 crc kubenswrapper[5116]: I1208 17:42:10.789394 5116 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Dec 08 17:42:10 crc kubenswrapper[5116]: I1208 17:42:10.790626 5116 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Dec 08 17:42:10 crc kubenswrapper[5116]: I1208 17:42:10.792915 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 08 17:42:10 crc kubenswrapper[5116]: I1208 17:42:10.792976 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 08 17:42:10 crc kubenswrapper[5116]: I1208 17:42:10.793004 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 08 17:42:10 crc kubenswrapper[5116]: E1208 17:42:10.812917 5116 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Dec 08 17:42:10 crc kubenswrapper[5116]: E1208 17:42:10.820399 5116 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Dec 08 17:42:10 crc kubenswrapper[5116]: E1208 17:42:10.838662 5116 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Dec 08 17:42:10 crc kubenswrapper[5116]: I1208 17:42:10.849169 5116 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Dec 08 17:42:10 crc kubenswrapper[5116]: I1208 17:42:10.850972 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 08 17:42:10 crc kubenswrapper[5116]: I1208 17:42:10.851057 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 08 17:42:10 crc kubenswrapper[5116]: I1208 17:42:10.851075 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 08 17:42:10 crc kubenswrapper[5116]: I1208 17:42:10.851117 5116 kubelet_node_status.go:78] "Attempting to register node" node="crc"
Dec 08 17:42:10 crc kubenswrapper[5116]: E1208 17:42:10.852093 5116 kubelet_node_status.go:110] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.128:6443: connect: connection refused" node="crc"
Dec 08 17:42:10 crc kubenswrapper[5116]: E1208 17:42:10.857500 5116 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Dec 08 17:42:10 crc kubenswrapper[5116]: E1208 17:42:10.864288 5116 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Dec 08 17:42:10 crc kubenswrapper[5116]: I1208 17:42:10.968927 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-auto-backup-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-etcd-auto-backup-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc"
Dec 08 17:42:10 crc kubenswrapper[5116]: I1208 17:42:10.969325 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/20c5c5b4bed930554494851fe3cb2b2a-tmp-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc"
Dec 08 17:42:10 crc kubenswrapper[5116]: I1208 17:42:10.969454 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Dec 08 17:42:10 crc kubenswrapper[5116]: I1208 17:42:10.969556 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/3a14caf222afb62aaabdc47808b6f944-tmp-dir\") pod \"kube-apiserver-crc\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Dec 08 17:42:10 crc kubenswrapper[5116]: I1208 17:42:10.969591 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ca-trust-dir\" (UniqueName: \"kubernetes.io/empty-dir/9f0bc7fcb0822a2c13eb2d22cd8c0641-ca-trust-dir\") pod \"kube-controller-manager-crc\" (UID: \"9f0bc7fcb0822a2c13eb2d22cd8c0641\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Dec 08 17:42:10 crc kubenswrapper[5116]: I1208 17:42:10.969616 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"var-run-kubernetes\" (UniqueName: \"kubernetes.io/empty-dir/9f0bc7fcb0822a2c13eb2d22cd8c0641-var-run-kubernetes\") pod \"kube-controller-manager-crc\" (UID: \"9f0bc7fcb0822a2c13eb2d22cd8c0641\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Dec 08 17:42:10 crc kubenswrapper[5116]: I1208 17:42:10.969639 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/4e08c320b1e9e2405e6e0107bdf7eeb4-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"4e08c320b1e9e2405e6e0107bdf7eeb4\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc"
Dec 08 17:42:10 crc kubenswrapper[5116]: I1208 17:42:10.969658 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Dec 08 17:42:10 crc kubenswrapper[5116]: I1208 17:42:10.969678 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ca-bundle-dir\" (UniqueName: \"kubernetes.io/empty-dir/3a14caf222afb62aaabdc47808b6f944-ca-bundle-dir\") pod \"kube-apiserver-crc\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Dec 08 17:42:10 crc kubenswrapper[5116]: I1208 17:42:10.969698 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/9f0bc7fcb0822a2c13eb2d22cd8c0641-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"9f0bc7fcb0822a2c13eb2d22cd8c0641\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Dec 08 17:42:10 crc kubenswrapper[5116]: I1208 17:42:10.969717 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/0b638b8f4bb0070e40528db779baf6a2-tmp\") pod \"openshift-kube-scheduler-crc\" (UID: \"0b638b8f4bb0070e40528db779baf6a2\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Dec 08 17:42:10 crc kubenswrapper[5116]: I1208 17:42:10.969734 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/0b638b8f4bb0070e40528db779baf6a2-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"0b638b8f4bb0070e40528db779baf6a2\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Dec 08 17:42:10 crc kubenswrapper[5116]: I1208 17:42:10.969763 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/0b638b8f4bb0070e40528db779baf6a2-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"0b638b8f4bb0070e40528db779baf6a2\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Dec 08 17:42:10 crc kubenswrapper[5116]: I1208
17:42:10.969883 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-resource-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc" Dec 08 17:42:10 crc kubenswrapper[5116]: I1208 17:42:10.970079 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-cert-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc" Dec 08 17:42:10 crc kubenswrapper[5116]: I1208 17:42:10.970112 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-data-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc" Dec 08 17:42:10 crc kubenswrapper[5116]: I1208 17:42:10.970141 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-log-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc" Dec 08 17:42:10 crc kubenswrapper[5116]: I1208 17:42:10.970264 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 08 17:42:10 crc kubenswrapper[5116]: I1208 17:42:10.970332 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: 
\"kubernetes.io/host-path/9f0bc7fcb0822a2c13eb2d22cd8c0641-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"9f0bc7fcb0822a2c13eb2d22cd8c0641\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 08 17:42:10 crc kubenswrapper[5116]: I1208 17:42:10.970368 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/3a14caf222afb62aaabdc47808b6f944-tmp-dir\") pod \"kube-apiserver-crc\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 08 17:42:10 crc kubenswrapper[5116]: I1208 17:42:10.970417 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"var-run-kubernetes\" (UniqueName: \"kubernetes.io/empty-dir/9f0bc7fcb0822a2c13eb2d22cd8c0641-var-run-kubernetes\") pod \"kube-controller-manager-crc\" (UID: \"9f0bc7fcb0822a2c13eb2d22cd8c0641\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 08 17:42:10 crc kubenswrapper[5116]: I1208 17:42:10.970378 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/9f0bc7fcb0822a2c13eb2d22cd8c0641-tmp-dir\") pod \"kube-controller-manager-crc\" (UID: \"9f0bc7fcb0822a2c13eb2d22cd8c0641\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 08 17:42:10 crc kubenswrapper[5116]: I1208 17:42:10.970500 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-usr-local-bin\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc" Dec 08 17:42:10 crc kubenswrapper[5116]: I1208 17:42:10.970652 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/9f0bc7fcb0822a2c13eb2d22cd8c0641-tmp-dir\") pod 
\"kube-controller-manager-crc\" (UID: \"9f0bc7fcb0822a2c13eb2d22cd8c0641\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 08 17:42:10 crc kubenswrapper[5116]: I1208 17:42:10.970676 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/4e08c320b1e9e2405e6e0107bdf7eeb4-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"4e08c320b1e9e2405e6e0107bdf7eeb4\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Dec 08 17:42:10 crc kubenswrapper[5116]: I1208 17:42:10.970727 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ca-bundle-dir\" (UniqueName: \"kubernetes.io/empty-dir/3a14caf222afb62aaabdc47808b6f944-ca-bundle-dir\") pod \"kube-apiserver-crc\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 08 17:42:10 crc kubenswrapper[5116]: I1208 17:42:10.970754 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-static-pod-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc" Dec 08 17:42:10 crc kubenswrapper[5116]: I1208 17:42:10.970850 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ca-trust-dir\" (UniqueName: \"kubernetes.io/empty-dir/9f0bc7fcb0822a2c13eb2d22cd8c0641-ca-trust-dir\") pod \"kube-controller-manager-crc\" (UID: \"9f0bc7fcb0822a2c13eb2d22cd8c0641\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 08 17:42:10 crc kubenswrapper[5116]: I1208 17:42:10.970932 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/20c5c5b4bed930554494851fe3cb2b2a-tmp-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc" Dec 
08 17:42:10 crc kubenswrapper[5116]: I1208 17:42:10.971120 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/0b638b8f4bb0070e40528db779baf6a2-tmp\") pod \"openshift-kube-scheduler-crc\" (UID: \"0b638b8f4bb0070e40528db779baf6a2\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Dec 08 17:42:11 crc kubenswrapper[5116]: I1208 17:42:11.052973 5116 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 08 17:42:11 crc kubenswrapper[5116]: I1208 17:42:11.054359 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 17:42:11 crc kubenswrapper[5116]: I1208 17:42:11.054407 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 17:42:11 crc kubenswrapper[5116]: I1208 17:42:11.054419 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 17:42:11 crc kubenswrapper[5116]: I1208 17:42:11.054452 5116 kubelet_node_status.go:78] "Attempting to register node" node="crc" Dec 08 17:42:11 crc kubenswrapper[5116]: E1208 17:42:11.055328 5116 kubelet_node_status.go:110] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.128:6443: connect: connection refused" node="crc" Dec 08 17:42:11 crc kubenswrapper[5116]: I1208 17:42:11.072019 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 08 17:42:11 crc kubenswrapper[5116]: I1208 17:42:11.072076 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"var-lib-kubelet\" (UniqueName: 
\"kubernetes.io/host-path/4e08c320b1e9e2405e6e0107bdf7eeb4-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"4e08c320b1e9e2405e6e0107bdf7eeb4\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Dec 08 17:42:11 crc kubenswrapper[5116]: I1208 17:42:11.072099 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 08 17:42:11 crc kubenswrapper[5116]: I1208 17:42:11.072122 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/9f0bc7fcb0822a2c13eb2d22cd8c0641-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"9f0bc7fcb0822a2c13eb2d22cd8c0641\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 08 17:42:11 crc kubenswrapper[5116]: I1208 17:42:11.072144 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/0b638b8f4bb0070e40528db779baf6a2-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"0b638b8f4bb0070e40528db779baf6a2\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Dec 08 17:42:11 crc kubenswrapper[5116]: I1208 17:42:11.072163 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/0b638b8f4bb0070e40528db779baf6a2-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"0b638b8f4bb0070e40528db779baf6a2\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Dec 08 17:42:11 crc kubenswrapper[5116]: I1208 17:42:11.072172 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-audit-dir\") pod 
\"kube-apiserver-crc\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 08 17:42:11 crc kubenswrapper[5116]: I1208 17:42:11.072235 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/9f0bc7fcb0822a2c13eb2d22cd8c0641-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"9f0bc7fcb0822a2c13eb2d22cd8c0641\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 08 17:42:11 crc kubenswrapper[5116]: I1208 17:42:11.072273 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-resource-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc" Dec 08 17:42:11 crc kubenswrapper[5116]: I1208 17:42:11.072283 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/4e08c320b1e9e2405e6e0107bdf7eeb4-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"4e08c320b1e9e2405e6e0107bdf7eeb4\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Dec 08 17:42:11 crc kubenswrapper[5116]: I1208 17:42:11.072333 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-cert-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc" Dec 08 17:42:11 crc kubenswrapper[5116]: I1208 17:42:11.072298 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 08 17:42:11 crc kubenswrapper[5116]: I1208 17:42:11.072350 5116 
operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/0b638b8f4bb0070e40528db779baf6a2-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"0b638b8f4bb0070e40528db779baf6a2\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Dec 08 17:42:11 crc kubenswrapper[5116]: I1208 17:42:11.072349 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/0b638b8f4bb0070e40528db779baf6a2-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"0b638b8f4bb0070e40528db779baf6a2\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Dec 08 17:42:11 crc kubenswrapper[5116]: I1208 17:42:11.072363 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-resource-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc" Dec 08 17:42:11 crc kubenswrapper[5116]: I1208 17:42:11.072305 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-cert-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc" Dec 08 17:42:11 crc kubenswrapper[5116]: I1208 17:42:11.072502 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-data-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc" Dec 08 17:42:11 crc kubenswrapper[5116]: I1208 17:42:11.072525 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-log-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " 
pod="openshift-etcd/etcd-crc" Dec 08 17:42:11 crc kubenswrapper[5116]: I1208 17:42:11.072545 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 08 17:42:11 crc kubenswrapper[5116]: I1208 17:42:11.072566 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/9f0bc7fcb0822a2c13eb2d22cd8c0641-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"9f0bc7fcb0822a2c13eb2d22cd8c0641\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 08 17:42:11 crc kubenswrapper[5116]: I1208 17:42:11.072598 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-usr-local-bin\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc" Dec 08 17:42:11 crc kubenswrapper[5116]: I1208 17:42:11.072616 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-data-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc" Dec 08 17:42:11 crc kubenswrapper[5116]: I1208 17:42:11.072647 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-usr-local-bin\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc" Dec 08 17:42:11 crc kubenswrapper[5116]: I1208 17:42:11.072624 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etc-kube\" (UniqueName: 
\"kubernetes.io/host-path/4e08c320b1e9e2405e6e0107bdf7eeb4-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"4e08c320b1e9e2405e6e0107bdf7eeb4\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Dec 08 17:42:11 crc kubenswrapper[5116]: I1208 17:42:11.072661 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/4e08c320b1e9e2405e6e0107bdf7eeb4-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"4e08c320b1e9e2405e6e0107bdf7eeb4\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Dec 08 17:42:11 crc kubenswrapper[5116]: I1208 17:42:11.072687 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-static-pod-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc" Dec 08 17:42:11 crc kubenswrapper[5116]: I1208 17:42:11.072691 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-log-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc" Dec 08 17:42:11 crc kubenswrapper[5116]: I1208 17:42:11.072698 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/9f0bc7fcb0822a2c13eb2d22cd8c0641-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"9f0bc7fcb0822a2c13eb2d22cd8c0641\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 08 17:42:11 crc kubenswrapper[5116]: I1208 17:42:11.072714 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-static-pod-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc" Dec 08 
17:42:11 crc kubenswrapper[5116]: I1208 17:42:11.072713 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etcd-auto-backup-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-etcd-auto-backup-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc" Dec 08 17:42:11 crc kubenswrapper[5116]: I1208 17:42:11.072734 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etcd-auto-backup-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-etcd-auto-backup-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc" Dec 08 17:42:11 crc kubenswrapper[5116]: I1208 17:42:11.072645 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 08 17:42:11 crc kubenswrapper[5116]: I1208 17:42:11.113724 5116 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Dec 08 17:42:11 crc kubenswrapper[5116]: I1208 17:42:11.122034 5116 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 08 17:42:11 crc kubenswrapper[5116]: I1208 17:42:11.140448 5116 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd/etcd-crc" Dec 08 17:42:11 crc kubenswrapper[5116]: W1208 17:42:11.146879 5116 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9f0bc7fcb0822a2c13eb2d22cd8c0641.slice/crio-b4c52e7a102e2a95ad3a2bdd561599d82b7045eef7de628f4686fa4f6942c2fa WatchSource:0}: Error finding container b4c52e7a102e2a95ad3a2bdd561599d82b7045eef7de628f4686fa4f6942c2fa: Status 404 returned error can't find the container with id b4c52e7a102e2a95ad3a2bdd561599d82b7045eef7de628f4686fa4f6942c2fa Dec 08 17:42:11 crc kubenswrapper[5116]: I1208 17:42:11.154418 5116 provider.go:93] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Dec 08 17:42:11 crc kubenswrapper[5116]: W1208 17:42:11.155170 5116 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4e08c320b1e9e2405e6e0107bdf7eeb4.slice/crio-5ed1ff8ea8d9f5f6dda4464d53e1e4cc3489e8eb16153897befbf5f79ffca4c7 WatchSource:0}: Error finding container 5ed1ff8ea8d9f5f6dda4464d53e1e4cc3489e8eb16153897befbf5f79ffca4c7: Status 404 returned error can't find the container with id 5ed1ff8ea8d9f5f6dda4464d53e1e4cc3489e8eb16153897befbf5f79ffca4c7 Dec 08 17:42:11 crc kubenswrapper[5116]: I1208 17:42:11.157990 5116 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 08 17:42:11 crc kubenswrapper[5116]: I1208 17:42:11.174109 5116 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Dec 08 17:42:11 crc kubenswrapper[5116]: E1208 17:42:11.174346 5116 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.128:6443: connect: connection refused" interval="800ms" Dec 08 17:42:11 crc kubenswrapper[5116]: W1208 17:42:11.183589 5116 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod20c5c5b4bed930554494851fe3cb2b2a.slice/crio-d37059b8a4c6441ce123b2997bba2bdacf6f691ef1a5091fafa601ddf373a8b0 WatchSource:0}: Error finding container d37059b8a4c6441ce123b2997bba2bdacf6f691ef1a5091fafa601ddf373a8b0: Status 404 returned error can't find the container with id d37059b8a4c6441ce123b2997bba2bdacf6f691ef1a5091fafa601ddf373a8b0 Dec 08 17:42:11 crc kubenswrapper[5116]: W1208 17:42:11.199409 5116 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0b638b8f4bb0070e40528db779baf6a2.slice/crio-3a47f1a144424d0daeaf6e0458a84f8c9b856fec5c5cd9b8755c0932cd50c998 WatchSource:0}: Error finding container 3a47f1a144424d0daeaf6e0458a84f8c9b856fec5c5cd9b8755c0932cd50c998: Status 404 returned error can't find the container with id 3a47f1a144424d0daeaf6e0458a84f8c9b856fec5c5cd9b8755c0932cd50c998 Dec 08 17:42:11 crc kubenswrapper[5116]: W1208 17:42:11.203489 5116 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3a14caf222afb62aaabdc47808b6f944.slice/crio-fad2ea99efd55dade15583ade8c8dc191e87b00f827e652f01f5f714c4db9287 WatchSource:0}: Error finding container fad2ea99efd55dade15583ade8c8dc191e87b00f827e652f01f5f714c4db9287: Status 404 returned error can't find the container with id 
fad2ea99efd55dade15583ade8c8dc191e87b00f827e652f01f5f714c4db9287 Dec 08 17:42:11 crc kubenswrapper[5116]: I1208 17:42:11.455943 5116 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 08 17:42:11 crc kubenswrapper[5116]: I1208 17:42:11.459470 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 17:42:11 crc kubenswrapper[5116]: I1208 17:42:11.459529 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 17:42:11 crc kubenswrapper[5116]: I1208 17:42:11.459545 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 17:42:11 crc kubenswrapper[5116]: I1208 17:42:11.459582 5116 kubelet_node_status.go:78] "Attempting to register node" node="crc" Dec 08 17:42:11 crc kubenswrapper[5116]: E1208 17:42:11.460426 5116 kubelet_node_status.go:110] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.128:6443: connect: connection refused" node="crc" Dec 08 17:42:11 crc kubenswrapper[5116]: I1208 17:42:11.549882 5116 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.128:6443: connect: connection refused Dec 08 17:42:11 crc kubenswrapper[5116]: E1208 17:42:11.644978 5116 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.128:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Dec 08 17:42:11 crc kubenswrapper[5116]: I1208 17:42:11.687059 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" 
pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"0b638b8f4bb0070e40528db779baf6a2","Type":"ContainerStarted","Data":"3a47f1a144424d0daeaf6e0458a84f8c9b856fec5c5cd9b8755c0932cd50c998"}
Dec 08 17:42:11 crc kubenswrapper[5116]: I1208 17:42:11.688193 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"3a14caf222afb62aaabdc47808b6f944","Type":"ContainerStarted","Data":"fad2ea99efd55dade15583ade8c8dc191e87b00f827e652f01f5f714c4db9287"}
Dec 08 17:42:11 crc kubenswrapper[5116]: I1208 17:42:11.689359 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"20c5c5b4bed930554494851fe3cb2b2a","Type":"ContainerStarted","Data":"d37059b8a4c6441ce123b2997bba2bdacf6f691ef1a5091fafa601ddf373a8b0"}
Dec 08 17:42:11 crc kubenswrapper[5116]: I1208 17:42:11.691374 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"9f0bc7fcb0822a2c13eb2d22cd8c0641","Type":"ContainerStarted","Data":"b4c52e7a102e2a95ad3a2bdd561599d82b7045eef7de628f4686fa4f6942c2fa"}
Dec 08 17:42:11 crc kubenswrapper[5116]: I1208 17:42:11.693418 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"4e08c320b1e9e2405e6e0107bdf7eeb4","Type":"ContainerStarted","Data":"5ed1ff8ea8d9f5f6dda4464d53e1e4cc3489e8eb16153897befbf5f79ffca4c7"}
Dec 08 17:42:11 crc kubenswrapper[5116]: E1208 17:42:11.742424 5116 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.128:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
Dec 08 17:42:11 crc kubenswrapper[5116]: E1208 17:42:11.835515 5116 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.128:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
Dec 08 17:42:11 crc kubenswrapper[5116]: E1208 17:42:11.907223 5116 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.128:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass"
Dec 08 17:42:11 crc kubenswrapper[5116]: E1208 17:42:11.974954 5116 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.128:6443: connect: connection refused" interval="1.6s"
Dec 08 17:42:12 crc kubenswrapper[5116]: I1208 17:42:12.261164 5116 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Dec 08 17:42:12 crc kubenswrapper[5116]: I1208 17:42:12.262118 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 08 17:42:12 crc kubenswrapper[5116]: I1208 17:42:12.262186 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 08 17:42:12 crc kubenswrapper[5116]: I1208 17:42:12.262200 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 08 17:42:12 crc kubenswrapper[5116]: I1208 17:42:12.262260 5116 kubelet_node_status.go:78] "Attempting to register node" node="crc"
Dec 08 17:42:12 crc kubenswrapper[5116]: E1208 17:42:12.263040 5116 kubelet_node_status.go:110] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.128:6443: connect: connection refused" node="crc"
Dec 08 17:42:12 crc kubenswrapper[5116]: I1208 17:42:12.455724 5116 certificate_manager.go:566] "Rotating certificates" logger="kubernetes.io/kube-apiserver-client-kubelet"
Dec 08 17:42:12 crc kubenswrapper[5116]: E1208 17:42:12.461070 5116 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 38.102.83.128:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError"
Dec 08 17:42:12 crc kubenswrapper[5116]: I1208 17:42:12.546899 5116 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.128:6443: connect: connection refused
Dec 08 17:42:12 crc kubenswrapper[5116]: I1208 17:42:12.699545 5116 generic.go:358] "Generic (PLEG): container finished" podID="20c5c5b4bed930554494851fe3cb2b2a" containerID="f6461331454a1612dae010fcda3b49f7c7ae256bc2b784c21063bc4f31e4bd5a" exitCode=0
Dec 08 17:42:12 crc kubenswrapper[5116]: I1208 17:42:12.699630 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"20c5c5b4bed930554494851fe3cb2b2a","Type":"ContainerDied","Data":"f6461331454a1612dae010fcda3b49f7c7ae256bc2b784c21063bc4f31e4bd5a"}
Dec 08 17:42:12 crc kubenswrapper[5116]: I1208 17:42:12.699816 5116 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Dec 08 17:42:12 crc kubenswrapper[5116]: I1208 17:42:12.701015 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 08 17:42:12 crc kubenswrapper[5116]: I1208 17:42:12.701060 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 08 17:42:12 crc kubenswrapper[5116]: I1208 17:42:12.701079 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 08 17:42:12 crc kubenswrapper[5116]: E1208 17:42:12.701426 5116 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Dec 08 17:42:12 crc kubenswrapper[5116]: I1208 17:42:12.703536 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"9f0bc7fcb0822a2c13eb2d22cd8c0641","Type":"ContainerStarted","Data":"786bf1bffbc8384fbac1d3048a0cce2f4931695695401a62ea918d04f8869ba7"}
Dec 08 17:42:12 crc kubenswrapper[5116]: I1208 17:42:12.703586 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"9f0bc7fcb0822a2c13eb2d22cd8c0641","Type":"ContainerStarted","Data":"dbd177b43687887cb390c8c11a09d2c831ab72e0cd7faa9ffbf86ab90e577e90"}
Dec 08 17:42:12 crc kubenswrapper[5116]: I1208 17:42:12.705582 5116 generic.go:358] "Generic (PLEG): container finished" podID="4e08c320b1e9e2405e6e0107bdf7eeb4" containerID="c9c539925081b7d7490d696aa00ab3e03458779194511381300358de9c8f210e" exitCode=0
Dec 08 17:42:12 crc kubenswrapper[5116]: I1208 17:42:12.705647 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"4e08c320b1e9e2405e6e0107bdf7eeb4","Type":"ContainerDied","Data":"c9c539925081b7d7490d696aa00ab3e03458779194511381300358de9c8f210e"}
Dec 08 17:42:12 crc kubenswrapper[5116]: I1208 17:42:12.705767 5116 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Dec 08 17:42:12 crc kubenswrapper[5116]: I1208 17:42:12.706547 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 08 17:42:12 crc kubenswrapper[5116]: I1208 17:42:12.706595 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 08 17:42:12 crc kubenswrapper[5116]: I1208 17:42:12.706617 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 08 17:42:12 crc kubenswrapper[5116]: E1208 17:42:12.706850 5116 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Dec 08 17:42:12 crc kubenswrapper[5116]: I1208 17:42:12.707835 5116 generic.go:358] "Generic (PLEG): container finished" podID="0b638b8f4bb0070e40528db779baf6a2" containerID="297f8c447aca2d7b37638c04ed6b8d0914e45e109423f09561245d0abb547ac4" exitCode=0
Dec 08 17:42:12 crc kubenswrapper[5116]: I1208 17:42:12.707909 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"0b638b8f4bb0070e40528db779baf6a2","Type":"ContainerDied","Data":"297f8c447aca2d7b37638c04ed6b8d0914e45e109423f09561245d0abb547ac4"}
Dec 08 17:42:12 crc kubenswrapper[5116]: I1208 17:42:12.707993 5116 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Dec 08 17:42:12 crc kubenswrapper[5116]: I1208 17:42:12.708759 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 08 17:42:12 crc kubenswrapper[5116]: I1208 17:42:12.708798 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 08 17:42:12 crc kubenswrapper[5116]: I1208 17:42:12.708812 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 08 17:42:12 crc kubenswrapper[5116]: E1208 17:42:12.709032 5116 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Dec 08 17:42:12 crc kubenswrapper[5116]: I1208 17:42:12.711284 5116 generic.go:358] "Generic (PLEG): container finished" podID="3a14caf222afb62aaabdc47808b6f944" containerID="ca6d941896a3977ebbfc764c7deeb546d556e330d3ae8a85f1920e3bc996c549" exitCode=0
Dec 08 17:42:12 crc kubenswrapper[5116]: I1208 17:42:12.711325 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"3a14caf222afb62aaabdc47808b6f944","Type":"ContainerDied","Data":"ca6d941896a3977ebbfc764c7deeb546d556e330d3ae8a85f1920e3bc996c549"}
Dec 08 17:42:12 crc kubenswrapper[5116]: I1208 17:42:12.711494 5116 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Dec 08 17:42:12 crc kubenswrapper[5116]: I1208 17:42:12.712620 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 08 17:42:12 crc kubenswrapper[5116]: I1208 17:42:12.712664 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 08 17:42:12 crc kubenswrapper[5116]: I1208 17:42:12.712674 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 08 17:42:12 crc kubenswrapper[5116]: E1208 17:42:12.712899 5116 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Dec 08 17:42:12 crc kubenswrapper[5116]: I1208 17:42:12.720938 5116 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Dec 08 17:42:12 crc kubenswrapper[5116]: I1208 17:42:12.721834 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 08 17:42:12 crc kubenswrapper[5116]: I1208 17:42:12.721872 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 08 17:42:12 crc kubenswrapper[5116]: I1208 17:42:12.721884 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 08 17:42:13 crc kubenswrapper[5116]: I1208 17:42:13.575065 5116 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.128:6443: connect: connection refused
Dec 08 17:42:13 crc kubenswrapper[5116]: E1208 17:42:13.575771 5116 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.128:6443: connect: connection refused" interval="3.2s"
Dec 08 17:42:13 crc kubenswrapper[5116]: E1208 17:42:13.650460 5116 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.128:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
Dec 08 17:42:13 crc kubenswrapper[5116]: I1208 17:42:13.716062 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"0b638b8f4bb0070e40528db779baf6a2","Type":"ContainerStarted","Data":"2dfac4db355d6dfc6b239f0977f40158fd31faa54a032318866c4464ebec05cb"}
Dec 08 17:42:13 crc kubenswrapper[5116]: I1208 17:42:13.716331 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"0b638b8f4bb0070e40528db779baf6a2","Type":"ContainerStarted","Data":"46d429d4d2d5baefe924709f1bf0f7a184902bff7b860e632355fdf6759684d9"}
Dec 08 17:42:13 crc kubenswrapper[5116]: I1208 17:42:13.716438 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"0b638b8f4bb0070e40528db779baf6a2","Type":"ContainerStarted","Data":"1405c7d0794a6b509661f080a7f63b6557ec0b95140524b2d76e965ff7af5680"}
Dec 08 17:42:13 crc kubenswrapper[5116]: I1208 17:42:13.716493 5116 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Dec 08 17:42:13 crc kubenswrapper[5116]: I1208 17:42:13.718834 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 08 17:42:13 crc kubenswrapper[5116]: I1208 17:42:13.718874 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 08 17:42:13 crc kubenswrapper[5116]: I1208 17:42:13.718893 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 08 17:42:13 crc kubenswrapper[5116]: E1208 17:42:13.719123 5116 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Dec 08 17:42:13 crc kubenswrapper[5116]: I1208 17:42:13.720463 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"3a14caf222afb62aaabdc47808b6f944","Type":"ContainerStarted","Data":"d1d7f60c242fd483e3d517ae2d01dec2a0372b1ae71c849dea9833973f319144"}
Dec 08 17:42:13 crc kubenswrapper[5116]: I1208 17:42:13.720613 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"3a14caf222afb62aaabdc47808b6f944","Type":"ContainerStarted","Data":"8a6c4caab224a27d22a9142dc7e952a80974a62ec3a608d0509e3e03b1aca062"}
Dec 08 17:42:13 crc kubenswrapper[5116]: I1208 17:42:13.720678 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"3a14caf222afb62aaabdc47808b6f944","Type":"ContainerStarted","Data":"2d8111a46c7720fdc55187a62e21d56405c23453eadcb58ce2e60afa9d805d99"}
Dec 08 17:42:13 crc kubenswrapper[5116]: I1208 17:42:13.723218 5116 generic.go:358] "Generic (PLEG): container finished" podID="20c5c5b4bed930554494851fe3cb2b2a" containerID="490524dfacd789a249b0f07c13ad745790870af9c8d7579ce12ad7b9678c8d33" exitCode=0
Dec 08 17:42:13 crc kubenswrapper[5116]: I1208 17:42:13.723393 5116 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Dec 08 17:42:13 crc kubenswrapper[5116]: I1208 17:42:13.723266 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"20c5c5b4bed930554494851fe3cb2b2a","Type":"ContainerDied","Data":"490524dfacd789a249b0f07c13ad745790870af9c8d7579ce12ad7b9678c8d33"}
Dec 08 17:42:13 crc kubenswrapper[5116]: I1208 17:42:13.724625 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 08 17:42:13 crc kubenswrapper[5116]: I1208 17:42:13.724656 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 08 17:42:13 crc kubenswrapper[5116]: I1208 17:42:13.724669 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 08 17:42:13 crc kubenswrapper[5116]: E1208 17:42:13.724895 5116 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Dec 08 17:42:13 crc kubenswrapper[5116]: I1208 17:42:13.729861 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"9f0bc7fcb0822a2c13eb2d22cd8c0641","Type":"ContainerStarted","Data":"4746da12a986369cfdb899ff25e77cd7f19e6cab7cd0c286ae0a16f44e498439"}
Dec 08 17:42:13 crc kubenswrapper[5116]: I1208 17:42:13.729915 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"9f0bc7fcb0822a2c13eb2d22cd8c0641","Type":"ContainerStarted","Data":"9db05f79480a5c8307623409d012e3ac81c52e8b0e7fc208104cf8698592ae4b"}
Dec 08 17:42:13 crc kubenswrapper[5116]: I1208 17:42:13.730263 5116 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Dec 08 17:42:13 crc kubenswrapper[5116]: I1208 17:42:13.731946 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 08 17:42:13 crc kubenswrapper[5116]: I1208 17:42:13.731985 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 08 17:42:13 crc kubenswrapper[5116]: I1208 17:42:13.732001 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 08 17:42:13 crc kubenswrapper[5116]: E1208 17:42:13.732628 5116 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Dec 08 17:42:13 crc kubenswrapper[5116]: I1208 17:42:13.734560 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"4e08c320b1e9e2405e6e0107bdf7eeb4","Type":"ContainerStarted","Data":"3e46e220f9815e7df4df57b514f2fb4af572450909f0660c53ba1e6ce4fe6184"}
Dec 08 17:42:13 crc kubenswrapper[5116]: I1208 17:42:13.734955 5116 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Dec 08 17:42:13 crc kubenswrapper[5116]: I1208 17:42:13.736729 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 08 17:42:13 crc kubenswrapper[5116]: I1208 17:42:13.736773 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 08 17:42:13 crc kubenswrapper[5116]: I1208 17:42:13.736787 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 08 17:42:13 crc kubenswrapper[5116]: E1208 17:42:13.737071 5116 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Dec 08 17:42:13 crc kubenswrapper[5116]: I1208 17:42:13.864031 5116 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Dec 08 17:42:13 crc kubenswrapper[5116]: I1208 17:42:13.865340 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 08 17:42:13 crc kubenswrapper[5116]: I1208 17:42:13.865378 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 08 17:42:13 crc kubenswrapper[5116]: I1208 17:42:13.865391 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 08 17:42:13 crc kubenswrapper[5116]: I1208 17:42:13.865413 5116 kubelet_node_status.go:78] "Attempting to register node" node="crc"
Dec 08 17:42:13 crc kubenswrapper[5116]: E1208 17:42:13.865937 5116 kubelet_node_status.go:110] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.128:6443: connect: connection refused" node="crc"
Dec 08 17:42:13 crc kubenswrapper[5116]: E1208 17:42:13.911287 5116 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.128:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
Dec 08 17:42:13 crc kubenswrapper[5116]: I1208 17:42:13.956897 5116 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Dec 08 17:42:14 crc kubenswrapper[5116]: I1208 17:42:14.741183 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"3a14caf222afb62aaabdc47808b6f944","Type":"ContainerStarted","Data":"4cf876a8cfb386d0fc2c68fdb4b8b13c44b57adf3d1d2e50590d70212343a333"}
Dec 08 17:42:14 crc kubenswrapper[5116]: I1208 17:42:14.741260 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"3a14caf222afb62aaabdc47808b6f944","Type":"ContainerStarted","Data":"3a4a41617be531895a63acb8a6cfb03f4bb19970cfb4215f42d7564aec87084a"}
Dec 08 17:42:14 crc kubenswrapper[5116]: I1208 17:42:14.741454 5116 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Dec 08 17:42:14 crc kubenswrapper[5116]: I1208 17:42:14.742454 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 08 17:42:14 crc kubenswrapper[5116]: I1208 17:42:14.742489 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 08 17:42:14 crc kubenswrapper[5116]: I1208 17:42:14.742500 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 08 17:42:14 crc kubenswrapper[5116]: E1208 17:42:14.742789 5116 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Dec 08 17:42:14 crc kubenswrapper[5116]: I1208 17:42:14.743456 5116 generic.go:358] "Generic (PLEG): container finished" podID="20c5c5b4bed930554494851fe3cb2b2a" containerID="46910f8133758763cbecc09f8b15ef4116e4c7931efbcddf33175d59e1d98007" exitCode=0
Dec 08 17:42:14 crc kubenswrapper[5116]: I1208 17:42:14.743500 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"20c5c5b4bed930554494851fe3cb2b2a","Type":"ContainerDied","Data":"46910f8133758763cbecc09f8b15ef4116e4c7931efbcddf33175d59e1d98007"}
Dec 08 17:42:14 crc kubenswrapper[5116]: I1208 17:42:14.743592 5116 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Dec 08 17:42:14 crc kubenswrapper[5116]: I1208 17:42:14.743603 5116 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Dec 08 17:42:14 crc kubenswrapper[5116]: I1208 17:42:14.743592 5116 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Dec 08 17:42:14 crc kubenswrapper[5116]: I1208 17:42:14.743749 5116 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Dec 08 17:42:14 crc kubenswrapper[5116]: I1208 17:42:14.744148 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 08 17:42:14 crc kubenswrapper[5116]: I1208 17:42:14.744179 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 08 17:42:14 crc kubenswrapper[5116]: I1208 17:42:14.744188 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 08 17:42:14 crc kubenswrapper[5116]: I1208 17:42:14.744228 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 08 17:42:14 crc kubenswrapper[5116]: I1208 17:42:14.744289 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 08 17:42:14 crc kubenswrapper[5116]: I1208 17:42:14.744306 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 08 17:42:14 crc kubenswrapper[5116]: I1208 17:42:14.744235 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 08 17:42:14 crc kubenswrapper[5116]: I1208 17:42:14.744384 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 08 17:42:14 crc kubenswrapper[5116]: I1208 17:42:14.744397 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 08 17:42:14 crc kubenswrapper[5116]: I1208 17:42:14.744496 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 08 17:42:14 crc kubenswrapper[5116]: I1208 17:42:14.744509 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 08 17:42:14 crc kubenswrapper[5116]: I1208 17:42:14.744519 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 08 17:42:14 crc kubenswrapper[5116]: E1208 17:42:14.744647 5116 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Dec 08 17:42:14 crc kubenswrapper[5116]: E1208 17:42:14.745135 5116 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Dec 08 17:42:14 crc kubenswrapper[5116]: E1208 17:42:14.745177 5116 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Dec 08 17:42:14 crc kubenswrapper[5116]: E1208 17:42:14.745419 5116 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Dec 08 17:42:14 crc kubenswrapper[5116]: I1208 17:42:14.758292 5116 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-kube-apiserver/kube-apiserver-crc"
Dec 08 17:42:15 crc kubenswrapper[5116]: I1208 17:42:15.103609 5116 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc"
Dec 08 17:42:15 crc kubenswrapper[5116]: I1208 17:42:15.439039 5116 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Dec 08 17:42:15 crc kubenswrapper[5116]: I1208 17:42:15.750866 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"20c5c5b4bed930554494851fe3cb2b2a","Type":"ContainerStarted","Data":"519be5a3f70ddad33a701b7283712b064c2eeda9e71e6e98a33cd934edbdbefe"}
Dec 08 17:42:15 crc kubenswrapper[5116]: I1208 17:42:15.750939 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"20c5c5b4bed930554494851fe3cb2b2a","Type":"ContainerStarted","Data":"77855c3d2ec7900a079f960c0fb121f1cf87da7a90d1b3e563cf15d5db2ad29f"}
Dec 08 17:42:15 crc kubenswrapper[5116]: I1208 17:42:15.750960 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"20c5c5b4bed930554494851fe3cb2b2a","Type":"ContainerStarted","Data":"54d79ff7eec6f4ead3294642b73b205817ba6fecb95ebbfe18dc837032da4190"}
Dec 08 17:42:15 crc kubenswrapper[5116]: I1208 17:42:15.750973 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"20c5c5b4bed930554494851fe3cb2b2a","Type":"ContainerStarted","Data":"12553f525990c8f10d5357f2d06f3e8a9a83d324a2629d8772ec479cfe410c4f"}
Dec 08 17:42:15 crc kubenswrapper[5116]: I1208 17:42:15.750985 5116 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Dec 08 17:42:15 crc kubenswrapper[5116]: I1208 17:42:15.750985 5116 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Dec 08 17:42:15 crc kubenswrapper[5116]: I1208 17:42:15.751154 5116 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Dec 08 17:42:15 crc kubenswrapper[5116]: I1208 17:42:15.751215 5116 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Dec 08 17:42:15 crc kubenswrapper[5116]: I1208 17:42:15.751968 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 08 17:42:15 crc kubenswrapper[5116]: I1208 17:42:15.752019 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 08 17:42:15 crc kubenswrapper[5116]: I1208 17:42:15.752035 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 08 17:42:15 crc kubenswrapper[5116]: I1208 17:42:15.752054 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 08 17:42:15 crc kubenswrapper[5116]: I1208 17:42:15.752097 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 08 17:42:15 crc kubenswrapper[5116]: I1208 17:42:15.752114 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 08 17:42:15 crc kubenswrapper[5116]: E1208 17:42:15.752462 5116 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Dec 08 17:42:15 crc kubenswrapper[5116]: I1208 17:42:15.752483 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 08 17:42:15 crc kubenswrapper[5116]: I1208 17:42:15.752519 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 08 17:42:15 crc kubenswrapper[5116]: I1208 17:42:15.752535 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 08 17:42:15 crc kubenswrapper[5116]: E1208 17:42:15.752778 5116 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Dec 08 17:42:15 crc kubenswrapper[5116]: E1208 17:42:15.752902 5116 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Dec 08 17:42:16 crc kubenswrapper[5116]: I1208 17:42:16.759357 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"20c5c5b4bed930554494851fe3cb2b2a","Type":"ContainerStarted","Data":"58ae272ceccab459d261709944f9cce6bc15753c8afbe5a7d76acf41c0dc07ba"}
Dec 08 17:42:16 crc kubenswrapper[5116]: I1208 17:42:16.759457 5116 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Dec 08 17:42:16 crc kubenswrapper[5116]: I1208 17:42:16.759468 5116 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Dec 08 17:42:16 crc kubenswrapper[5116]: I1208 17:42:16.759497 5116 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Dec 08 17:42:16 crc kubenswrapper[5116]: I1208 17:42:16.760146 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 08 17:42:16 crc kubenswrapper[5116]: I1208 17:42:16.760169 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 08 17:42:16 crc kubenswrapper[5116]: I1208 17:42:16.760177 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 08 17:42:16 crc kubenswrapper[5116]: I1208 17:42:16.760192 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 08 17:42:16 crc kubenswrapper[5116]: I1208 17:42:16.760222 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 08 17:42:16 crc kubenswrapper[5116]: I1208 17:42:16.760235 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 08 17:42:16 crc kubenswrapper[5116]: E1208 17:42:16.760434 5116 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Dec 08 17:42:16 crc kubenswrapper[5116]: I1208 17:42:16.760617 5116 certificate_manager.go:566] "Rotating certificates" logger="kubernetes.io/kube-apiserver-client-kubelet"
Dec 08 17:42:16 crc kubenswrapper[5116]: E1208 17:42:16.760674 5116 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Dec 08 17:42:17 crc kubenswrapper[5116]: I1208 17:42:17.066371 5116 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Dec 08 17:42:17 crc kubenswrapper[5116]: I1208 17:42:17.067334 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 08 17:42:17 crc kubenswrapper[5116]: I1208 17:42:17.067382 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 08 17:42:17 crc kubenswrapper[5116]: I1208 17:42:17.067395 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 08 17:42:17 crc kubenswrapper[5116]: I1208 17:42:17.067422 5116 kubelet_node_status.go:78] "Attempting to register node" node="crc"
Dec 08 17:42:17 crc kubenswrapper[5116]: I1208 17:42:17.171183 5116 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-kube-apiserver/kube-apiserver-crc"
Dec 08 17:42:17 crc kubenswrapper[5116]: I1208 17:42:17.762423 5116 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Dec 08 17:42:17 crc kubenswrapper[5116]: I1208 17:42:17.762558 5116 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Dec 08 17:42:17 crc kubenswrapper[5116]: I1208 17:42:17.763393 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 08 17:42:17 crc kubenswrapper[5116]: I1208 17:42:17.763428 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 08 17:42:17 crc kubenswrapper[5116]: I1208 17:42:17.763441 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 08 17:42:17 crc kubenswrapper[5116]: I1208 17:42:17.763503 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 08 17:42:17 crc kubenswrapper[5116]: I1208 17:42:17.763533 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 08 17:42:17 crc kubenswrapper[5116]: I1208 17:42:17.763548 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 08 17:42:17 crc kubenswrapper[5116]: E1208 17:42:17.763873 5116 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Dec 08 17:42:17 crc kubenswrapper[5116]: E1208 17:42:17.764409 5116 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Dec 08 17:42:18 crc kubenswrapper[5116]: I1208 17:42:18.260844 5116 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Dec 08 17:42:18 crc kubenswrapper[5116]: I1208 17:42:18.261123 5116 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Dec 08 17:42:18 crc kubenswrapper[5116]: I1208 17:42:18.261900 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 08 17:42:18 crc kubenswrapper[5116]: I1208 17:42:18.261933 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 08 17:42:18 crc kubenswrapper[5116]: I1208 17:42:18.261947 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 08 17:42:18 crc kubenswrapper[5116]: E1208 17:42:18.262280 5116 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Dec 08 17:42:18 crc kubenswrapper[5116]: I1208 17:42:18.271071 5116 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Dec 08 17:42:18 crc kubenswrapper[5116]: I1208 17:42:18.439359 5116 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://localhost:10357/healthz\": context deadline exceeded" start-of-body=
Dec 08 17:42:18 crc kubenswrapper[5116]: I1208 17:42:18.439498 5116 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="9f0bc7fcb0822a2c13eb2d22cd8c0641" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://localhost:10357/healthz\": context deadline exceeded"
Dec 08 17:42:18 crc kubenswrapper[5116]: I1208 17:42:18.762914 5116 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Dec 08 17:42:18 crc kubenswrapper[5116]: I1208 17:42:18.765272 5116 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Dec 08 17:42:18 crc kubenswrapper[5116]: I1208 17:42:18.766103 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 08 17:42:18 crc kubenswrapper[5116]: I1208 17:42:18.766151 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 08 17:42:18 crc kubenswrapper[5116]: I1208 17:42:18.766165 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 08 17:42:18 crc kubenswrapper[5116]: E1208 17:42:18.766556 5116 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Dec 08 17:42:19 crc kubenswrapper[5116]: I1208 17:42:19.768072 5116 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Dec 08 17:42:19 crc kubenswrapper[5116]: I1208 17:42:19.768848 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 08 17:42:19 crc kubenswrapper[5116]: I1208 17:42:19.768887 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 08 17:42:19 crc kubenswrapper[5116]: I1208 17:42:19.768899 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 08 17:42:19 crc kubenswrapper[5116]: E1208 17:42:19.769337 5116 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Dec 08 17:42:20 crc kubenswrapper[5116]: I1208 17:42:20.483527 5116 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Dec 08 17:42:20 crc kubenswrapper[5116]: I1208 17:42:20.525036 5116 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-etcd/etcd-crc"
Dec 08 17:42:20 crc kubenswrapper[5116]: I1208 17:42:20.525469 5116 kubelet_node_status.go:413] "Setting node annotation to enable
volume controller attach/detach" Dec 08 17:42:20 crc kubenswrapper[5116]: I1208 17:42:20.526956 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 17:42:20 crc kubenswrapper[5116]: I1208 17:42:20.527057 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 17:42:20 crc kubenswrapper[5116]: I1208 17:42:20.527088 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 17:42:20 crc kubenswrapper[5116]: E1208 17:42:20.527966 5116 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Dec 08 17:42:20 crc kubenswrapper[5116]: E1208 17:42:20.753570 5116 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Dec 08 17:42:20 crc kubenswrapper[5116]: I1208 17:42:20.770824 5116 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 08 17:42:20 crc kubenswrapper[5116]: I1208 17:42:20.771700 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 17:42:20 crc kubenswrapper[5116]: I1208 17:42:20.771757 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 17:42:20 crc kubenswrapper[5116]: I1208 17:42:20.771775 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 17:42:20 crc kubenswrapper[5116]: E1208 17:42:20.772089 5116 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Dec 08 17:42:23 crc kubenswrapper[5116]: I1208 17:42:23.069856 5116 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" 
pod="openshift-etcd/etcd-crc" Dec 08 17:42:23 crc kubenswrapper[5116]: I1208 17:42:23.070330 5116 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 08 17:42:23 crc kubenswrapper[5116]: I1208 17:42:23.073435 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 17:42:23 crc kubenswrapper[5116]: I1208 17:42:23.073516 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 17:42:23 crc kubenswrapper[5116]: I1208 17:42:23.073545 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 17:42:23 crc kubenswrapper[5116]: E1208 17:42:23.074411 5116 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Dec 08 17:42:24 crc kubenswrapper[5116]: I1208 17:42:24.547186 5116 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": net/http: TLS handshake timeout Dec 08 17:42:24 crc kubenswrapper[5116]: I1208 17:42:24.612694 5116 trace.go:236] Trace[194948516]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (08-Dec-2025 17:42:14.610) (total time: 10002ms): Dec 08 17:42:24 crc kubenswrapper[5116]: Trace[194948516]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": net/http: TLS handshake timeout 10001ms (17:42:24.612) Dec 08 17:42:24 crc kubenswrapper[5116]: Trace[194948516]: [10.002011422s] [10.002011422s] END Dec 08 17:42:24 crc kubenswrapper[5116]: E1208 17:42:24.613201 5116 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get 
\"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Dec 08 17:42:24 crc kubenswrapper[5116]: I1208 17:42:24.677754 5116 trace.go:236] Trace[130245631]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (08-Dec-2025 17:42:14.676) (total time: 10001ms): Dec 08 17:42:24 crc kubenswrapper[5116]: Trace[130245631]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": net/http: TLS handshake timeout 10001ms (17:42:24.677) Dec 08 17:42:24 crc kubenswrapper[5116]: Trace[130245631]: [10.001654162s] [10.001654162s] END Dec 08 17:42:24 crc kubenswrapper[5116]: E1208 17:42:24.677838 5116 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Dec 08 17:42:24 crc kubenswrapper[5116]: I1208 17:42:24.768147 5116 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Liveness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" start-of-body= Dec 08 17:42:24 crc kubenswrapper[5116]: I1208 17:42:24.768288 5116 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" Dec 08 17:42:25 crc kubenswrapper[5116]: I1208 
17:42:25.104696 5116 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="Get \"https://192.168.126.11:6443/livez\": context deadline exceeded" start-of-body= Dec 08 17:42:25 crc kubenswrapper[5116]: I1208 17:42:25.104783 5116 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver" probeResult="failure" output="Get \"https://192.168.126.11:6443/livez\": context deadline exceeded" Dec 08 17:42:25 crc kubenswrapper[5116]: I1208 17:42:25.179181 5116 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403} Dec 08 17:42:25 crc kubenswrapper[5116]: I1208 17:42:25.179329 5116 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403" Dec 08 17:42:26 crc kubenswrapper[5116]: E1208 17:42:26.777496 5116 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": context deadline exceeded" interval="6.4s" Dec 08 17:42:28 crc kubenswrapper[5116]: E1208 17:42:28.011239 5116 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:anonymous\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" 
reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Dec 08 17:42:28 crc kubenswrapper[5116]: I1208 17:42:28.439917 5116 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://localhost:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Dec 08 17:42:28 crc kubenswrapper[5116]: I1208 17:42:28.440028 5116 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="9f0bc7fcb0822a2c13eb2d22cd8c0641" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://localhost:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Dec 08 17:42:29 crc kubenswrapper[5116]: E1208 17:42:29.261457 5116 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User \"system:anonymous\" cannot list resource \"runtimeclasses\" in API group \"node.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Dec 08 17:42:30 crc kubenswrapper[5116]: I1208 17:42:30.143592 5116 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 08 17:42:30 crc kubenswrapper[5116]: I1208 17:42:30.143958 5116 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 08 17:42:30 crc kubenswrapper[5116]: I1208 17:42:30.145505 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 17:42:30 crc kubenswrapper[5116]: I1208 17:42:30.145560 5116 kubelet_node_status.go:736] "Recording event message for node" 
node="crc" event="NodeHasNoDiskPressure" Dec 08 17:42:30 crc kubenswrapper[5116]: I1208 17:42:30.145576 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 17:42:30 crc kubenswrapper[5116]: E1208 17:42:30.146134 5116 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Dec 08 17:42:30 crc kubenswrapper[5116]: I1208 17:42:30.149871 5116 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 08 17:42:30 crc kubenswrapper[5116]: E1208 17:42:30.178437 5116 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.187f4e5abc0de53b default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 17:42:10.561574203 +0000 UTC m=+0.358697437,LastTimestamp:2025-12-08 17:42:10.561574203 +0000 UTC m=+0.358697437,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 17:42:30 crc kubenswrapper[5116]: E1208 17:42:30.178852 5116 kubelet_node_status.go:116] "Unable to register node with API server, error getting existing node" err="nodes \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"nodes\" in API group \"\" at the cluster scope" node="crc" Dec 08 17:42:30 crc kubenswrapper[5116]: I1208 17:42:30.179451 5116 trace.go:236] Trace[1530640036]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (08-Dec-2025 17:42:18.064) (total time: 12115ms): Dec 08 
17:42:30 crc kubenswrapper[5116]: Trace[1530640036]: ---"Objects listed" error:nodes "crc" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope 12115ms (17:42:30.179) Dec 08 17:42:30 crc kubenswrapper[5116]: Trace[1530640036]: [12.115188734s] [12.115188734s] END Dec 08 17:42:30 crc kubenswrapper[5116]: E1208 17:42:30.179553 5116 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: nodes \"crc\" is forbidden: User \"system:anonymous\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Dec 08 17:42:30 crc kubenswrapper[5116]: I1208 17:42:30.179465 5116 trace.go:236] Trace[955940997]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (08-Dec-2025 17:42:19.548) (total time: 10630ms): Dec 08 17:42:30 crc kubenswrapper[5116]: Trace[955940997]: ---"Objects listed" error:csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope 10630ms (17:42:30.179) Dec 08 17:42:30 crc kubenswrapper[5116]: Trace[955940997]: [10.630575281s] [10.630575281s] END Dec 08 17:42:30 crc kubenswrapper[5116]: E1208 17:42:30.179598 5116 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:anonymous\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Dec 08 17:42:30 crc kubenswrapper[5116]: E1208 17:42:30.182012 5116 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.187f4e5ac306590e default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 17:42:10.678520078 +0000 UTC m=+0.475643312,LastTimestamp:2025-12-08 17:42:10.678520078 +0000 UTC m=+0.475643312,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 17:42:30 crc kubenswrapper[5116]: E1208 17:42:30.183105 5116 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.187f4e5ac306c79b default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 17:42:10.678548379 +0000 UTC m=+0.475671613,LastTimestamp:2025-12-08 17:42:10.678548379 +0000 UTC m=+0.475671613,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 17:42:30 crc kubenswrapper[5116]: E1208 17:42:30.188298 5116 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.187f4e5ac306f0ef default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node crc status is now: 
NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 17:42:10.678558959 +0000 UTC m=+0.475682193,LastTimestamp:2025-12-08 17:42:10.678558959 +0000 UTC m=+0.475682193,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 17:42:30 crc kubenswrapper[5116]: E1208 17:42:30.194411 5116 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.187f4e5ac762ddc2 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeAllocatableEnforced,Message:Updated Node Allocatable limit across pods,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 17:42:10.751692226 +0000 UTC m=+0.548815450,LastTimestamp:2025-12-08 17:42:10.751692226 +0000 UTC m=+0.548815450,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 17:42:30 crc kubenswrapper[5116]: E1208 17:42:30.202291 5116 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.187f4e5ac306590e\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.187f4e5ac306590e default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 17:42:10.678520078 +0000 UTC m=+0.475643312,LastTimestamp:2025-12-08 17:42:10.780879037 +0000 
UTC m=+0.578002271,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 17:42:30 crc kubenswrapper[5116]: E1208 17:42:30.207839 5116 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.187f4e5ac306c79b\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.187f4e5ac306c79b default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 17:42:10.678548379 +0000 UTC m=+0.475671613,LastTimestamp:2025-12-08 17:42:10.780960479 +0000 UTC m=+0.578083713,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 17:42:30 crc kubenswrapper[5116]: E1208 17:42:30.212737 5116 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.187f4e5ac306f0ef\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.187f4e5ac306f0ef default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node crc status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 17:42:10.678558959 +0000 UTC m=+0.475682193,LastTimestamp:2025-12-08 17:42:10.781021751 +0000 UTC m=+0.578144985,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 
17:42:30 crc kubenswrapper[5116]: E1208 17:42:30.214416 5116 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.187f4e5ac306590e\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.187f4e5ac306590e default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 17:42:10.678520078 +0000 UTC m=+0.475643312,LastTimestamp:2025-12-08 17:42:10.782531154 +0000 UTC m=+0.579654388,Count:3,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 17:42:30 crc kubenswrapper[5116]: E1208 17:42:30.218593 5116 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.187f4e5ac306c79b\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.187f4e5ac306c79b default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 17:42:10.678548379 +0000 UTC m=+0.475671613,LastTimestamp:2025-12-08 17:42:10.782546304 +0000 UTC m=+0.579669538,Count:3,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 17:42:30 crc kubenswrapper[5116]: E1208 17:42:30.223483 5116 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.187f4e5ac306f0ef\" is 
forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.187f4e5ac306f0ef default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node crc status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 17:42:10.678558959 +0000 UTC m=+0.475682193,LastTimestamp:2025-12-08 17:42:10.782554515 +0000 UTC m=+0.579677749,Count:3,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 17:42:30 crc kubenswrapper[5116]: E1208 17:42:30.228670 5116 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.187f4e5ac306590e\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.187f4e5ac306590e default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 17:42:10.678520078 +0000 UTC m=+0.475643312,LastTimestamp:2025-12-08 17:42:10.782633777 +0000 UTC m=+0.579757011,Count:4,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 17:42:30 crc kubenswrapper[5116]: I1208 17:42:30.231716 5116 reflector.go:430] "Caches populated" logger="kubernetes.io/kube-apiserver-client-kubelet" type="*v1.CertificateSigningRequest" reflector="k8s.io/client-go/tools/watch/informerwatcher.go:162" Dec 08 17:42:30 crc kubenswrapper[5116]: E1208 17:42:30.233518 5116 
event.go:359] "Server rejected event (will not retry!)" err="events \"crc.187f4e5ac306c79b\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.187f4e5ac306c79b default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 17:42:10.678548379 +0000 UTC m=+0.475671613,LastTimestamp:2025-12-08 17:42:10.782648107 +0000 UTC m=+0.579771331,Count:4,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 17:42:30 crc kubenswrapper[5116]: E1208 17:42:30.241066 5116 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.187f4e5ac306f0ef\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.187f4e5ac306f0ef default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node crc status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 17:42:10.678558959 +0000 UTC m=+0.475682193,LastTimestamp:2025-12-08 17:42:10.782658918 +0000 UTC m=+0.579782152,Count:4,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 17:42:30 crc kubenswrapper[5116]: E1208 17:42:30.246296 5116 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.187f4e5ac306590e\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in 
API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.187f4e5ac306590e default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 17:42:10.678520078 +0000 UTC m=+0.475643312,LastTimestamp:2025-12-08 17:42:10.783767468 +0000 UTC m=+0.580890692,Count:5,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 17:42:30 crc kubenswrapper[5116]: E1208 17:42:30.250639 5116 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.187f4e5ac306c79b\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.187f4e5ac306c79b default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 17:42:10.678548379 +0000 UTC m=+0.475671613,LastTimestamp:2025-12-08 17:42:10.783805839 +0000 UTC m=+0.580929073,Count:5,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 17:42:30 crc kubenswrapper[5116]: E1208 17:42:30.256389 5116 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.187f4e5ac306f0ef\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.187f4e5ac306f0ef default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node crc status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 17:42:10.678558959 +0000 UTC m=+0.475682193,LastTimestamp:2025-12-08 17:42:10.783815199 +0000 UTC m=+0.580938433,Count:5,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 17:42:30 crc kubenswrapper[5116]: E1208 17:42:30.261583 5116 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.187f4e5ac306590e\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.187f4e5ac306590e default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 17:42:10.678520078 +0000 UTC m=+0.475643312,LastTimestamp:2025-12-08 17:42:10.784529909 +0000 UTC m=+0.581653143,Count:6,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 17:42:30 crc kubenswrapper[5116]: E1208 17:42:30.266756 5116 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.187f4e5ac306c79b\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.187f4e5ac306c79b default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc 
status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 17:42:10.678548379 +0000 UTC m=+0.475671613,LastTimestamp:2025-12-08 17:42:10.784617282 +0000 UTC m=+0.581740516,Count:6,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 17:42:30 crc kubenswrapper[5116]: E1208 17:42:30.273375 5116 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.187f4e5ac306f0ef\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.187f4e5ac306f0ef default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node crc status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 17:42:10.678558959 +0000 UTC m=+0.475682193,LastTimestamp:2025-12-08 17:42:10.784687024 +0000 UTC m=+0.581810258,Count:6,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 17:42:30 crc kubenswrapper[5116]: E1208 17:42:30.277689 5116 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.187f4e5ac306590e\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.187f4e5ac306590e default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 17:42:10.678520078 +0000 UTC 
m=+0.475643312,LastTimestamp:2025-12-08 17:42:10.785313321 +0000 UTC m=+0.582436555,Count:7,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 17:42:30 crc kubenswrapper[5116]: E1208 17:42:30.281907 5116 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.187f4e5ac306c79b\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.187f4e5ac306c79b default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 17:42:10.678548379 +0000 UTC m=+0.475671613,LastTimestamp:2025-12-08 17:42:10.785343382 +0000 UTC m=+0.582466616,Count:7,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 17:42:30 crc kubenswrapper[5116]: E1208 17:42:30.287798 5116 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.187f4e5ac306f0ef\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.187f4e5ac306f0ef default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node crc status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 17:42:10.678558959 +0000 UTC m=+0.475682193,LastTimestamp:2025-12-08 17:42:10.785353313 +0000 UTC m=+0.582476547,Count:7,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 17:42:30 crc kubenswrapper[5116]: E1208 17:42:30.293820 5116 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.187f4e5ac306590e\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.187f4e5ac306590e default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 17:42:10.678520078 +0000 UTC m=+0.475643312,LastTimestamp:2025-12-08 17:42:10.785413064 +0000 UTC m=+0.582536298,Count:8,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 17:42:30 crc kubenswrapper[5116]: E1208 17:42:30.299228 5116 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.187f4e5ac306c79b\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.187f4e5ac306c79b default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 17:42:10.678548379 +0000 UTC m=+0.475671613,LastTimestamp:2025-12-08 17:42:10.785433095 +0000 UTC m=+0.582556329,Count:8,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 17:42:30 crc kubenswrapper[5116]: E1208 17:42:30.304674 5116 
event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.187f4e5adf70674e openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:9f0bc7fcb0822a2c13eb2d22cd8c0641,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 17:42:11.15523259 +0000 UTC m=+0.952355844,LastTimestamp:2025-12-08 17:42:11.15523259 +0000 UTC m=+0.952355844,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 17:42:30 crc kubenswrapper[5116]: E1208 17:42:30.309109 5116 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-crc.187f4e5adf93ec54 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-crc,UID:4e08c320b1e9e2405e6e0107bdf7eeb4,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\" already present on 
machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 17:42:11.157560404 +0000 UTC m=+0.954683628,LastTimestamp:2025-12-08 17:42:11.157560404 +0000 UTC m=+0.954683628,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 17:42:30 crc kubenswrapper[5116]: E1208 17:42:30.313461 5116 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.187f4e5ae1533618 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 17:42:11.18687388 +0000 UTC m=+0.983997114,LastTimestamp:2025-12-08 17:42:11.18687388 +0000 UTC m=+0.983997114,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 17:42:30 crc kubenswrapper[5116]: E1208 17:42:30.321030 5116 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.187f4e5ae25bfc4c openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:0b638b8f4bb0070e40528db779baf6a2,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{wait-for-host-port},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 17:42:11.204226124 +0000 UTC m=+1.001349358,LastTimestamp:2025-12-08 17:42:11.204226124 +0000 UTC m=+1.001349358,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 17:42:30 crc kubenswrapper[5116]: E1208 17:42:30.325069 5116 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.187f4e5ae2ed084f openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 17:42:11.213731919 +0000 UTC m=+1.010855173,LastTimestamp:2025-12-08 17:42:11.213731919 +0000 UTC m=+1.010855173,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 17:42:30 crc kubenswrapper[5116]: E1208 17:42:30.329395 5116 event.go:359] "Server rejected event 
(will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.187f4e5b110d8b85 openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:0b638b8f4bb0070e40528db779baf6a2,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{wait-for-host-port},},Reason:Created,Message:Created container: wait-for-host-port,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 17:42:11.987614597 +0000 UTC m=+1.784737831,LastTimestamp:2025-12-08 17:42:11.987614597 +0000 UTC m=+1.784737831,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 17:42:30 crc kubenswrapper[5116]: E1208 17:42:30.334089 5116 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.187f4e5b11257508 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Created,Message:Created container: setup,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 17:42:11.989181704 +0000 UTC m=+1.786304938,LastTimestamp:2025-12-08 17:42:11.989181704 +0000 UTC m=+1.786304938,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 17:42:30 crc kubenswrapper[5116]: E1208 
17:42:30.338537 5116 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-crc.187f4e5b112d0f77 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-crc,UID:4e08c320b1e9e2405e6e0107bdf7eeb4,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Created,Message:Created container: setup,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 17:42:11.989679991 +0000 UTC m=+1.786803225,LastTimestamp:2025-12-08 17:42:11.989679991 +0000 UTC m=+1.786803225,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 17:42:30 crc kubenswrapper[5116]: E1208 17:42:30.342259 5116 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.187f4e5b11339237 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:9f0bc7fcb0822a2c13eb2d22cd8c0641,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager},},Reason:Created,Message:Created container: kube-controller-manager,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 17:42:11.990106679 +0000 UTC m=+1.787229903,LastTimestamp:2025-12-08 17:42:11.990106679 +0000 UTC m=+1.787229903,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 17:42:30 crc kubenswrapper[5116]: E1208 17:42:30.346287 5116 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.187f4e5b11bd08db openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:0b638b8f4bb0070e40528db779baf6a2,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{wait-for-host-port},},Reason:Started,Message:Started container wait-for-host-port,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 17:42:11.999115483 +0000 UTC m=+1.796238717,LastTimestamp:2025-12-08 17:42:11.999115483 +0000 UTC m=+1.796238717,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 17:42:30 crc kubenswrapper[5116]: E1208 17:42:30.350684 5116 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-crc.187f4e5b11f0434e openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-crc,UID:4e08c320b1e9e2405e6e0107bdf7eeb4,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Started,Message:Started container setup,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 17:42:12.002472782 +0000 UTC m=+1.799596016,LastTimestamp:2025-12-08 
17:42:12.002472782 +0000 UTC m=+1.799596016,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 17:42:30 crc kubenswrapper[5116]: E1208 17:42:30.355509 5116 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.187f4e5b123c22ad openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Started,Message:Started container setup,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 17:42:12.007445165 +0000 UTC m=+1.804568399,LastTimestamp:2025-12-08 17:42:12.007445165 +0000 UTC m=+1.804568399,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 17:42:30 crc kubenswrapper[5116]: E1208 17:42:30.359105 5116 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.187f4e5b1245e9e4 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:9f0bc7fcb0822a2c13eb2d22cd8c0641,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager},},Reason:Started,Message:Started container 
kube-controller-manager,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 17:42:12.008085988 +0000 UTC m=+1.805209222,LastTimestamp:2025-12-08 17:42:12.008085988 +0000 UTC m=+1.805209222,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 17:42:30 crc kubenswrapper[5116]: E1208 17:42:30.363579 5116 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.187f4e5b12589271 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:9f0bc7fcb0822a2c13eb2d22cd8c0641,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 17:42:12.009308785 +0000 UTC m=+1.806432019,LastTimestamp:2025-12-08 17:42:12.009308785 +0000 UTC m=+1.806432019,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 17:42:30 crc kubenswrapper[5116]: E1208 17:42:30.368939 5116 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.187f4e5b1259b09b openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Created,Message:Created container: setup,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 17:42:12.009382043 +0000 UTC m=+1.806505277,LastTimestamp:2025-12-08 17:42:12.009382043 +0000 UTC m=+1.806505277,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 17:42:30 crc kubenswrapper[5116]: E1208 17:42:30.373777 5116 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.187f4e5b1342ff55 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Started,Message:Started container setup,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 17:42:12.024672085 +0000 UTC m=+1.821795319,LastTimestamp:2025-12-08 17:42:12.024672085 +0000 UTC m=+1.821795319,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 17:42:30 crc kubenswrapper[5116]: E1208 17:42:30.379484 5116 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.187f4e5b2c3dafe2 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:9f0bc7fcb0822a2c13eb2d22cd8c0641,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Created,Message:Created container: cluster-policy-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 17:42:12.443754466 +0000 UTC m=+2.240877700,LastTimestamp:2025-12-08 17:42:12.443754466 +0000 UTC m=+2.240877700,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 17:42:30 crc kubenswrapper[5116]: E1208 17:42:30.384583 5116 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.187f4e5b2cf7e283 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:9f0bc7fcb0822a2c13eb2d22cd8c0641,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Started,Message:Started container cluster-policy-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 17:42:12.455957123 +0000 UTC m=+2.253080357,LastTimestamp:2025-12-08 17:42:12.455957123 +0000 UTC m=+2.253080357,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 17:42:30 crc kubenswrapper[5116]: E1208 17:42:30.389124 5116 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace 
\"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.187f4e5b2d0cc322 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:9f0bc7fcb0822a2c13eb2d22cd8c0641,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager-cert-syncer},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 17:42:12.457325346 +0000 UTC m=+2.254448580,LastTimestamp:2025-12-08 17:42:12.457325346 +0000 UTC m=+2.254448580,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 17:42:30 crc kubenswrapper[5116]: E1208 17:42:30.394209 5116 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.187f4e5b3bb148e2 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{etcd-ensure-env-vars},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 17:42:12.702988514 +0000 UTC m=+2.500111748,LastTimestamp:2025-12-08 17:42:12.702988514 +0000 UTC m=+2.500111748,Count:1,Type:Normal,EventTime:0001-01-01 
00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 17:42:30 crc kubenswrapper[5116]: E1208 17:42:30.398720 5116 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-crc.187f4e5b3c04294b openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-crc,UID:4e08c320b1e9e2405e6e0107bdf7eeb4,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 17:42:12.708419915 +0000 UTC m=+2.505543139,LastTimestamp:2025-12-08 17:42:12.708419915 +0000 UTC m=+2.505543139,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 17:42:30 crc kubenswrapper[5116]: E1208 17:42:30.403716 5116 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.187f4e5b3c2852f9 openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:0b638b8f4bb0070e40528db779baf6a2,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler},},Reason:Pulled,Message:Container image 
\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 17:42:12.710789881 +0000 UTC m=+2.507913135,LastTimestamp:2025-12-08 17:42:12.710789881 +0000 UTC m=+2.507913135,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 17:42:30 crc kubenswrapper[5116]: E1208 17:42:30.408430 5116 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.187f4e5b3cbf327d openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 17:42:12.720677501 +0000 UTC m=+2.517800755,LastTimestamp:2025-12-08 17:42:12.720677501 +0000 UTC m=+2.517800755,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 17:42:30 crc kubenswrapper[5116]: E1208 17:42:30.413527 5116 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.187f4e5b4dc46948 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] 
map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{etcd-ensure-env-vars},},Reason:Created,Message:Created container: etcd-ensure-env-vars,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 17:42:13.00623188 +0000 UTC m=+2.803355114,LastTimestamp:2025-12-08 17:42:13.00623188 +0000 UTC m=+2.803355114,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 17:42:30 crc kubenswrapper[5116]: E1208 17:42:30.423719 5116 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.187f4e5b4de8ebfb openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Created,Message:Created container: kube-apiserver,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 17:42:13.008624635 +0000 UTC m=+2.805747869,LastTimestamp:2025-12-08 17:42:13.008624635 +0000 UTC m=+2.805747869,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 17:42:30 crc kubenswrapper[5116]: I1208 17:42:30.425208 5116 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:59026->192.168.126.11:17697: read: connection reset by peer" 
start-of-body= Dec 08 17:42:30 crc kubenswrapper[5116]: I1208 17:42:30.425300 5116 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:59026->192.168.126.11:17697: read: connection reset by peer" Dec 08 17:42:30 crc kubenswrapper[5116]: I1208 17:42:30.425611 5116 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" start-of-body= Dec 08 17:42:30 crc kubenswrapper[5116]: I1208 17:42:30.425731 5116 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" Dec 08 17:42:30 crc kubenswrapper[5116]: E1208 17:42:30.428680 5116 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-crc.187f4e5b4de93240 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-crc,UID:4e08c320b1e9e2405e6e0107bdf7eeb4,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:Created,Message:Created container: kube-rbac-proxy-crio,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 
17:42:13.008642624 +0000 UTC m=+2.805765858,LastTimestamp:2025-12-08 17:42:13.008642624 +0000 UTC m=+2.805765858,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 17:42:30 crc kubenswrapper[5116]: E1208 17:42:30.432981 5116 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.187f4e5b4e98a2ee openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:0b638b8f4bb0070e40528db779baf6a2,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler},},Reason:Created,Message:Created container: kube-scheduler,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 17:42:13.02014027 +0000 UTC m=+2.817263504,LastTimestamp:2025-12-08 17:42:13.02014027 +0000 UTC m=+2.817263504,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 17:42:30 crc kubenswrapper[5116]: E1208 17:42:30.437025 5116 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.187f4e5b4f4a564d openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Started,Message:Started container 
kube-apiserver,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 17:42:13.031786061 +0000 UTC m=+2.828909295,LastTimestamp:2025-12-08 17:42:13.031786061 +0000 UTC m=+2.828909295,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 17:42:30 crc kubenswrapper[5116]: E1208 17:42:30.441235 5116 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-crc.187f4e5b4f6d8913 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-crc,UID:4e08c320b1e9e2405e6e0107bdf7eeb4,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:Started,Message:Started container kube-rbac-proxy-crio,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 17:42:13.034092819 +0000 UTC m=+2.831216053,LastTimestamp:2025-12-08 17:42:13.034092819 +0000 UTC m=+2.831216053,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 17:42:30 crc kubenswrapper[5116]: E1208 17:42:30.445513 5116 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.187f4e5b4f7152f8 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-cert-syncer},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 17:42:13.034341112 +0000 UTC m=+2.831464336,LastTimestamp:2025-12-08 17:42:13.034341112 +0000 UTC m=+2.831464336,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 17:42:30 crc kubenswrapper[5116]: E1208 17:42:30.449745 5116 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.187f4e5b4fb195c5 openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:0b638b8f4bb0070e40528db779baf6a2,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler},},Reason:Started,Message:Started container kube-scheduler,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 17:42:13.038552517 +0000 UTC m=+2.835675751,LastTimestamp:2025-12-08 17:42:13.038552517 +0000 UTC m=+2.835675751,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 17:42:30 crc kubenswrapper[5116]: E1208 17:42:30.454465 5116 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource 
\"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.187f4e5b4fc405cc openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:0b638b8f4bb0070e40528db779baf6a2,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler-cert-syncer},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 17:42:13.039760844 +0000 UTC m=+2.836884078,LastTimestamp:2025-12-08 17:42:13.039760844 +0000 UTC m=+2.836884078,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 17:42:30 crc kubenswrapper[5116]: E1208 17:42:30.459764 5116 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.187f4e5b535899d1 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{etcd-ensure-env-vars},},Reason:Started,Message:Started container etcd-ensure-env-vars,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 17:42:13.099829713 +0000 UTC m=+2.896952947,LastTimestamp:2025-12-08 17:42:13.099829713 +0000 UTC m=+2.896952947,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 17:42:30 crc kubenswrapper[5116]: 
E1208 17:42:30.464361 5116 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.187f4e5b55e43fa0 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:9f0bc7fcb0822a2c13eb2d22cd8c0641,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager-cert-syncer},},Reason:Created,Message:Created container: kube-controller-manager-cert-syncer,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 17:42:13.142536096 +0000 UTC m=+2.939659330,LastTimestamp:2025-12-08 17:42:13.142536096 +0000 UTC m=+2.939659330,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 17:42:30 crc kubenswrapper[5116]: E1208 17:42:30.469068 5116 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.187f4e5b57e57ddd openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:9f0bc7fcb0822a2c13eb2d22cd8c0641,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager-cert-syncer},},Reason:Started,Message:Started container kube-controller-manager-cert-syncer,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 17:42:13.176171997 +0000 UTC m=+2.973295251,LastTimestamp:2025-12-08 17:42:13.176171997 +0000 
UTC m=+2.973295251,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 17:42:30 crc kubenswrapper[5116]: E1208 17:42:30.473237 5116 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.187f4e5b580c4373 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:9f0bc7fcb0822a2c13eb2d22cd8c0641,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager-recovery-controller},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 17:42:13.178712947 +0000 UTC m=+2.975836181,LastTimestamp:2025-12-08 17:42:13.178712947 +0000 UTC m=+2.975836181,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 17:42:30 crc kubenswrapper[5116]: E1208 17:42:30.480415 5116 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.187f4e5b5e4d06db openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-cert-syncer},},Reason:Created,Message:Created container: kube-apiserver-cert-syncer,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 17:42:13.283620571 +0000 UTC m=+3.080743805,LastTimestamp:2025-12-08 17:42:13.283620571 +0000 UTC m=+3.080743805,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 17:42:30 crc kubenswrapper[5116]: E1208 17:42:30.485530 5116 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.187f4e5b5e78dc1d openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:0b638b8f4bb0070e40528db779baf6a2,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler-cert-syncer},},Reason:Created,Message:Created container: kube-scheduler-cert-syncer,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 17:42:13.286493213 +0000 UTC m=+3.083616447,LastTimestamp:2025-12-08 17:42:13.286493213 +0000 UTC m=+3.083616447,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 17:42:30 crc kubenswrapper[5116]: E1208 17:42:30.490407 5116 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" 
event="&Event{ObjectMeta:{kube-apiserver-crc.187f4e5b6025c63b openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-cert-syncer},},Reason:Started,Message:Started container kube-apiserver-cert-syncer,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 17:42:13.314602555 +0000 UTC m=+3.111725789,LastTimestamp:2025-12-08 17:42:13.314602555 +0000 UTC m=+3.111725789,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 17:42:30 crc kubenswrapper[5116]: E1208 17:42:30.496581 5116 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.187f4e5b603a7aac openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-cert-regeneration-controller},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 17:42:13.315959468 +0000 UTC m=+3.113082722,LastTimestamp:2025-12-08 17:42:13.315959468 +0000 UTC m=+3.113082722,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 17:42:30 crc kubenswrapper[5116]: E1208 
17:42:30.501498 5116 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.187f4e5b61508fb4 openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:0b638b8f4bb0070e40528db779baf6a2,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler-cert-syncer},},Reason:Started,Message:Started container kube-scheduler-cert-syncer,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 17:42:13.33418386 +0000 UTC m=+3.131307084,LastTimestamp:2025-12-08 17:42:13.33418386 +0000 UTC m=+3.131307084,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 17:42:30 crc kubenswrapper[5116]: E1208 17:42:30.506118 5116 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.187f4e5b616ea27a openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:0b638b8f4bb0070e40528db779baf6a2,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler-recovery-controller},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 17:42:13.336154746 +0000 UTC 
m=+3.133277980,LastTimestamp:2025-12-08 17:42:13.336154746 +0000 UTC m=+3.133277980,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 17:42:30 crc kubenswrapper[5116]: E1208 17:42:30.514354 5116 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.187f4e5b698013ba openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:9f0bc7fcb0822a2c13eb2d22cd8c0641,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager-recovery-controller},},Reason:Created,Message:Created container: kube-controller-manager-recovery-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 17:42:13.471515578 +0000 UTC m=+3.268638812,LastTimestamp:2025-12-08 17:42:13.471515578 +0000 UTC m=+3.268638812,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 17:42:30 crc kubenswrapper[5116]: E1208 17:42:30.518953 5116 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.187f4e5b6a673b24 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:9f0bc7fcb0822a2c13eb2d22cd8c0641,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager-recovery-controller},},Reason:Started,Message:Started container kube-controller-manager-recovery-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 17:42:13.486664484 +0000 UTC m=+3.283787718,LastTimestamp:2025-12-08 17:42:13.486664484 +0000 UTC m=+3.283787718,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 17:42:30 crc kubenswrapper[5116]: E1208 17:42:30.522655 5116 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.187f4e5b740225c8 openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:0b638b8f4bb0070e40528db779baf6a2,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler-recovery-controller},},Reason:Created,Message:Created container: kube-scheduler-recovery-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 17:42:13.64781204 +0000 UTC m=+3.444935274,LastTimestamp:2025-12-08 17:42:13.64781204 +0000 UTC m=+3.444935274,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 17:42:30 crc kubenswrapper[5116]: E1208 17:42:30.528799 5116 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace 
\"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.187f4e5b74b5ab99 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-cert-regeneration-controller},},Reason:Created,Message:Created container: kube-apiserver-cert-regeneration-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 17:42:13.659577241 +0000 UTC m=+3.456700495,LastTimestamp:2025-12-08 17:42:13.659577241 +0000 UTC m=+3.456700495,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 17:42:30 crc kubenswrapper[5116]: E1208 17:42:30.534149 5116 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.187f4e5b755c2acf openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:0b638b8f4bb0070e40528db779baf6a2,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler-recovery-controller},},Reason:Started,Message:Started container kube-scheduler-recovery-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 17:42:13.670488783 +0000 UTC m=+3.467612017,LastTimestamp:2025-12-08 17:42:13.670488783 +0000 UTC m=+3.467612017,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 17:42:30 crc kubenswrapper[5116]: I1208 17:42:30.632958 5116 csi_plugin.go:988] Failed to 
contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Dec 08 17:42:30 crc kubenswrapper[5116]: E1208 17:42:30.633584 5116 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.187f4e5b76275c58 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-cert-regeneration-controller},},Reason:Started,Message:Started container kube-apiserver-cert-regeneration-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 17:42:13.683805272 +0000 UTC m=+3.480928516,LastTimestamp:2025-12-08 17:42:13.683805272 +0000 UTC m=+3.480928516,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 17:42:30 crc kubenswrapper[5116]: E1208 17:42:30.647925 5116 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.187f4e5b764d0db8 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-insecure-readyz},},Reason:Pulled,Message:Container image 
\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 17:42:13.686275512 +0000 UTC m=+3.483398766,LastTimestamp:2025-12-08 17:42:13.686275512 +0000 UTC m=+3.483398766,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 17:42:30 crc kubenswrapper[5116]: E1208 17:42:30.654199 5116 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.187f4e5b78ab6a33 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{etcd-resources-copy},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 17:42:13.726014003 +0000 UTC m=+3.523137227,LastTimestamp:2025-12-08 17:42:13.726014003 +0000 UTC m=+3.523137227,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 17:42:30 crc kubenswrapper[5116]: E1208 17:42:30.665422 5116 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.187f4e5b86a29049 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] 
map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-insecure-readyz},},Reason:Created,Message:Created container: kube-apiserver-insecure-readyz,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 17:42:13.960314953 +0000 UTC m=+3.757438187,LastTimestamp:2025-12-08 17:42:13.960314953 +0000 UTC m=+3.757438187,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Dec 08 17:42:30 crc kubenswrapper[5116]: E1208 17:42:30.733014 5116 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.187f4e5b878b72dd openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{etcd-resources-copy},},Reason:Created,Message:Created container: etcd-resources-copy,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 17:42:13.975577309 +0000 UTC m=+3.772700543,LastTimestamp:2025-12-08 17:42:13.975577309 +0000 UTC m=+3.772700543,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Dec 08 17:42:30 crc kubenswrapper[5116]: E1208 17:42:30.739358 5116 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.187f4e5b87d38dab openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-insecure-readyz},},Reason:Started,Message:Started container kube-apiserver-insecure-readyz,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 17:42:13.980302763 +0000 UTC m=+3.777425997,LastTimestamp:2025-12-08 17:42:13.980302763 +0000 UTC m=+3.777425997,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Dec 08 17:42:30 crc kubenswrapper[5116]: E1208 17:42:30.753770 5116 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found"
Dec 08 17:42:30 crc kubenswrapper[5116]: E1208 17:42:30.756925 5116 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.187f4e5b87e923c7 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 17:42:13.981717447 +0000 UTC m=+3.778840681,LastTimestamp:2025-12-08 17:42:13.981717447 +0000 UTC m=+3.778840681,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Dec 08 17:42:30 crc kubenswrapper[5116]: E1208 17:42:30.761822 5116 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.187f4e5b887af10b openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{etcd-resources-copy},},Reason:Started,Message:Started container etcd-resources-copy,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 17:42:13.991272715 +0000 UTC m=+3.788395949,LastTimestamp:2025-12-08 17:42:13.991272715 +0000 UTC m=+3.788395949,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Dec 08 17:42:30 crc kubenswrapper[5116]: E1208 17:42:30.766941 5116 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.187f4e5ba05cd425 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Created,Message:Created container: kube-apiserver-check-endpoints,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 17:42:14.391952421 +0000 UTC m=+4.189075655,LastTimestamp:2025-12-08 17:42:14.391952421 +0000 UTC m=+4.189075655,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Dec 08 17:42:30 crc kubenswrapper[5116]: E1208 17:42:30.773338 5116 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.187f4e5ba1395544 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Started,Message:Started container kube-apiserver-check-endpoints,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 17:42:14.406403396 +0000 UTC m=+4.203526630,LastTimestamp:2025-12-08 17:42:14.406403396 +0000 UTC m=+4.203526630,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Dec 08 17:42:30 crc kubenswrapper[5116]: I1208 17:42:30.782602 5116 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Dec 08 17:42:30 crc kubenswrapper[5116]: I1208 17:42:30.782899 5116 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Dec 08 17:42:30 crc kubenswrapper[5116]: I1208 17:42:30.783834 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 08 17:42:30 crc kubenswrapper[5116]: I1208 17:42:30.783878 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 08 17:42:30 crc kubenswrapper[5116]: I1208 17:42:30.783893 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 08 17:42:30 crc kubenswrapper[5116]: E1208 17:42:30.784232 5116 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Dec 08 17:42:30 crc kubenswrapper[5116]: E1208 17:42:30.786465 5116 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.187f4e5bb57d915d openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcdctl},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 17:42:14.746419549 +0000 UTC m=+4.543542773,LastTimestamp:2025-12-08 17:42:14.746419549 +0000 UTC m=+4.543542773,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Dec 08 17:42:30 crc kubenswrapper[5116]: E1208 17:42:30.791578 5116 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.187f4e5bc27102b2 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcdctl},},Reason:Created,Message:Created container: etcdctl,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 17:42:14.963700402 +0000 UTC m=+4.760823636,LastTimestamp:2025-12-08 17:42:14.963700402 +0000 UTC m=+4.760823636,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Dec 08 17:42:30 crc kubenswrapper[5116]: E1208 17:42:30.795709 5116 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.187f4e5bc34232cd openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcdctl},},Reason:Started,Message:Started container etcdctl,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 17:42:14.977409741 +0000 UTC m=+4.774532975,LastTimestamp:2025-12-08 17:42:14.977409741 +0000 UTC m=+4.774532975,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Dec 08 17:42:30 crc kubenswrapper[5116]: I1208 17:42:30.800983 5116 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-check-endpoints/0.log"
Dec 08 17:42:30 crc kubenswrapper[5116]: I1208 17:42:30.802693 5116 generic.go:358] "Generic (PLEG): container finished" podID="3a14caf222afb62aaabdc47808b6f944" containerID="4cf876a8cfb386d0fc2c68fdb4b8b13c44b57adf3d1d2e50590d70212343a333" exitCode=255
Dec 08 17:42:30 crc kubenswrapper[5116]: I1208 17:42:30.802836 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"3a14caf222afb62aaabdc47808b6f944","Type":"ContainerDied","Data":"4cf876a8cfb386d0fc2c68fdb4b8b13c44b57adf3d1d2e50590d70212343a333"}
Dec 08 17:42:30 crc kubenswrapper[5116]: I1208 17:42:30.803058 5116 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Dec 08 17:42:30 crc kubenswrapper[5116]: E1208 17:42:30.803032 5116 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.187f4e5bc34d76bc openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 17:42:14.978148028 +0000 UTC m=+4.775271262,LastTimestamp:2025-12-08 17:42:14.978148028 +0000 UTC m=+4.775271262,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Dec 08 17:42:30 crc kubenswrapper[5116]: I1208 17:42:30.803668 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 08 17:42:30 crc kubenswrapper[5116]: I1208 17:42:30.803698 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 08 17:42:30 crc kubenswrapper[5116]: I1208 17:42:30.803707 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 08 17:42:30 crc kubenswrapper[5116]: E1208 17:42:30.804011 5116 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Dec 08 17:42:30 crc kubenswrapper[5116]: I1208 17:42:30.804280 5116 scope.go:117] "RemoveContainer" containerID="4cf876a8cfb386d0fc2c68fdb4b8b13c44b57adf3d1d2e50590d70212343a333"
Dec 08 17:42:30 crc kubenswrapper[5116]: E1208 17:42:30.809107 5116 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.187f4e5bce2a287c openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd},},Reason:Created,Message:Created container: etcd,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 17:42:15.160383612 +0000 UTC m=+4.957506846,LastTimestamp:2025-12-08 17:42:15.160383612 +0000 UTC m=+4.957506846,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Dec 08 17:42:30 crc kubenswrapper[5116]: E1208 17:42:30.813260 5116 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.187f4e5bceb68218 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd},},Reason:Started,Message:Started container etcd,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 17:42:15.169581592 +0000 UTC m=+4.966704826,LastTimestamp:2025-12-08 17:42:15.169581592 +0000 UTC m=+4.966704826,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Dec 08 17:42:30 crc kubenswrapper[5116]: E1208 17:42:30.821081 5116 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.187f4e5bcec8e993 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-metrics},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 17:42:15.170787731 +0000 UTC m=+4.967910965,LastTimestamp:2025-12-08 17:42:15.170787731 +0000 UTC m=+4.967910965,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Dec 08 17:42:30 crc kubenswrapper[5116]: E1208 17:42:30.838475 5116 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.187f4e5bd8aae065 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-metrics},},Reason:Created,Message:Created container: etcd-metrics,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 17:42:15.336591461 +0000 UTC m=+5.133714695,LastTimestamp:2025-12-08 17:42:15.336591461 +0000 UTC m=+5.133714695,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Dec 08 17:42:30 crc kubenswrapper[5116]: E1208 17:42:30.845188 5116 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.187f4e5bd930a173 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-metrics},},Reason:Started,Message:Started container etcd-metrics,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 17:42:15.345357171 +0000 UTC m=+5.142480405,LastTimestamp:2025-12-08 17:42:15.345357171 +0000 UTC m=+5.142480405,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Dec 08 17:42:30 crc kubenswrapper[5116]: E1208 17:42:30.855547 5116 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.187f4e5bd93d7caa openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-readyz},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 17:42:15.346199722 +0000 UTC m=+5.143322956,LastTimestamp:2025-12-08 17:42:15.346199722 +0000 UTC m=+5.143322956,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Dec 08 17:42:30 crc kubenswrapper[5116]: E1208 17:42:30.860175 5116 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.187f4e5be4d7e097 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-readyz},},Reason:Created,Message:Created container: etcd-readyz,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 17:42:15.540867223 +0000 UTC m=+5.337990457,LastTimestamp:2025-12-08 17:42:15.540867223 +0000 UTC m=+5.337990457,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Dec 08 17:42:30 crc kubenswrapper[5116]: E1208 17:42:30.877482 5116 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.187f4e5be5b0e12c openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-readyz},},Reason:Started,Message:Started container etcd-readyz,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 17:42:15.555088684 +0000 UTC m=+5.352211928,LastTimestamp:2025-12-08 17:42:15.555088684 +0000 UTC m=+5.352211928,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Dec 08 17:42:30 crc kubenswrapper[5116]: E1208 17:42:30.889208 5116 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.187f4e5be5c55a28 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-rev},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 17:42:15.556430376 +0000 UTC m=+5.353553630,LastTimestamp:2025-12-08 17:42:15.556430376 +0000 UTC m=+5.353553630,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Dec 08 17:42:30 crc kubenswrapper[5116]: E1208 17:42:30.895156 5116 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.187f4e5bf1a98cd6 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-rev},},Reason:Created,Message:Created container: etcd-rev,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 17:42:15.755934934 +0000 UTC m=+5.553058168,LastTimestamp:2025-12-08 17:42:15.755934934 +0000 UTC m=+5.553058168,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Dec 08 17:42:30 crc kubenswrapper[5116]: E1208 17:42:30.903359 5116 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.187f4e5bf271800a openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-rev},},Reason:Started,Message:Started container etcd-rev,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 17:42:15.769038858 +0000 UTC m=+5.566162092,LastTimestamp:2025-12-08 17:42:15.769038858 +0000 UTC m=+5.566162092,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Dec 08 17:42:30 crc kubenswrapper[5116]: E1208 17:42:30.918958 5116 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event=<
Dec 08 17:42:30 crc kubenswrapper[5116]: &Event{ObjectMeta:{kube-controller-manager-crc.187f4e5c919cd722 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:9f0bc7fcb0822a2c13eb2d22cd8c0641,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:ProbeError,Message:Startup probe error: Get "https://localhost:10357/healthz": context deadline exceeded
Dec 08 17:42:30 crc kubenswrapper[5116]: body:
Dec 08 17:42:30 crc kubenswrapper[5116]: ,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 17:42:18.439456546 +0000 UTC m=+8.236579790,LastTimestamp:2025-12-08 17:42:18.439456546 +0000 UTC m=+8.236579790,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}
Dec 08 17:42:30 crc kubenswrapper[5116]: >
Dec 08 17:42:30 crc kubenswrapper[5116]: E1208 17:42:30.933280 5116 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.187f4e5c919ebd97 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:9f0bc7fcb0822a2c13eb2d22cd8c0641,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Unhealthy,Message:Startup probe failed: Get \"https://localhost:10357/healthz\": context deadline exceeded,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 17:42:18.439581079 +0000 UTC m=+8.236704323,LastTimestamp:2025-12-08 17:42:18.439581079 +0000 UTC m=+8.236704323,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Dec 08 17:42:30 crc kubenswrapper[5116]: E1208 17:42:30.940855 5116 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event=<
Dec 08 17:42:30 crc kubenswrapper[5116]: &Event{ObjectMeta:{kube-apiserver-crc.187f4e5e0ad63a5d openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:ProbeError,Message:Liveness probe error: Get "https://192.168.126.11:17697/healthz": dial tcp 192.168.126.11:17697: connect: connection refused
Dec 08 17:42:30 crc kubenswrapper[5116]: body:
Dec 08 17:42:30 crc kubenswrapper[5116]: ,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 17:42:24.768227933 +0000 UTC m=+14.565351207,LastTimestamp:2025-12-08 17:42:24.768227933 +0000 UTC m=+14.565351207,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}
Dec 08 17:42:30 crc kubenswrapper[5116]: >
Dec 08 17:42:30 crc kubenswrapper[5116]: E1208 17:42:30.952282 5116 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.187f4e5e0ad7b868 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Unhealthy,Message:Liveness probe failed: Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 17:42:24.768325736 +0000 UTC m=+14.565449010,LastTimestamp:2025-12-08 17:42:24.768325736 +0000 UTC m=+14.565449010,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Dec 08 17:42:30 crc kubenswrapper[5116]: E1208 17:42:30.962323 5116 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event=<
Dec 08 17:42:30 crc kubenswrapper[5116]: &Event{ObjectMeta:{kube-apiserver-crc.187f4e5e1ee52fd9 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:ProbeError,Message:Startup probe error: Get "https://192.168.126.11:6443/livez": context deadline exceeded
Dec 08 17:42:30 crc kubenswrapper[5116]: body:
Dec 08 17:42:30 crc kubenswrapper[5116]: ,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 17:42:25.104752601 +0000 UTC m=+14.901875835,LastTimestamp:2025-12-08 17:42:25.104752601 +0000 UTC m=+14.901875835,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}
Dec 08 17:42:30 crc kubenswrapper[5116]: >
Dec 08 17:42:30 crc kubenswrapper[5116]: E1208 17:42:30.970073 5116 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.187f4e5e1ee5faf0 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Unhealthy,Message:Startup probe failed: Get \"https://192.168.126.11:6443/livez\": context deadline exceeded,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 17:42:25.104804592 +0000 UTC m=+14.901927826,LastTimestamp:2025-12-08 17:42:25.104804592 +0000 UTC m=+14.901927826,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Dec 08 17:42:30 crc kubenswrapper[5116]: E1208 17:42:30.975518 5116 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event=<
Dec 08 17:42:30 crc kubenswrapper[5116]: &Event{ObjectMeta:{kube-apiserver-crc.187f4e5e235684e7 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:ProbeError,Message:Startup probe error: HTTP probe failed with statuscode: 403
Dec 08 17:42:30 crc kubenswrapper[5116]: body: {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403}
Dec 08 17:42:30 crc kubenswrapper[5116]:
Dec 08 17:42:30 crc kubenswrapper[5116]: ,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 17:42:25.179288807 +0000 UTC m=+14.976412091,LastTimestamp:2025-12-08 17:42:25.179288807 +0000 UTC m=+14.976412091,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}
Dec 08 17:42:30 crc kubenswrapper[5116]: >
Dec 08 17:42:30 crc kubenswrapper[5116]: E1208 17:42:30.980123 5116 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.187f4e5e2357ec2b openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Unhealthy,Message:Startup probe failed: HTTP probe failed with statuscode: 403,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 17:42:25.179380779 +0000 UTC m=+14.976504053,LastTimestamp:2025-12-08 17:42:25.179380779 +0000 UTC m=+14.976504053,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Dec 08 17:42:31 crc kubenswrapper[5116]: E1208 17:42:30.986638 5116 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event=<
Dec 08 17:42:31 crc kubenswrapper[5116]: &Event{ObjectMeta:{kube-controller-manager-crc.187f4e5ee5b0c713 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:9f0bc7fcb0822a2c13eb2d22cd8c0641,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:ProbeError,Message:Startup probe error: Get "https://localhost:10357/healthz": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
Dec 08 17:42:31 crc kubenswrapper[5116]: body:
Dec 08 17:42:31 crc kubenswrapper[5116]: ,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 17:42:28.439983891 +0000 UTC m=+18.237107125,LastTimestamp:2025-12-08 17:42:28.439983891 +0000 UTC m=+18.237107125,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}
Dec 08 17:42:31 crc kubenswrapper[5116]: >
Dec 08 17:42:31 crc kubenswrapper[5116]: E1208 17:42:30.992470 5116 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.187f4e5ee5b1dcb2 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:9f0bc7fcb0822a2c13eb2d22cd8c0641,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Unhealthy,Message:Startup probe failed: Get \"https://localhost:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers),Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 17:42:28.440054962 +0000 UTC m=+18.237178196,LastTimestamp:2025-12-08 17:42:28.440054962 +0000 UTC m=+18.237178196,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Dec 08 17:42:31 crc kubenswrapper[5116]: E1208 17:42:31.009688 5116 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event=<
Dec 08 17:42:31 crc kubenswrapper[5116]: &Event{ObjectMeta:{kube-apiserver-crc.187f4e5f5c05fb3a openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:ProbeError,Message:Readiness probe error: Get "https://192.168.126.11:17697/healthz": read tcp 192.168.126.11:59026->192.168.126.11:17697: read: connection reset by peer
Dec 08 17:42:31 crc kubenswrapper[5116]: body:
Dec 08 17:42:31 crc kubenswrapper[5116]: ,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 17:42:30.42527929 +0000 UTC m=+20.222402524,LastTimestamp:2025-12-08 17:42:30.42527929 +0000 UTC m=+20.222402524,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}
Dec 08 17:42:31 crc kubenswrapper[5116]: >
Dec 08 17:42:31 crc kubenswrapper[5116]: E1208 17:42:31.014014 5116 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.187f4e5f5c06a1b8 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Unhealthy,Message:Readiness probe failed: Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:59026->192.168.126.11:17697: read: connection reset by peer,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 17:42:30.425321912 +0000 UTC m=+20.222445146,LastTimestamp:2025-12-08 17:42:30.425321912 +0000 UTC m=+20.222445146,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Dec 08 17:42:31 crc kubenswrapper[5116]: E1208 17:42:31.018590 5116 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event=<
Dec 08 17:42:31 crc kubenswrapper[5116]: &Event{ObjectMeta:{kube-apiserver-crc.187f4e5f5c0c7e9d openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:ProbeError,Message:Readiness probe error: Get "https://192.168.126.11:17697/healthz": dial tcp 192.168.126.11:17697: connect: connection refused
Dec 08 17:42:31 crc kubenswrapper[5116]: body:
Dec 08 17:42:31 crc kubenswrapper[5116]: ,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 17:42:30.425706141 +0000 UTC m=+20.222829375,LastTimestamp:2025-12-08 17:42:30.425706141 +0000 UTC m=+20.222829375,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,} Dec 08 17:42:31 crc kubenswrapper[5116]: > Dec 08 17:42:31 crc kubenswrapper[5116]: E1208 17:42:31.026664 5116 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.187f4e5f5c0f6266 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Unhealthy,Message:Readiness probe failed: Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 17:42:30.425895526 +0000 UTC m=+20.223018780,LastTimestamp:2025-12-08 17:42:30.425895526 +0000 UTC m=+20.223018780,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 17:42:31 crc kubenswrapper[5116]: E1208 17:42:31.031448 5116 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.187f4e5b87e923c7\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.187f4e5b87e923c7 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Pulled,Message:Container image 
\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 17:42:13.981717447 +0000 UTC m=+3.778840681,LastTimestamp:2025-12-08 17:42:30.805490249 +0000 UTC m=+20.602613483,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 17:42:31 crc kubenswrapper[5116]: E1208 17:42:31.128993 5116 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.187f4e5ba05cd425\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.187f4e5ba05cd425 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Created,Message:Created container: kube-apiserver-check-endpoints,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 17:42:14.391952421 +0000 UTC m=+4.189075655,LastTimestamp:2025-12-08 17:42:31.123999792 +0000 UTC m=+20.921123026,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 17:42:31 crc kubenswrapper[5116]: E1208 17:42:31.137668 5116 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.187f4e5ba1395544\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.187f4e5ba1395544 openshift-kube-apiserver 0 0001-01-01 
00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Started,Message:Started container kube-apiserver-check-endpoints,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 17:42:14.406403396 +0000 UTC m=+4.203526630,LastTimestamp:2025-12-08 17:42:31.133695428 +0000 UTC m=+20.930818662,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 17:42:31 crc kubenswrapper[5116]: I1208 17:42:31.571141 5116 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Dec 08 17:42:31 crc kubenswrapper[5116]: I1208 17:42:31.807559 5116 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-check-endpoints/0.log" Dec 08 17:42:31 crc kubenswrapper[5116]: I1208 17:42:31.809349 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"3a14caf222afb62aaabdc47808b6f944","Type":"ContainerStarted","Data":"eb9c5000ba958106ac44d82662175377f15d153b9fa968f29b2098acfe0f5296"} Dec 08 17:42:31 crc kubenswrapper[5116]: I1208 17:42:31.809448 5116 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 08 17:42:31 crc kubenswrapper[5116]: I1208 17:42:31.809993 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 17:42:31 crc kubenswrapper[5116]: I1208 17:42:31.810023 5116 kubelet_node_status.go:736] "Recording event message for 
node" node="crc" event="NodeHasNoDiskPressure" Dec 08 17:42:31 crc kubenswrapper[5116]: I1208 17:42:31.810035 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 17:42:31 crc kubenswrapper[5116]: E1208 17:42:31.810351 5116 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Dec 08 17:42:32 crc kubenswrapper[5116]: I1208 17:42:32.549825 5116 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Dec 08 17:42:32 crc kubenswrapper[5116]: I1208 17:42:32.813836 5116 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-check-endpoints/1.log" Dec 08 17:42:32 crc kubenswrapper[5116]: I1208 17:42:32.814463 5116 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-check-endpoints/0.log" Dec 08 17:42:32 crc kubenswrapper[5116]: I1208 17:42:32.815997 5116 generic.go:358] "Generic (PLEG): container finished" podID="3a14caf222afb62aaabdc47808b6f944" containerID="eb9c5000ba958106ac44d82662175377f15d153b9fa968f29b2098acfe0f5296" exitCode=255 Dec 08 17:42:32 crc kubenswrapper[5116]: I1208 17:42:32.816077 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"3a14caf222afb62aaabdc47808b6f944","Type":"ContainerDied","Data":"eb9c5000ba958106ac44d82662175377f15d153b9fa968f29b2098acfe0f5296"} Dec 08 17:42:32 crc kubenswrapper[5116]: I1208 17:42:32.816129 5116 scope.go:117] "RemoveContainer" containerID="4cf876a8cfb386d0fc2c68fdb4b8b13c44b57adf3d1d2e50590d70212343a333" Dec 08 17:42:32 crc kubenswrapper[5116]: 
I1208 17:42:32.816158 5116 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 08 17:42:32 crc kubenswrapper[5116]: I1208 17:42:32.816916 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 17:42:32 crc kubenswrapper[5116]: I1208 17:42:32.816951 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 17:42:32 crc kubenswrapper[5116]: I1208 17:42:32.816960 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 17:42:32 crc kubenswrapper[5116]: E1208 17:42:32.817236 5116 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Dec 08 17:42:32 crc kubenswrapper[5116]: I1208 17:42:32.817541 5116 scope.go:117] "RemoveContainer" containerID="eb9c5000ba958106ac44d82662175377f15d153b9fa968f29b2098acfe0f5296" Dec 08 17:42:32 crc kubenswrapper[5116]: E1208 17:42:32.817811 5116 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" Dec 08 17:42:32 crc kubenswrapper[5116]: E1208 17:42:32.822425 5116 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.187f4e5feaa012ae openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:BackOff,Message:Back-off restarting failed container kube-apiserver-check-endpoints in pod kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944),Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 17:42:32.81774251 +0000 UTC m=+22.614865744,LastTimestamp:2025-12-08 17:42:32.81774251 +0000 UTC m=+22.614865744,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 17:42:33 crc kubenswrapper[5116]: I1208 17:42:33.097520 5116 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-etcd/etcd-crc" Dec 08 17:42:33 crc kubenswrapper[5116]: I1208 17:42:33.097872 5116 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 08 17:42:33 crc kubenswrapper[5116]: I1208 17:42:33.098901 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 17:42:33 crc kubenswrapper[5116]: I1208 17:42:33.098946 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 17:42:33 crc kubenswrapper[5116]: I1208 17:42:33.098960 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 17:42:33 crc kubenswrapper[5116]: E1208 17:42:33.099377 5116 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Dec 08 17:42:33 crc kubenswrapper[5116]: I1208 17:42:33.111694 5116 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-etcd/etcd-crc" Dec 08 17:42:33 crc 
kubenswrapper[5116]: E1208 17:42:33.186848 5116 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="7s" Dec 08 17:42:33 crc kubenswrapper[5116]: I1208 17:42:33.551951 5116 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Dec 08 17:42:33 crc kubenswrapper[5116]: I1208 17:42:33.820367 5116 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-check-endpoints/1.log" Dec 08 17:42:33 crc kubenswrapper[5116]: I1208 17:42:33.822120 5116 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 08 17:42:33 crc kubenswrapper[5116]: I1208 17:42:33.822120 5116 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 08 17:42:33 crc kubenswrapper[5116]: I1208 17:42:33.822643 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 17:42:33 crc kubenswrapper[5116]: I1208 17:42:33.822672 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 17:42:33 crc kubenswrapper[5116]: I1208 17:42:33.822682 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 17:42:33 crc kubenswrapper[5116]: I1208 17:42:33.822769 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 17:42:33 crc kubenswrapper[5116]: I1208 17:42:33.822791 5116 kubelet_node_status.go:736] "Recording event message for 
node" node="crc" event="NodeHasNoDiskPressure" Dec 08 17:42:33 crc kubenswrapper[5116]: I1208 17:42:33.822802 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 17:42:33 crc kubenswrapper[5116]: E1208 17:42:33.822949 5116 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Dec 08 17:42:33 crc kubenswrapper[5116]: I1208 17:42:33.823196 5116 scope.go:117] "RemoveContainer" containerID="eb9c5000ba958106ac44d82662175377f15d153b9fa968f29b2098acfe0f5296" Dec 08 17:42:33 crc kubenswrapper[5116]: E1208 17:42:33.823204 5116 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Dec 08 17:42:33 crc kubenswrapper[5116]: E1208 17:42:33.823394 5116 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" Dec 08 17:42:33 crc kubenswrapper[5116]: E1208 17:42:33.831256 5116 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.187f4e5feaa012ae\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.187f4e5feaa012ae openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:BackOff,Message:Back-off restarting 
failed container kube-apiserver-check-endpoints in pod kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944),Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 17:42:32.81774251 +0000 UTC m=+22.614865744,LastTimestamp:2025-12-08 17:42:33.823366568 +0000 UTC m=+23.620489802,Count:2,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 17:42:34 crc kubenswrapper[5116]: I1208 17:42:34.550315 5116 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Dec 08 17:42:34 crc kubenswrapper[5116]: I1208 17:42:34.766272 5116 kubelet.go:2658] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 08 17:42:34 crc kubenswrapper[5116]: I1208 17:42:34.825181 5116 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 08 17:42:34 crc kubenswrapper[5116]: I1208 17:42:34.825906 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 17:42:34 crc kubenswrapper[5116]: I1208 17:42:34.825962 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 17:42:34 crc kubenswrapper[5116]: I1208 17:42:34.825978 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 17:42:34 crc kubenswrapper[5116]: E1208 17:42:34.826648 5116 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Dec 08 17:42:34 crc kubenswrapper[5116]: I1208 17:42:34.827075 5116 scope.go:117] "RemoveContainer" 
containerID="eb9c5000ba958106ac44d82662175377f15d153b9fa968f29b2098acfe0f5296" Dec 08 17:42:34 crc kubenswrapper[5116]: E1208 17:42:34.827418 5116 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" Dec 08 17:42:34 crc kubenswrapper[5116]: E1208 17:42:34.832688 5116 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.187f4e5feaa012ae\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.187f4e5feaa012ae openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:BackOff,Message:Back-off restarting failed container kube-apiserver-check-endpoints in pod kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944),Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 17:42:32.81774251 +0000 UTC m=+22.614865744,LastTimestamp:2025-12-08 17:42:34.827368982 +0000 UTC m=+24.624492216,Count:3,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 17:42:35 crc kubenswrapper[5116]: E1208 17:42:35.374364 5116 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:anonymous\" cannot list resource \"services\" in API group \"\" at the cluster scope" 
logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Dec 08 17:42:35 crc kubenswrapper[5116]: I1208 17:42:35.444511 5116 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 08 17:42:35 crc kubenswrapper[5116]: I1208 17:42:35.444750 5116 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 08 17:42:35 crc kubenswrapper[5116]: I1208 17:42:35.445716 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 17:42:35 crc kubenswrapper[5116]: I1208 17:42:35.445808 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 17:42:35 crc kubenswrapper[5116]: I1208 17:42:35.445831 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 17:42:35 crc kubenswrapper[5116]: E1208 17:42:35.446512 5116 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Dec 08 17:42:35 crc kubenswrapper[5116]: I1208 17:42:35.514454 5116 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 08 17:42:35 crc kubenswrapper[5116]: I1208 17:42:35.549183 5116 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Dec 08 17:42:35 crc kubenswrapper[5116]: I1208 17:42:35.827015 5116 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 08 17:42:35 crc kubenswrapper[5116]: I1208 17:42:35.827621 5116 kubelet_node_status.go:736] "Recording event message for node" 
node="crc" event="NodeHasSufficientMemory" Dec 08 17:42:35 crc kubenswrapper[5116]: I1208 17:42:35.827656 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 17:42:35 crc kubenswrapper[5116]: I1208 17:42:35.827670 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 17:42:35 crc kubenswrapper[5116]: E1208 17:42:35.827996 5116 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Dec 08 17:42:36 crc kubenswrapper[5116]: I1208 17:42:36.550909 5116 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Dec 08 17:42:36 crc kubenswrapper[5116]: I1208 17:42:36.579408 5116 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 08 17:42:36 crc kubenswrapper[5116]: I1208 17:42:36.580794 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 17:42:36 crc kubenswrapper[5116]: I1208 17:42:36.580942 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 17:42:36 crc kubenswrapper[5116]: I1208 17:42:36.581061 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 17:42:36 crc kubenswrapper[5116]: I1208 17:42:36.581180 5116 kubelet_node_status.go:78] "Attempting to register node" node="crc" Dec 08 17:42:36 crc kubenswrapper[5116]: E1208 17:42:36.590939 5116 kubelet_node_status.go:116] "Unable to register node with API server, error getting existing node" err="nodes \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"nodes\" in API group \"\" at the 
cluster scope" node="crc" Dec 08 17:42:37 crc kubenswrapper[5116]: I1208 17:42:37.556119 5116 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Dec 08 17:42:37 crc kubenswrapper[5116]: E1208 17:42:37.639805 5116 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: nodes \"crc\" is forbidden: User \"system:anonymous\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Dec 08 17:42:38 crc kubenswrapper[5116]: I1208 17:42:38.554422 5116 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Dec 08 17:42:38 crc kubenswrapper[5116]: E1208 17:42:38.968760 5116 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User \"system:anonymous\" cannot list resource \"runtimeclasses\" in API group \"node.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Dec 08 17:42:39 crc kubenswrapper[5116]: I1208 17:42:39.555231 5116 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Dec 08 17:42:40 crc kubenswrapper[5116]: E1208 17:42:40.127786 5116 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:anonymous\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" 
logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Dec 08 17:42:40 crc kubenswrapper[5116]: E1208 17:42:40.196432 5116 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="7s" Dec 08 17:42:40 crc kubenswrapper[5116]: I1208 17:42:40.547192 5116 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Dec 08 17:42:40 crc kubenswrapper[5116]: E1208 17:42:40.754921 5116 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Dec 08 17:42:41 crc kubenswrapper[5116]: I1208 17:42:41.552567 5116 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Dec 08 17:42:41 crc kubenswrapper[5116]: I1208 17:42:41.809927 5116 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 08 17:42:41 crc kubenswrapper[5116]: I1208 17:42:41.810183 5116 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 08 17:42:41 crc kubenswrapper[5116]: I1208 17:42:41.811045 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 17:42:41 crc kubenswrapper[5116]: I1208 17:42:41.811101 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 17:42:41 crc kubenswrapper[5116]: I1208 17:42:41.811114 5116 
kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 17:42:41 crc kubenswrapper[5116]: E1208 17:42:41.811459 5116 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Dec 08 17:42:41 crc kubenswrapper[5116]: I1208 17:42:41.811784 5116 scope.go:117] "RemoveContainer" containerID="eb9c5000ba958106ac44d82662175377f15d153b9fa968f29b2098acfe0f5296" Dec 08 17:42:41 crc kubenswrapper[5116]: E1208 17:42:41.811996 5116 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" Dec 08 17:42:41 crc kubenswrapper[5116]: E1208 17:42:41.817817 5116 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.187f4e5feaa012ae\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.187f4e5feaa012ae openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:BackOff,Message:Back-off restarting failed container kube-apiserver-check-endpoints in pod kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944),Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 17:42:32.81774251 +0000 UTC m=+22.614865744,LastTimestamp:2025-12-08 17:42:41.811969567 +0000 UTC 
m=+31.609092801,Count:4,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 17:42:42 crc kubenswrapper[5116]: I1208 17:42:42.552612 5116 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Dec 08 17:42:43 crc kubenswrapper[5116]: I1208 17:42:43.554624 5116 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Dec 08 17:42:43 crc kubenswrapper[5116]: I1208 17:42:43.591733 5116 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 08 17:42:43 crc kubenswrapper[5116]: I1208 17:42:43.593695 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 17:42:43 crc kubenswrapper[5116]: I1208 17:42:43.593782 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 17:42:43 crc kubenswrapper[5116]: I1208 17:42:43.593803 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 17:42:43 crc kubenswrapper[5116]: I1208 17:42:43.593857 5116 kubelet_node_status.go:78] "Attempting to register node" node="crc" Dec 08 17:42:43 crc kubenswrapper[5116]: E1208 17:42:43.603411 5116 kubelet_node_status.go:116] "Unable to register node with API server, error getting existing node" err="nodes \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"nodes\" in API group \"\" at the cluster scope" node="crc" Dec 08 17:42:44 crc kubenswrapper[5116]: I1208 17:42:44.550351 5116 csi_plugin.go:988] Failed to contact API 
server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Dec 08 17:42:45 crc kubenswrapper[5116]: I1208 17:42:45.550793 5116 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Dec 08 17:42:46 crc kubenswrapper[5116]: I1208 17:42:46.554162 5116 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Dec 08 17:42:47 crc kubenswrapper[5116]: E1208 17:42:47.202134 5116 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="7s" Dec 08 17:42:47 crc kubenswrapper[5116]: I1208 17:42:47.551959 5116 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Dec 08 17:42:48 crc kubenswrapper[5116]: I1208 17:42:48.553031 5116 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Dec 08 17:42:49 crc kubenswrapper[5116]: I1208 17:42:49.552941 5116 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API 
group "storage.k8s.io" at the cluster scope Dec 08 17:42:50 crc kubenswrapper[5116]: I1208 17:42:50.550949 5116 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Dec 08 17:42:50 crc kubenswrapper[5116]: I1208 17:42:50.604008 5116 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 08 17:42:50 crc kubenswrapper[5116]: I1208 17:42:50.605434 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 17:42:50 crc kubenswrapper[5116]: I1208 17:42:50.605631 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 17:42:50 crc kubenswrapper[5116]: I1208 17:42:50.605648 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 17:42:50 crc kubenswrapper[5116]: I1208 17:42:50.605692 5116 kubelet_node_status.go:78] "Attempting to register node" node="crc" Dec 08 17:42:50 crc kubenswrapper[5116]: E1208 17:42:50.620174 5116 kubelet_node_status.go:116] "Unable to register node with API server, error getting existing node" err="nodes \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"nodes\" in API group \"\" at the cluster scope" node="crc" Dec 08 17:42:50 crc kubenswrapper[5116]: E1208 17:42:50.755895 5116 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Dec 08 17:42:51 crc kubenswrapper[5116]: I1208 17:42:51.552487 5116 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Dec 08 17:42:52 crc 
kubenswrapper[5116]: I1208 17:42:52.551060 5116 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Dec 08 17:42:53 crc kubenswrapper[5116]: I1208 17:42:53.552069 5116 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Dec 08 17:42:54 crc kubenswrapper[5116]: E1208 17:42:54.220371 5116 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="7s" Dec 08 17:42:54 crc kubenswrapper[5116]: I1208 17:42:54.551539 5116 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Dec 08 17:42:55 crc kubenswrapper[5116]: I1208 17:42:55.553698 5116 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Dec 08 17:42:56 crc kubenswrapper[5116]: E1208 17:42:56.481196 5116 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:anonymous\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Dec 08 17:42:56 crc kubenswrapper[5116]: I1208 17:42:56.552634 5116 csi_plugin.go:988] Failed to contact API server 
when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Dec 08 17:42:56 crc kubenswrapper[5116]: I1208 17:42:56.680276 5116 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 08 17:42:56 crc kubenswrapper[5116]: I1208 17:42:56.682316 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 17:42:56 crc kubenswrapper[5116]: I1208 17:42:56.682363 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 17:42:56 crc kubenswrapper[5116]: I1208 17:42:56.682374 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 17:42:56 crc kubenswrapper[5116]: E1208 17:42:56.682887 5116 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Dec 08 17:42:56 crc kubenswrapper[5116]: I1208 17:42:56.683177 5116 scope.go:117] "RemoveContainer" containerID="eb9c5000ba958106ac44d82662175377f15d153b9fa968f29b2098acfe0f5296" Dec 08 17:42:56 crc kubenswrapper[5116]: E1208 17:42:56.695027 5116 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.187f4e5b87e923c7\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.187f4e5b87e923c7 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Pulled,Message:Container image 
\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 17:42:13.981717447 +0000 UTC m=+3.778840681,LastTimestamp:2025-12-08 17:42:56.68563793 +0000 UTC m=+46.482761174,Count:3,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 17:42:56 crc kubenswrapper[5116]: E1208 17:42:56.987136 5116 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.187f4e5ba05cd425\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.187f4e5ba05cd425 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Created,Message:Created container: kube-apiserver-check-endpoints,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 17:42:14.391952421 +0000 UTC m=+4.189075655,LastTimestamp:2025-12-08 17:42:56.978889086 +0000 UTC m=+46.776012360,Count:3,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 17:42:56 crc kubenswrapper[5116]: E1208 17:42:56.997764 5116 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.187f4e5ba1395544\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.187f4e5ba1395544 openshift-kube-apiserver 0 0001-01-01 
00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Started,Message:Started container kube-apiserver-check-endpoints,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 17:42:14.406403396 +0000 UTC m=+4.203526630,LastTimestamp:2025-12-08 17:42:56.990861972 +0000 UTC m=+46.787985236,Count:3,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 17:42:57 crc kubenswrapper[5116]: I1208 17:42:57.549883 5116 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Dec 08 17:42:57 crc kubenswrapper[5116]: I1208 17:42:57.621212 5116 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 08 17:42:57 crc kubenswrapper[5116]: I1208 17:42:57.622274 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 17:42:57 crc kubenswrapper[5116]: I1208 17:42:57.622335 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 17:42:57 crc kubenswrapper[5116]: I1208 17:42:57.622351 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 17:42:57 crc kubenswrapper[5116]: I1208 17:42:57.622380 5116 kubelet_node_status.go:78] "Attempting to register node" node="crc" Dec 08 17:42:57 crc kubenswrapper[5116]: E1208 17:42:57.634159 5116 kubelet_node_status.go:116] "Unable to register node with API server, error getting existing node" err="nodes \"crc\" is forbidden: User 
\"system:anonymous\" cannot get resource \"nodes\" in API group \"\" at the cluster scope" node="crc" Dec 08 17:42:57 crc kubenswrapper[5116]: I1208 17:42:57.925401 5116 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-check-endpoints/1.log" Dec 08 17:42:57 crc kubenswrapper[5116]: I1208 17:42:57.928184 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"3a14caf222afb62aaabdc47808b6f944","Type":"ContainerStarted","Data":"6375dd3c9fd0a412c38a98f8f71a727fb571c4acbf6fd3e040a398b792cf40cf"} Dec 08 17:42:57 crc kubenswrapper[5116]: I1208 17:42:57.928498 5116 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 08 17:42:57 crc kubenswrapper[5116]: I1208 17:42:57.929303 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 17:42:57 crc kubenswrapper[5116]: I1208 17:42:57.929349 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 17:42:57 crc kubenswrapper[5116]: I1208 17:42:57.929360 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 17:42:57 crc kubenswrapper[5116]: E1208 17:42:57.929772 5116 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Dec 08 17:42:58 crc kubenswrapper[5116]: I1208 17:42:58.550920 5116 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Dec 08 17:42:58 crc kubenswrapper[5116]: I1208 17:42:58.934192 5116 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-check-endpoints/2.log" Dec 08 17:42:58 crc kubenswrapper[5116]: I1208 17:42:58.934773 5116 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-check-endpoints/1.log" Dec 08 17:42:58 crc kubenswrapper[5116]: I1208 17:42:58.936927 5116 generic.go:358] "Generic (PLEG): container finished" podID="3a14caf222afb62aaabdc47808b6f944" containerID="6375dd3c9fd0a412c38a98f8f71a727fb571c4acbf6fd3e040a398b792cf40cf" exitCode=255 Dec 08 17:42:58 crc kubenswrapper[5116]: I1208 17:42:58.936988 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"3a14caf222afb62aaabdc47808b6f944","Type":"ContainerDied","Data":"6375dd3c9fd0a412c38a98f8f71a727fb571c4acbf6fd3e040a398b792cf40cf"} Dec 08 17:42:58 crc kubenswrapper[5116]: I1208 17:42:58.937037 5116 scope.go:117] "RemoveContainer" containerID="eb9c5000ba958106ac44d82662175377f15d153b9fa968f29b2098acfe0f5296" Dec 08 17:42:58 crc kubenswrapper[5116]: I1208 17:42:58.937202 5116 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 08 17:42:58 crc kubenswrapper[5116]: I1208 17:42:58.937846 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 17:42:58 crc kubenswrapper[5116]: I1208 17:42:58.937887 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 17:42:58 crc kubenswrapper[5116]: I1208 17:42:58.937898 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 17:42:58 crc kubenswrapper[5116]: E1208 17:42:58.938283 5116 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" 
node="crc" Dec 08 17:42:58 crc kubenswrapper[5116]: I1208 17:42:58.938523 5116 scope.go:117] "RemoveContainer" containerID="6375dd3c9fd0a412c38a98f8f71a727fb571c4acbf6fd3e040a398b792cf40cf" Dec 08 17:42:58 crc kubenswrapper[5116]: E1208 17:42:58.938772 5116 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" Dec 08 17:42:58 crc kubenswrapper[5116]: E1208 17:42:58.943903 5116 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.187f4e5feaa012ae\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.187f4e5feaa012ae openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:BackOff,Message:Back-off restarting failed container kube-apiserver-check-endpoints in pod kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944),Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 17:42:32.81774251 +0000 UTC m=+22.614865744,LastTimestamp:2025-12-08 17:42:58.938733024 +0000 UTC m=+48.735856258,Count:5,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 17:42:58 crc kubenswrapper[5116]: E1208 17:42:58.982648 5116 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: nodes \"crc\" is 
forbidden: User \"system:anonymous\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Dec 08 17:42:59 crc kubenswrapper[5116]: I1208 17:42:59.551942 5116 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Dec 08 17:42:59 crc kubenswrapper[5116]: I1208 17:42:59.944007 5116 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-check-endpoints/2.log" Dec 08 17:43:00 crc kubenswrapper[5116]: I1208 17:43:00.551533 5116 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Dec 08 17:43:00 crc kubenswrapper[5116]: E1208 17:43:00.756523 5116 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Dec 08 17:43:01 crc kubenswrapper[5116]: E1208 17:43:01.225536 5116 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="7s" Dec 08 17:43:01 crc kubenswrapper[5116]: I1208 17:43:01.554525 5116 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Dec 08 17:43:02 crc kubenswrapper[5116]: E1208 17:43:02.469486 5116 reflector.go:200] "Failed to watch" err="failed to 
list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User \"system:anonymous\" cannot list resource \"runtimeclasses\" in API group \"node.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Dec 08 17:43:02 crc kubenswrapper[5116]: I1208 17:43:02.551305 5116 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Dec 08 17:43:02 crc kubenswrapper[5116]: E1208 17:43:02.595944 5116 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:anonymous\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Dec 08 17:43:03 crc kubenswrapper[5116]: I1208 17:43:03.552160 5116 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Dec 08 17:43:04 crc kubenswrapper[5116]: I1208 17:43:04.555838 5116 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Dec 08 17:43:04 crc kubenswrapper[5116]: I1208 17:43:04.634521 5116 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 08 17:43:04 crc kubenswrapper[5116]: I1208 17:43:04.635691 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 17:43:04 crc kubenswrapper[5116]: I1208 17:43:04.635742 5116 
kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 17:43:04 crc kubenswrapper[5116]: I1208 17:43:04.635760 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 17:43:04 crc kubenswrapper[5116]: I1208 17:43:04.635793 5116 kubelet_node_status.go:78] "Attempting to register node" node="crc" Dec 08 17:43:04 crc kubenswrapper[5116]: E1208 17:43:04.645210 5116 kubelet_node_status.go:116] "Unable to register node with API server, error getting existing node" err="nodes \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"nodes\" in API group \"\" at the cluster scope" node="crc" Dec 08 17:43:04 crc kubenswrapper[5116]: I1208 17:43:04.766307 5116 kubelet.go:2658] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 08 17:43:04 crc kubenswrapper[5116]: I1208 17:43:04.766773 5116 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 08 17:43:04 crc kubenswrapper[5116]: I1208 17:43:04.768316 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 17:43:04 crc kubenswrapper[5116]: I1208 17:43:04.768375 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 17:43:04 crc kubenswrapper[5116]: I1208 17:43:04.768399 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 17:43:04 crc kubenswrapper[5116]: E1208 17:43:04.768933 5116 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Dec 08 17:43:04 crc kubenswrapper[5116]: I1208 17:43:04.769380 5116 scope.go:117] "RemoveContainer" containerID="6375dd3c9fd0a412c38a98f8f71a727fb571c4acbf6fd3e040a398b792cf40cf" Dec 08 17:43:04 crc 
kubenswrapper[5116]: E1208 17:43:04.769703 5116 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" Dec 08 17:43:04 crc kubenswrapper[5116]: E1208 17:43:04.775157 5116 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.187f4e5feaa012ae\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.187f4e5feaa012ae openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:BackOff,Message:Back-off restarting failed container kube-apiserver-check-endpoints in pod kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944),Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 17:42:32.81774251 +0000 UTC m=+22.614865744,LastTimestamp:2025-12-08 17:43:04.769656028 +0000 UTC m=+54.566779302,Count:6,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 17:43:05 crc kubenswrapper[5116]: I1208 17:43:05.550159 5116 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Dec 08 17:43:05 crc kubenswrapper[5116]: I1208 
17:43:05.757158 5116 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Dec 08 17:43:05 crc kubenswrapper[5116]: I1208 17:43:05.757358 5116 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 08 17:43:05 crc kubenswrapper[5116]: I1208 17:43:05.758364 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 17:43:05 crc kubenswrapper[5116]: I1208 17:43:05.758419 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 17:43:05 crc kubenswrapper[5116]: I1208 17:43:05.758432 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 17:43:05 crc kubenswrapper[5116]: E1208 17:43:05.758694 5116 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Dec 08 17:43:06 crc kubenswrapper[5116]: I1208 17:43:06.551784 5116 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Dec 08 17:43:07 crc kubenswrapper[5116]: I1208 17:43:07.552118 5116 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Dec 08 17:43:07 crc kubenswrapper[5116]: I1208 17:43:07.929702 5116 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 08 17:43:07 crc kubenswrapper[5116]: I1208 17:43:07.929921 5116 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" 
Dec 08 17:43:07 crc kubenswrapper[5116]: I1208 17:43:07.931136 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 17:43:07 crc kubenswrapper[5116]: I1208 17:43:07.931187 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 17:43:07 crc kubenswrapper[5116]: I1208 17:43:07.931236 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 17:43:07 crc kubenswrapper[5116]: E1208 17:43:07.931825 5116 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Dec 08 17:43:07 crc kubenswrapper[5116]: I1208 17:43:07.932150 5116 scope.go:117] "RemoveContainer" containerID="6375dd3c9fd0a412c38a98f8f71a727fb571c4acbf6fd3e040a398b792cf40cf" Dec 08 17:43:07 crc kubenswrapper[5116]: E1208 17:43:07.932431 5116 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" Dec 08 17:43:07 crc kubenswrapper[5116]: E1208 17:43:07.939513 5116 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.187f4e5feaa012ae\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.187f4e5feaa012ae openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:BackOff,Message:Back-off restarting failed container kube-apiserver-check-endpoints in pod kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944),Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 17:42:32.81774251 +0000 UTC m=+22.614865744,LastTimestamp:2025-12-08 17:43:07.932377976 +0000 UTC m=+57.729501230,Count:7,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 17:43:08 crc kubenswrapper[5116]: E1208 17:43:08.234818 5116 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="7s" Dec 08 17:43:08 crc kubenswrapper[5116]: I1208 17:43:08.552867 5116 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Dec 08 17:43:09 crc kubenswrapper[5116]: I1208 17:43:09.551431 5116 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Dec 08 17:43:10 crc kubenswrapper[5116]: I1208 17:43:10.551546 5116 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Dec 08 17:43:10 crc 
kubenswrapper[5116]: E1208 17:43:10.757016 5116 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Dec 08 17:43:11 crc kubenswrapper[5116]: I1208 17:43:11.548201 5116 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Dec 08 17:43:11 crc kubenswrapper[5116]: I1208 17:43:11.645609 5116 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 08 17:43:11 crc kubenswrapper[5116]: I1208 17:43:11.647859 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 17:43:11 crc kubenswrapper[5116]: I1208 17:43:11.647938 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 17:43:11 crc kubenswrapper[5116]: I1208 17:43:11.647968 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 17:43:11 crc kubenswrapper[5116]: I1208 17:43:11.648017 5116 kubelet_node_status.go:78] "Attempting to register node" node="crc" Dec 08 17:43:11 crc kubenswrapper[5116]: E1208 17:43:11.667348 5116 kubelet_node_status.go:116] "Unable to register node with API server, error getting existing node" err="nodes \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"nodes\" in API group \"\" at the cluster scope" node="crc" Dec 08 17:43:12 crc kubenswrapper[5116]: I1208 17:43:12.550753 5116 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Dec 08 17:43:13 crc kubenswrapper[5116]: I1208 17:43:13.554631 5116 csi_plugin.go:988] Failed 
to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Dec 08 17:43:14 crc kubenswrapper[5116]: I1208 17:43:14.552488 5116 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Dec 08 17:43:15 crc kubenswrapper[5116]: E1208 17:43:15.243292 5116 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="7s" Dec 08 17:43:15 crc kubenswrapper[5116]: I1208 17:43:15.551636 5116 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Dec 08 17:43:16 crc kubenswrapper[5116]: I1208 17:43:16.551783 5116 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Dec 08 17:43:16 crc kubenswrapper[5116]: I1208 17:43:16.843307 5116 csr.go:274] "Certificate signing request is approved, waiting to be issued" logger="kubernetes.io/kube-apiserver-client-kubelet" csr="csr-bpfhx" Dec 08 17:43:16 crc kubenswrapper[5116]: I1208 17:43:16.850872 5116 csr.go:270] "Certificate signing request is issued" logger="kubernetes.io/kube-apiserver-client-kubelet" csr="csr-bpfhx" Dec 08 17:43:16 crc kubenswrapper[5116]: I1208 17:43:16.856678 5116 reconstruct.go:205] "DevicePaths of reconstructed volumes updated" Dec 08 
17:43:17 crc kubenswrapper[5116]: I1208 17:43:17.428066 5116 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials" Dec 08 17:43:17 crc kubenswrapper[5116]: I1208 17:43:17.852754 5116 certificate_manager.go:715] "Certificate rotation deadline determined" logger="kubernetes.io/kube-apiserver-client-kubelet" expiration="2026-01-07 17:38:16 +0000 UTC" deadline="2026-01-04 00:46:28.459751081 +0000 UTC" Dec 08 17:43:17 crc kubenswrapper[5116]: I1208 17:43:17.852806 5116 certificate_manager.go:431] "Waiting for next certificate rotation" logger="kubernetes.io/kube-apiserver-client-kubelet" sleep="631h3m10.606949667s" Dec 08 17:43:18 crc kubenswrapper[5116]: I1208 17:43:18.667671 5116 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 08 17:43:18 crc kubenswrapper[5116]: I1208 17:43:18.668703 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 17:43:18 crc kubenswrapper[5116]: I1208 17:43:18.668768 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 17:43:18 crc kubenswrapper[5116]: I1208 17:43:18.668779 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 17:43:18 crc kubenswrapper[5116]: I1208 17:43:18.668875 5116 kubelet_node_status.go:78] "Attempting to register node" node="crc" Dec 08 17:43:18 crc kubenswrapper[5116]: I1208 17:43:18.679621 5116 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 08 17:43:18 crc kubenswrapper[5116]: I1208 17:43:18.679697 5116 kubelet_node_status.go:127] "Node was previously registered" node="crc" Dec 08 17:43:18 crc kubenswrapper[5116]: I1208 17:43:18.680165 5116 kubelet_node_status.go:81] "Successfully registered node" node="crc" Dec 08 17:43:18 crc kubenswrapper[5116]: E1208 17:43:18.680188 
5116 kubelet_node_status.go:597] "Error updating node status, will retry" err="error getting node \"crc\": node \"crc\" not found" Dec 08 17:43:18 crc kubenswrapper[5116]: I1208 17:43:18.680722 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 17:43:18 crc kubenswrapper[5116]: I1208 17:43:18.680772 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 17:43:18 crc kubenswrapper[5116]: I1208 17:43:18.680795 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 17:43:18 crc kubenswrapper[5116]: E1208 17:43:18.681402 5116 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Dec 08 17:43:18 crc kubenswrapper[5116]: I1208 17:43:18.681796 5116 scope.go:117] "RemoveContainer" containerID="6375dd3c9fd0a412c38a98f8f71a727fb571c4acbf6fd3e040a398b792cf40cf" Dec 08 17:43:18 crc kubenswrapper[5116]: I1208 17:43:18.686709 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 17:43:18 crc kubenswrapper[5116]: I1208 17:43:18.686759 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 17:43:18 crc kubenswrapper[5116]: I1208 17:43:18.686772 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 17:43:18 crc kubenswrapper[5116]: I1208 17:43:18.686790 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 17:43:18 crc kubenswrapper[5116]: I1208 17:43:18.686805 5116 setters.go:618] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T17:43:18Z","lastTransitionTime":"2025-12-08T17:43:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 17:43:18 crc kubenswrapper[5116]: E1208 17:43:18.702621 5116 kubelet_node_status.go:597] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32400456Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32861256Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-08T17:43:18Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-08T17:43:18Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-08T17:43:18Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-08T17:43:18Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-08T17:43:18Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-08T17:43:18Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-08T17:43:18Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-08T17:43:18Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c8a088031661d94022418e93fb63744c38e1c4cff93ea3b95c096a290c2b7a3\\\"],\\\"sizeBytes\\\":2981840865},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\"],\\\"sizeBytes\\\":1641503854},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:286bb0beab328954b0a86b7f066fd5a843b462d6acb2812df7ec788015cd32d4\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:be02784ed82978c399102be1c6c9f2ca441be4d984e0fd7100c155dd4417ebbf\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1597684406},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\"],\\\"sizeBytes\\\":1261384762},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:0d50962980a5aeecae2d99c98913fb0f46940164e41de0af2ba0e3dafe0d9017\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:8d607fb6cc75ca36bca1e0a9c5bea5d1919b75db20733df69c64c8a10ee8083d\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1224304325},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:541db5b20a3d2199602b3b5ac80f09ea31498034e9ae3841238b03a39150f0d7\\\",\\\"
registry.redhat.io/redhat/certified-operator-index@sha256:a4c5df55584cba56f00004a090923a5c6de2071add5eb1672a5e20aa646aad8c\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1126957757},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:b1c859067d6b7b785ab4977ed7137c5b3bb257234f7d7737a1d2836cef1576b5\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:df08951924aa23b2333436a1d04b2dba56c366bb4f09d39ae3aedb980e4fb909\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.20\\\"],\\\"sizeBytes\\\":1079537324},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\"],\\\"sizeBytes\\\":1052707833},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8d1a1e4abe0326c3af89e9eaa4b7449dd2d5b6f9403c677e19b00b24947b1df9\\\"],\\\"sizeBytes\\\":989392005},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b2b1fc3d5bb4944cbd5b23b87566d7ba24b1b66f5a0465f76bcc05023191cc47\\\"],\\\"sizeBytes\\\":971668163},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\"],\\\"sizeBytes\\\":969078739},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3d6c8802ae53d6aecf38aa7b560d7892193806bdeb3d7c1637fac77c47fd1f\\\"],\\\"sizeBytes\\\":876488654},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\"],\\\"sizeBytes\\\":847332502},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:36c4867005702f0c4cbfcfa33f18a98596a6c9b1340b633c85ccef84a0c4f889\\\"],\\\"sizeBytes\\\":769516783},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\"],\\\"sizeBytes\\\":721591926},{\\\"names\\\":[\\
\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\"],\\\"sizeBytes\\\":646867625},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\"],\\\"sizeBytes\\\":638910445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae245c97fc463e876c3024efb806fa8f4efb13b3f06f1bdd3e7e1447f5a5dce4\\\"],\\\"sizeBytes\\\":617699779},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4926e304011637ca9df370a193896d685f0f3ffabbec234ec827abdbeb083f9\\\"],\\\"sizeBytes\\\":607756695},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\"],\\\"sizeBytes\\\":584721741},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\"],\\\"sizeBytes\\\":545674969},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:574d49b89604b8e8103abf57feee77812fe8cf441eafc17fdff95d57ca80645e\\\"],\\\"sizeBytes\\\":542463064},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\"],\\\"sizeBytes\\\":539380592},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9506bdcf97d5200cf2cf4cdf110aebafdd141a24f6589bf1e1cfe27bb7fc1ed2\\\"],\\\"sizeBytes\\\":533027808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\"],\\\"sizeBytes\\\":528200501},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\"],\\\"sizeBytes\\\":527774342},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e5e8
108294b086fdb797365e5a46badba9b3d866bdcddc8460a51e05a253753d\\\"],\\\"sizeBytes\\\":526632426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5827f6ae3beb4853192e02cc18890467bd251b33070f36f9a105991e7e6d3c9b\\\"],\\\"sizeBytes\\\":522490210},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:66c8fe5d45ff249643dae75185dd2787ea1b0ae87d5699a8222149c07689557c\\\"],\\\"sizeBytes\\\":520141094},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:baf975b6944f2844860c440636e0d4b80b2fdc473d30f32ae7d6989f2fc2b135\\\"],\\\"sizeBytes\\\":519815758},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:765cf9c3ebf4df049ebc022beaaf52f52852cf89fb802034536ad91dd45db807\\\"],\\\"sizeBytes\\\":519539350},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52e442bc8198ac925caff87ddd35b3107b7375d5afc9c2eb041ca4e79db72c6f\\\"],\\\"sizeBytes\\\":518690683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:43b0e0b7e1955ee905e48799a62f50b8a8df553190415ce1f5550375c2507ca5\\\"],\\\"sizeBytes\\\":518251952},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:977a316fa3598eb575a4477dafc09bbf06fad21c4ec2867052225d74f2a9f366\\\"],\\\"sizeBytes\\\":511136541},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\"],\\\"sizeBytes\\\":510122097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dbd8603d717c26901bcf9731b1e0392ae4bc08a270ed1eeb45839e44bed9607d\\\"],\\\"sizeBytes\\\":508941917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\"],\\\"sizeBytes\\\":508318343},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7a726c68cebc9b08edd734a8bae5150ae5950f7734fe9b9c2a6e0d06f21cc095\\\"],\\\"sizeBytes\\\":498380948},{\
\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:82501261b9c63012ba3b83fe4d6703c0af5eb9c9151670eb90ae480b9507d761\\\"],\\\"sizeBytes\\\":497232440},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:4e4239621caed0b0d9132d167403631e9af86be9a395977f013e201ead281bb4\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:c0b1bec73fdb6853eb3bd9e9733aee2d760ca09a33cfd94adf9ab7b706e83fa9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":491224335},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b0f7abf2f97afd1127d9245d764338c6047bac1711b2cee43112570a85946360\\\"],\\\"sizeBytes\\\":490381192},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:21b12ff0c81c1d535e7c31aff3a73b1e9ca763e5f88037f59ade0dfab6ed8946\\\"],\\\"sizeBytes\\\":482632652},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:036ed6efe4cb5f5b90ee7f9ef5297c8591b8d67aa36b3c58b4fc5417622a140c\\\"],\\\"sizeBytes\\\":477561861},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0fe5a041a2b99d736e82f1b4a6cd9792c5e23ded475e9f0742cd19234070f989\\\"],\\\"sizeBytes\\\":475327956},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\"],\\\"sizeBytes\\\":475137830},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2599f32933f5fea6066ede54ad8f6150adb7bd9067892f251d5913121d5c630d\\\"],\\\"sizeBytes\\\":472771950},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:651bbe9d418f49c2c889d731df67cf5d88dff59dc03f5a1b5d4c8bb3ae001f1a\\\"],\\\"sizeBytes\\\":469976318},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fe612a1572df462d6a4b664a10bc2e6cad239648acbf8c0303f8fca5d2596c0\\\"],\\\"sizeBytes\\\":468393024},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@
sha256:a5bb05344dd2296077f5066e908ede0eea23f5a12fb78ef86a9513c88d3faaca\\\"],\\\"sizeBytes\\\":464375011},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\"],\\\"sizeBytes\\\":462844959}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"6c0be0cd-5862-4033-9087-93597edbc8cd\\\",\\\"systemUUID\\\":\\\"c73531f8-e6a8-4b5d-ad6c-6fcb41671629\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 17:43:18 crc kubenswrapper[5116]: I1208 17:43:18.712788 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 17:43:18 crc kubenswrapper[5116]: I1208 17:43:18.712841 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 17:43:18 crc kubenswrapper[5116]: I1208 17:43:18.712852 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 17:43:18 crc kubenswrapper[5116]: I1208 17:43:18.712868 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 17:43:18 crc kubenswrapper[5116]: I1208 17:43:18.712878 5116 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T17:43:18Z","lastTransitionTime":"2025-12-08T17:43:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 17:43:18 crc kubenswrapper[5116]: E1208 17:43:18.726629 5116 kubelet_node_status.go:597] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32400456Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32861256Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-08T17:43:18Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-08T17:43:18Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-08T17:43:18Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-08T17:43:18Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-08T17:43:18Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-08T17:43:18Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-08T17:43:18Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-08T17:43:18Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c8a088031661d94022418e93fb63744c38e1c4cff93ea3b95c096a290c2b7a3\\\"],\\\"sizeBytes\\\":2981840865},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\"],\\\"sizeBytes\\\":1641503854},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:286bb0beab328954b0a86b7f066fd5a843b462d6acb2812df7ec788015cd32d4\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:be02784ed82978c399102be1c6c9f2ca441be4d984e0fd7100c155dd4417ebbf\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1597684406},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\"],\\\"sizeBytes\\\":1261384762},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:0d50962980a5aeecae2d99c98913fb0f46940164e41de0af2ba0e3dafe0d9017\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:8d607fb6cc75ca36bca1e0a9c5bea5d1919b75db20733df69c64c8a10ee8083d\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1224304325},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:541db5b20a3d2199602b3b5ac80f09ea31498034e9ae3841238b03a39150f0d7\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:a4c5df55584cba56f00004a090923a5c6de2071add5eb1672a5e20aa646aad8c\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1126957757},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:b1c859067d6b7b785ab4977ed7137c5b3bb257234f7d7737a1d2836cef1576b5\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:df08
951924aa23b2333436a1d04b2dba56c366bb4f09d39ae3aedb980e4fb909\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.20\\\"],\\\"sizeBytes\\\":1079537324},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\"],\\\"sizeBytes\\\":1052707833},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8d1a1e4abe0326c3af89e9eaa4b7449dd2d5b6f9403c677e19b00b24947b1df9\\\"],\\\"sizeBytes\\\":989392005},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b2b1fc3d5bb4944cbd5b23b87566d7ba24b1b66f5a0465f76bcc05023191cc47\\\"],\\\"sizeBytes\\\":971668163},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\"],\\\"sizeBytes\\\":969078739},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3d6c8802ae53d6aecf38aa7b560d7892193806bdeb3d7c1637fac77c47fd1f\\\"],\\\"sizeBytes\\\":876488654},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\"],\\\"sizeBytes\\\":847332502},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:36c4867005702f0c4cbfcfa33f18a98596a6c9b1340b633c85ccef84a0c4f889\\\"],\\\"sizeBytes\\\":769516783},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\"],\\\"sizeBytes\\\":721591926},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\"],\\\"sizeBytes\\\":646867625},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\"],\\\"sizeBytes\\\":638910445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae245c97fc463e876c3024efb806fa8f4e
fb13b3f06f1bdd3e7e1447f5a5dce4\\\"],\\\"sizeBytes\\\":617699779},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4926e304011637ca9df370a193896d685f0f3ffabbec234ec827abdbeb083f9\\\"],\\\"sizeBytes\\\":607756695},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\"],\\\"sizeBytes\\\":584721741},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\"],\\\"sizeBytes\\\":545674969},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:574d49b89604b8e8103abf57feee77812fe8cf441eafc17fdff95d57ca80645e\\\"],\\\"sizeBytes\\\":542463064},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\"],\\\"sizeBytes\\\":539380592},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9506bdcf97d5200cf2cf4cdf110aebafdd141a24f6589bf1e1cfe27bb7fc1ed2\\\"],\\\"sizeBytes\\\":533027808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\"],\\\"sizeBytes\\\":528200501},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\"],\\\"sizeBytes\\\":527774342},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e5e8108294b086fdb797365e5a46badba9b3d866bdcddc8460a51e05a253753d\\\"],\\\"sizeBytes\\\":526632426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5827f6ae3beb4853192e02cc18890467bd251b33070f36f9a105991e7e6d3c9b\\\"],\\\"sizeBytes\\\":522490210},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:66c8fe5d45ff249643dae75185dd2787ea1b0ae87d5699a8222149c07689557c\\\"],\\\"sizeBytes\\\":520141
094},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:baf975b6944f2844860c440636e0d4b80b2fdc473d30f32ae7d6989f2fc2b135\\\"],\\\"sizeBytes\\\":519815758},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:765cf9c3ebf4df049ebc022beaaf52f52852cf89fb802034536ad91dd45db807\\\"],\\\"sizeBytes\\\":519539350},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52e442bc8198ac925caff87ddd35b3107b7375d5afc9c2eb041ca4e79db72c6f\\\"],\\\"sizeBytes\\\":518690683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:43b0e0b7e1955ee905e48799a62f50b8a8df553190415ce1f5550375c2507ca5\\\"],\\\"sizeBytes\\\":518251952},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:977a316fa3598eb575a4477dafc09bbf06fad21c4ec2867052225d74f2a9f366\\\"],\\\"sizeBytes\\\":511136541},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\"],\\\"sizeBytes\\\":510122097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dbd8603d717c26901bcf9731b1e0392ae4bc08a270ed1eeb45839e44bed9607d\\\"],\\\"sizeBytes\\\":508941917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\"],\\\"sizeBytes\\\":508318343},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7a726c68cebc9b08edd734a8bae5150ae5950f7734fe9b9c2a6e0d06f21cc095\\\"],\\\"sizeBytes\\\":498380948},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:82501261b9c63012ba3b83fe4d6703c0af5eb9c9151670eb90ae480b9507d761\\\"],\\\"sizeBytes\\\":497232440},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:4e4239621caed0b0d9132d167403631e9af86be9a395977f013e201ead281bb4\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:c0b1bec73fdb6853eb3bd9e9733aee2d760ca09a33cfd94adf9ab7b706e8
3fa9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":491224335},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b0f7abf2f97afd1127d9245d764338c6047bac1711b2cee43112570a85946360\\\"],\\\"sizeBytes\\\":490381192},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:21b12ff0c81c1d535e7c31aff3a73b1e9ca763e5f88037f59ade0dfab6ed8946\\\"],\\\"sizeBytes\\\":482632652},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:036ed6efe4cb5f5b90ee7f9ef5297c8591b8d67aa36b3c58b4fc5417622a140c\\\"],\\\"sizeBytes\\\":477561861},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0fe5a041a2b99d736e82f1b4a6cd9792c5e23ded475e9f0742cd19234070f989\\\"],\\\"sizeBytes\\\":475327956},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\"],\\\"sizeBytes\\\":475137830},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2599f32933f5fea6066ede54ad8f6150adb7bd9067892f251d5913121d5c630d\\\"],\\\"sizeBytes\\\":472771950},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:651bbe9d418f49c2c889d731df67cf5d88dff59dc03f5a1b5d4c8bb3ae001f1a\\\"],\\\"sizeBytes\\\":469976318},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fe612a1572df462d6a4b664a10bc2e6cad239648acbf8c0303f8fca5d2596c0\\\"],\\\"sizeBytes\\\":468393024},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a5bb05344dd2296077f5066e908ede0eea23f5a12fb78ef86a9513c88d3faaca\\\"],\\\"sizeBytes\\\":464375011},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\"],\\\"sizeBytes\\\":462844959}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"6c0be0cd-5862-4033-9087-93597edbc8cd\\\",\\\"systemUUID\\\":\\\"c73531f8-e6a8-4b5d-ad6c-6fcb41671629\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 17:43:18 crc kubenswrapper[5116]: I1208 17:43:18.740545 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 17:43:18 crc kubenswrapper[5116]: I1208 17:43:18.740612 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 17:43:18 crc kubenswrapper[5116]: I1208 17:43:18.740626 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 17:43:18 crc kubenswrapper[5116]: I1208 17:43:18.740644 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 17:43:18 crc kubenswrapper[5116]: I1208 17:43:18.740657 5116 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T17:43:18Z","lastTransitionTime":"2025-12-08T17:43:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 17:43:18 crc kubenswrapper[5116]: E1208 17:43:18.753929 5116 kubelet_node_status.go:597] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32400456Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32861256Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-08T17:43:18Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-08T17:43:18Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-08T17:43:18Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-08T17:43:18Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-08T17:43:18Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-08T17:43:18Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-08T17:43:18Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-08T17:43:18Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c8a088031661d94022418e93fb63744c38e1c4cff93ea3b95c096a290c2b7a3\\\"],\\\"sizeBytes\\\":2981840865},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\"],\\\"sizeBytes\\\":1641503854},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:286bb0beab328954b0a86b7f066fd5a843b462d6acb2812df7ec788015cd32d4\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:be02784ed82978c399102be1c6c9f2ca441be4d984e0fd7100c155dd4417ebbf\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1597684406},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\"],\\\"sizeBytes\\\":1261384762},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:0d50962980a5aeecae2d99c98913fb0f46940164e41de0af2ba0e3dafe0d9017\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:8d607fb6cc75ca36bca1e0a9c5bea5d1919b75db20733df69c64c8a10ee8083d\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1224304325},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:541db5b20a3d2199602b3b5ac80f09ea31498034e9ae3841238b03a39150f0d7\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:a4c5df55584cba56f00004a090923a5c6de2071add5eb1672a5e20aa646aad8c\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1126957757},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:b1c859067d6b7b785ab4977ed7137c5b3bb257234f7d7737a1d2836cef1576b5\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:df08
951924aa23b2333436a1d04b2dba56c366bb4f09d39ae3aedb980e4fb909\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.20\\\"],\\\"sizeBytes\\\":1079537324},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\"],\\\"sizeBytes\\\":1052707833},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8d1a1e4abe0326c3af89e9eaa4b7449dd2d5b6f9403c677e19b00b24947b1df9\\\"],\\\"sizeBytes\\\":989392005},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b2b1fc3d5bb4944cbd5b23b87566d7ba24b1b66f5a0465f76bcc05023191cc47\\\"],\\\"sizeBytes\\\":971668163},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\"],\\\"sizeBytes\\\":969078739},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3d6c8802ae53d6aecf38aa7b560d7892193806bdeb3d7c1637fac77c47fd1f\\\"],\\\"sizeBytes\\\":876488654},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\"],\\\"sizeBytes\\\":847332502},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:36c4867005702f0c4cbfcfa33f18a98596a6c9b1340b633c85ccef84a0c4f889\\\"],\\\"sizeBytes\\\":769516783},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\"],\\\"sizeBytes\\\":721591926},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\"],\\\"sizeBytes\\\":646867625},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\"],\\\"sizeBytes\\\":638910445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae245c97fc463e876c3024efb806fa8f4e
fb13b3f06f1bdd3e7e1447f5a5dce4\\\"],\\\"sizeBytes\\\":617699779},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4926e304011637ca9df370a193896d685f0f3ffabbec234ec827abdbeb083f9\\\"],\\\"sizeBytes\\\":607756695},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\"],\\\"sizeBytes\\\":584721741},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\"],\\\"sizeBytes\\\":545674969},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:574d49b89604b8e8103abf57feee77812fe8cf441eafc17fdff95d57ca80645e\\\"],\\\"sizeBytes\\\":542463064},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\"],\\\"sizeBytes\\\":539380592},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9506bdcf97d5200cf2cf4cdf110aebafdd141a24f6589bf1e1cfe27bb7fc1ed2\\\"],\\\"sizeBytes\\\":533027808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\"],\\\"sizeBytes\\\":528200501},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\"],\\\"sizeBytes\\\":527774342},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e5e8108294b086fdb797365e5a46badba9b3d866bdcddc8460a51e05a253753d\\\"],\\\"sizeBytes\\\":526632426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5827f6ae3beb4853192e02cc18890467bd251b33070f36f9a105991e7e6d3c9b\\\"],\\\"sizeBytes\\\":522490210},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:66c8fe5d45ff249643dae75185dd2787ea1b0ae87d5699a8222149c07689557c\\\"],\\\"sizeBytes\\\":520141
094},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:baf975b6944f2844860c440636e0d4b80b2fdc473d30f32ae7d6989f2fc2b135\\\"],\\\"sizeBytes\\\":519815758},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:765cf9c3ebf4df049ebc022beaaf52f52852cf89fb802034536ad91dd45db807\\\"],\\\"sizeBytes\\\":519539350},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52e442bc8198ac925caff87ddd35b3107b7375d5afc9c2eb041ca4e79db72c6f\\\"],\\\"sizeBytes\\\":518690683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:43b0e0b7e1955ee905e48799a62f50b8a8df553190415ce1f5550375c2507ca5\\\"],\\\"sizeBytes\\\":518251952},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:977a316fa3598eb575a4477dafc09bbf06fad21c4ec2867052225d74f2a9f366\\\"],\\\"sizeBytes\\\":511136541},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\"],\\\"sizeBytes\\\":510122097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dbd8603d717c26901bcf9731b1e0392ae4bc08a270ed1eeb45839e44bed9607d\\\"],\\\"sizeBytes\\\":508941917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\"],\\\"sizeBytes\\\":508318343},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7a726c68cebc9b08edd734a8bae5150ae5950f7734fe9b9c2a6e0d06f21cc095\\\"],\\\"sizeBytes\\\":498380948},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:82501261b9c63012ba3b83fe4d6703c0af5eb9c9151670eb90ae480b9507d761\\\"],\\\"sizeBytes\\\":497232440},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:4e4239621caed0b0d9132d167403631e9af86be9a395977f013e201ead281bb4\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:c0b1bec73fdb6853eb3bd9e9733aee2d760ca09a33cfd94adf9ab7b706e8
3fa9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":491224335},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b0f7abf2f97afd1127d9245d764338c6047bac1711b2cee43112570a85946360\\\"],\\\"sizeBytes\\\":490381192},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:21b12ff0c81c1d535e7c31aff3a73b1e9ca763e5f88037f59ade0dfab6ed8946\\\"],\\\"sizeBytes\\\":482632652},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:036ed6efe4cb5f5b90ee7f9ef5297c8591b8d67aa36b3c58b4fc5417622a140c\\\"],\\\"sizeBytes\\\":477561861},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0fe5a041a2b99d736e82f1b4a6cd9792c5e23ded475e9f0742cd19234070f989\\\"],\\\"sizeBytes\\\":475327956},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\"],\\\"sizeBytes\\\":475137830},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2599f32933f5fea6066ede54ad8f6150adb7bd9067892f251d5913121d5c630d\\\"],\\\"sizeBytes\\\":472771950},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:651bbe9d418f49c2c889d731df67cf5d88dff59dc03f5a1b5d4c8bb3ae001f1a\\\"],\\\"sizeBytes\\\":469976318},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fe612a1572df462d6a4b664a10bc2e6cad239648acbf8c0303f8fca5d2596c0\\\"],\\\"sizeBytes\\\":468393024},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a5bb05344dd2296077f5066e908ede0eea23f5a12fb78ef86a9513c88d3faaca\\\"],\\\"sizeBytes\\\":464375011},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\"],\\\"sizeBytes\\\":462844959}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"6c0be0cd-5862-4033-9087-93597edbc8cd\\\",\\\"systemUUID\\\":\\\"c73531f8-e6a8-4b5d-ad6c-6fcb41671629\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 17:43:18 crc kubenswrapper[5116]: I1208 17:43:18.766755 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 17:43:18 crc kubenswrapper[5116]: I1208 17:43:18.766815 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 17:43:18 crc kubenswrapper[5116]: I1208 17:43:18.766828 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 17:43:18 crc kubenswrapper[5116]: I1208 17:43:18.766845 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 17:43:18 crc kubenswrapper[5116]: I1208 17:43:18.766856 5116 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T17:43:18Z","lastTransitionTime":"2025-12-08T17:43:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 17:43:18 crc kubenswrapper[5116]: E1208 17:43:18.777635 5116 kubelet_node_status.go:597] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32400456Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32861256Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-08T17:43:18Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-08T17:43:18Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-08T17:43:18Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-08T17:43:18Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-08T17:43:18Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-08T17:43:18Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-08T17:43:18Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-08T17:43:18Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c8a088031661d94022418e93fb63744c38e1c4cff93ea3b95c096a290c2b7a3\\\"],\\\"sizeBytes\\\":2981840865},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\"],\\\"sizeBytes\\\":1641503854},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:286bb0beab328954b0a86b7f066fd5a843b462d6acb2812df7ec788015cd32d4\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:be02784ed82978c399102be1c6c9f2ca441be4d984e0fd7100c155dd4417ebbf\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1597684406},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\"],\\\"sizeBytes\\\":1261384762},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:0d50962980a5aeecae2d99c98913fb0f46940164e41de0af2ba0e3dafe0d9017\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:8d607fb6cc75ca36bca1e0a9c5bea5d1919b75db20733df69c64c8a10ee8083d\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1224304325},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:541db5b20a3d2199602b3b5ac80f09ea31498034e9ae3841238b03a39150f0d7\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:a4c5df55584cba56f00004a090923a5c6de2071add5eb1672a5e20aa646aad8c\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1126957757},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:b1c859067d6b7b785ab4977ed7137c5b3bb257234f7d7737a1d2836cef1576b5\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:df08
951924aa23b2333436a1d04b2dba56c366bb4f09d39ae3aedb980e4fb909\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.20\\\"],\\\"sizeBytes\\\":1079537324},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\"],\\\"sizeBytes\\\":1052707833},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8d1a1e4abe0326c3af89e9eaa4b7449dd2d5b6f9403c677e19b00b24947b1df9\\\"],\\\"sizeBytes\\\":989392005},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b2b1fc3d5bb4944cbd5b23b87566d7ba24b1b66f5a0465f76bcc05023191cc47\\\"],\\\"sizeBytes\\\":971668163},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\"],\\\"sizeBytes\\\":969078739},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3d6c8802ae53d6aecf38aa7b560d7892193806bdeb3d7c1637fac77c47fd1f\\\"],\\\"sizeBytes\\\":876488654},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\"],\\\"sizeBytes\\\":847332502},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:36c4867005702f0c4cbfcfa33f18a98596a6c9b1340b633c85ccef84a0c4f889\\\"],\\\"sizeBytes\\\":769516783},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\"],\\\"sizeBytes\\\":721591926},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\"],\\\"sizeBytes\\\":646867625},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\"],\\\"sizeBytes\\\":638910445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae245c97fc463e876c3024efb806fa8f4e
fb13b3f06f1bdd3e7e1447f5a5dce4\\\"],\\\"sizeBytes\\\":617699779},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4926e304011637ca9df370a193896d685f0f3ffabbec234ec827abdbeb083f9\\\"],\\\"sizeBytes\\\":607756695},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\"],\\\"sizeBytes\\\":584721741},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\"],\\\"sizeBytes\\\":545674969},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:574d49b89604b8e8103abf57feee77812fe8cf441eafc17fdff95d57ca80645e\\\"],\\\"sizeBytes\\\":542463064},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\"],\\\"sizeBytes\\\":539380592},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9506bdcf97d5200cf2cf4cdf110aebafdd141a24f6589bf1e1cfe27bb7fc1ed2\\\"],\\\"sizeBytes\\\":533027808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\"],\\\"sizeBytes\\\":528200501},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\"],\\\"sizeBytes\\\":527774342},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e5e8108294b086fdb797365e5a46badba9b3d866bdcddc8460a51e05a253753d\\\"],\\\"sizeBytes\\\":526632426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5827f6ae3beb4853192e02cc18890467bd251b33070f36f9a105991e7e6d3c9b\\\"],\\\"sizeBytes\\\":522490210},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:66c8fe5d45ff249643dae75185dd2787ea1b0ae87d5699a8222149c07689557c\\\"],\\\"sizeBytes\\\":520141
094},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:baf975b6944f2844860c440636e0d4b80b2fdc473d30f32ae7d6989f2fc2b135\\\"],\\\"sizeBytes\\\":519815758},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:765cf9c3ebf4df049ebc022beaaf52f52852cf89fb802034536ad91dd45db807\\\"],\\\"sizeBytes\\\":519539350},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52e442bc8198ac925caff87ddd35b3107b7375d5afc9c2eb041ca4e79db72c6f\\\"],\\\"sizeBytes\\\":518690683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:43b0e0b7e1955ee905e48799a62f50b8a8df553190415ce1f5550375c2507ca5\\\"],\\\"sizeBytes\\\":518251952},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:977a316fa3598eb575a4477dafc09bbf06fad21c4ec2867052225d74f2a9f366\\\"],\\\"sizeBytes\\\":511136541},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\"],\\\"sizeBytes\\\":510122097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dbd8603d717c26901bcf9731b1e0392ae4bc08a270ed1eeb45839e44bed9607d\\\"],\\\"sizeBytes\\\":508941917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\"],\\\"sizeBytes\\\":508318343},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7a726c68cebc9b08edd734a8bae5150ae5950f7734fe9b9c2a6e0d06f21cc095\\\"],\\\"sizeBytes\\\":498380948},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:82501261b9c63012ba3b83fe4d6703c0af5eb9c9151670eb90ae480b9507d761\\\"],\\\"sizeBytes\\\":497232440},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:4e4239621caed0b0d9132d167403631e9af86be9a395977f013e201ead281bb4\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:c0b1bec73fdb6853eb3bd9e9733aee2d760ca09a33cfd94adf9ab7b706e8
3fa9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":491224335},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b0f7abf2f97afd1127d9245d764338c6047bac1711b2cee43112570a85946360\\\"],\\\"sizeBytes\\\":490381192},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:21b12ff0c81c1d535e7c31aff3a73b1e9ca763e5f88037f59ade0dfab6ed8946\\\"],\\\"sizeBytes\\\":482632652},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:036ed6efe4cb5f5b90ee7f9ef5297c8591b8d67aa36b3c58b4fc5417622a140c\\\"],\\\"sizeBytes\\\":477561861},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0fe5a041a2b99d736e82f1b4a6cd9792c5e23ded475e9f0742cd19234070f989\\\"],\\\"sizeBytes\\\":475327956},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\"],\\\"sizeBytes\\\":475137830},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2599f32933f5fea6066ede54ad8f6150adb7bd9067892f251d5913121d5c630d\\\"],\\\"sizeBytes\\\":472771950},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:651bbe9d418f49c2c889d731df67cf5d88dff59dc03f5a1b5d4c8bb3ae001f1a\\\"],\\\"sizeBytes\\\":469976318},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fe612a1572df462d6a4b664a10bc2e6cad239648acbf8c0303f8fca5d2596c0\\\"],\\\"sizeBytes\\\":468393024},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a5bb05344dd2296077f5066e908ede0eea23f5a12fb78ef86a9513c88d3faaca\\\"],\\\"sizeBytes\\\":464375011},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\"],\\\"sizeBytes\\\":462844959}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"6c0be0cd-5862-4033-9087-93597edbc8cd\\\",\\\"systemUUID\\\":\\\"c73531f8-e6a8-4b5d-ad6c-6fcb41671629\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 17:43:18 crc kubenswrapper[5116]: E1208 17:43:18.777787 5116 kubelet_node_status.go:584] "Unable to update node status" err="update node status exceeds retry count" Dec 08 17:43:18 crc kubenswrapper[5116]: E1208 17:43:18.777825 5116 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 17:43:18 crc kubenswrapper[5116]: E1208 17:43:18.878305 5116 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 17:43:18 crc kubenswrapper[5116]: E1208 17:43:18.979092 5116 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 17:43:19 crc kubenswrapper[5116]: I1208 17:43:19.002905 5116 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-check-endpoints/2.log" Dec 08 17:43:19 crc kubenswrapper[5116]: I1208 17:43:19.005028 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"3a14caf222afb62aaabdc47808b6f944","Type":"ContainerStarted","Data":"0f4e3801c41a7985f11a820bd7e71ba696a24f77bb1468bbcd579c5b5d8a1ba3"} Dec 08 17:43:19 crc kubenswrapper[5116]: I1208 17:43:19.005398 5116 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 08 17:43:19 crc kubenswrapper[5116]: I1208 17:43:19.006225 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 17:43:19 crc kubenswrapper[5116]: I1208 17:43:19.006306 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 17:43:19 crc 
kubenswrapper[5116]: I1208 17:43:19.006326 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 17:43:19 crc kubenswrapper[5116]: E1208 17:43:19.006902 5116 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Dec 08 17:43:19 crc kubenswrapper[5116]: E1208 17:43:19.079299 5116 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 17:43:19 crc kubenswrapper[5116]: E1208 17:43:19.180172 5116 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 17:43:19 crc kubenswrapper[5116]: E1208 17:43:19.281229 5116 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 17:43:19 crc kubenswrapper[5116]: E1208 17:43:19.382379 5116 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 17:43:19 crc kubenswrapper[5116]: E1208 17:43:19.483384 5116 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 17:43:19 crc kubenswrapper[5116]: E1208 17:43:19.584354 5116 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 17:43:19 crc kubenswrapper[5116]: E1208 17:43:19.685153 5116 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 17:43:19 crc kubenswrapper[5116]: E1208 17:43:19.786071 5116 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 17:43:19 crc kubenswrapper[5116]: E1208 17:43:19.887119 5116 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 17:43:19 crc kubenswrapper[5116]: E1208 17:43:19.987956 5116 kubelet_node_status.go:515] "Error getting 
the current node from lister" err="node \"crc\" not found" Dec 08 17:43:20 crc kubenswrapper[5116]: E1208 17:43:20.088830 5116 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 17:43:20 crc kubenswrapper[5116]: E1208 17:43:20.189568 5116 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 17:43:20 crc kubenswrapper[5116]: E1208 17:43:20.289918 5116 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 17:43:20 crc kubenswrapper[5116]: E1208 17:43:20.390208 5116 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 17:43:20 crc kubenswrapper[5116]: E1208 17:43:20.491383 5116 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 17:43:20 crc kubenswrapper[5116]: E1208 17:43:20.592312 5116 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 17:43:20 crc kubenswrapper[5116]: E1208 17:43:20.692516 5116 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 17:43:20 crc kubenswrapper[5116]: E1208 17:43:20.758194 5116 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Dec 08 17:43:20 crc kubenswrapper[5116]: E1208 17:43:20.793400 5116 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 17:43:20 crc kubenswrapper[5116]: E1208 17:43:20.894518 5116 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 17:43:20 crc kubenswrapper[5116]: E1208 17:43:20.995539 5116 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 17:43:21 crc kubenswrapper[5116]: 
I1208 17:43:21.011568 5116 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-check-endpoints/3.log"
Dec 08 17:43:21 crc kubenswrapper[5116]: I1208 17:43:21.012135 5116 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-check-endpoints/2.log"
Dec 08 17:43:21 crc kubenswrapper[5116]: I1208 17:43:21.013983 5116 generic.go:358] "Generic (PLEG): container finished" podID="3a14caf222afb62aaabdc47808b6f944" containerID="0f4e3801c41a7985f11a820bd7e71ba696a24f77bb1468bbcd579c5b5d8a1ba3" exitCode=255
Dec 08 17:43:21 crc kubenswrapper[5116]: I1208 17:43:21.014078 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"3a14caf222afb62aaabdc47808b6f944","Type":"ContainerDied","Data":"0f4e3801c41a7985f11a820bd7e71ba696a24f77bb1468bbcd579c5b5d8a1ba3"}
Dec 08 17:43:21 crc kubenswrapper[5116]: I1208 17:43:21.014162 5116 scope.go:117] "RemoveContainer" containerID="6375dd3c9fd0a412c38a98f8f71a727fb571c4acbf6fd3e040a398b792cf40cf"
Dec 08 17:43:21 crc kubenswrapper[5116]: I1208 17:43:21.014454 5116 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Dec 08 17:43:21 crc kubenswrapper[5116]: I1208 17:43:21.015687 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 08 17:43:21 crc kubenswrapper[5116]: I1208 17:43:21.015745 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 08 17:43:21 crc kubenswrapper[5116]: I1208 17:43:21.015758 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 08 17:43:21 crc kubenswrapper[5116]: E1208 17:43:21.016357 5116 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Dec 08 17:43:21 crc kubenswrapper[5116]: I1208 17:43:21.016757 5116 scope.go:117] "RemoveContainer" containerID="0f4e3801c41a7985f11a820bd7e71ba696a24f77bb1468bbcd579c5b5d8a1ba3"
Dec 08 17:43:21 crc kubenswrapper[5116]: E1208 17:43:21.017132 5116 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944"
Dec 08 17:43:21 crc kubenswrapper[5116]: E1208 17:43:21.096061 5116 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 08 17:43:21 crc kubenswrapper[5116]: E1208 17:43:21.197134 5116 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 08 17:43:21 crc kubenswrapper[5116]: E1208 17:43:21.297934 5116 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 08 17:43:21 crc kubenswrapper[5116]: E1208 17:43:21.398908 5116 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 08 17:43:21 crc kubenswrapper[5116]: E1208 17:43:21.500115 5116 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 08 17:43:21 crc kubenswrapper[5116]: E1208 17:43:21.600574 5116 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 08 17:43:21 crc kubenswrapper[5116]: E1208 17:43:21.701100 5116 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 08 17:43:21 crc kubenswrapper[5116]: E1208 17:43:21.802117 5116
kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 08 17:43:21 crc kubenswrapper[5116]: E1208 17:43:21.903273 5116 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 08 17:43:22 crc kubenswrapper[5116]: E1208 17:43:22.004310 5116 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 08 17:43:22 crc kubenswrapper[5116]: I1208 17:43:22.018910 5116 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-check-endpoints/3.log"
Dec 08 17:43:22 crc kubenswrapper[5116]: E1208 17:43:22.105398 5116 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 08 17:43:22 crc kubenswrapper[5116]: E1208 17:43:22.206088 5116 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 08 17:43:22 crc kubenswrapper[5116]: E1208 17:43:22.306790 5116 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 08 17:43:22 crc kubenswrapper[5116]: E1208 17:43:22.407441 5116 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 08 17:43:22 crc kubenswrapper[5116]: E1208 17:43:22.508018 5116 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 08 17:43:22 crc kubenswrapper[5116]: E1208 17:43:22.608414 5116 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 08 17:43:22 crc kubenswrapper[5116]: E1208 17:43:22.709147 5116 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 08 17:43:22 crc kubenswrapper[5116]: E1208 17:43:22.810309 5116 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 08 17:43:22 crc kubenswrapper[5116]: E1208 17:43:22.910626 5116 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 08 17:43:23 crc kubenswrapper[5116]: E1208 17:43:23.011279 5116 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 08 17:43:23 crc kubenswrapper[5116]: E1208 17:43:23.111613 5116 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 08 17:43:23 crc kubenswrapper[5116]: E1208 17:43:23.212351 5116 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 08 17:43:23 crc kubenswrapper[5116]: E1208 17:43:23.313456 5116 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 08 17:43:23 crc kubenswrapper[5116]: E1208 17:43:23.413702 5116 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 08 17:43:23 crc kubenswrapper[5116]: E1208 17:43:23.514490 5116 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 08 17:43:23 crc kubenswrapper[5116]: E1208 17:43:23.614613 5116 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 08 17:43:23 crc kubenswrapper[5116]: E1208 17:43:23.715716 5116 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 08 17:43:23 crc kubenswrapper[5116]: E1208 17:43:23.816583 5116 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 08 17:43:23 crc kubenswrapper[5116]: E1208 17:43:23.917537 5116 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 08 17:43:24 crc kubenswrapper[5116]: E1208 17:43:24.017870 5116
kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 08 17:43:24 crc kubenswrapper[5116]: E1208 17:43:24.118832 5116 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 08 17:43:24 crc kubenswrapper[5116]: E1208 17:43:24.219691 5116 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 08 17:43:24 crc kubenswrapper[5116]: E1208 17:43:24.320775 5116 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 08 17:43:24 crc kubenswrapper[5116]: E1208 17:43:24.421079 5116 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 08 17:43:24 crc kubenswrapper[5116]: E1208 17:43:24.521709 5116 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 08 17:43:24 crc kubenswrapper[5116]: E1208 17:43:24.622401 5116 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 08 17:43:24 crc kubenswrapper[5116]: E1208 17:43:24.723076 5116 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 08 17:43:24 crc kubenswrapper[5116]: I1208 17:43:24.765546 5116 kubelet.go:2658] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc"
Dec 08 17:43:24 crc kubenswrapper[5116]: I1208 17:43:24.766458 5116 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Dec 08 17:43:24 crc kubenswrapper[5116]: I1208 17:43:24.768227 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 08 17:43:24 crc kubenswrapper[5116]: I1208 17:43:24.768468 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 08 17:43:24 crc kubenswrapper[5116]: I1208 17:43:24.768654 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 08 17:43:24 crc kubenswrapper[5116]: E1208 17:43:24.769378 5116 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Dec 08 17:43:24 crc kubenswrapper[5116]: I1208 17:43:24.769823 5116 scope.go:117] "RemoveContainer" containerID="0f4e3801c41a7985f11a820bd7e71ba696a24f77bb1468bbcd579c5b5d8a1ba3"
Dec 08 17:43:24 crc kubenswrapper[5116]: E1208 17:43:24.770195 5116 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944"
Dec 08 17:43:24 crc kubenswrapper[5116]: E1208 17:43:24.824361 5116 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 08 17:43:24 crc kubenswrapper[5116]: E1208 17:43:24.924985 5116 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 08 17:43:25 crc kubenswrapper[5116]: E1208 17:43:25.025985 5116 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 08 17:43:25 crc kubenswrapper[5116]: E1208 17:43:25.127217 5116 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 08 17:43:25 crc kubenswrapper[5116]: E1208 17:43:25.227800 5116 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 08 17:43:25 crc kubenswrapper[5116]: E1208 17:43:25.329136 5116 kubelet_node_status.go:515] "Error getting the current node from
lister" err="node \"crc\" not found"
Dec 08 17:43:25 crc kubenswrapper[5116]: E1208 17:43:25.430229 5116 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 08 17:43:25 crc kubenswrapper[5116]: E1208 17:43:25.530683 5116 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 08 17:43:25 crc kubenswrapper[5116]: E1208 17:43:25.632455 5116 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 08 17:43:25 crc kubenswrapper[5116]: E1208 17:43:25.734911 5116 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 08 17:43:25 crc kubenswrapper[5116]: E1208 17:43:25.835953 5116 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 08 17:43:25 crc kubenswrapper[5116]: E1208 17:43:25.937467 5116 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 08 17:43:26 crc kubenswrapper[5116]: E1208 17:43:26.038105 5116 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 08 17:43:26 crc kubenswrapper[5116]: E1208 17:43:26.139271 5116 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 08 17:43:26 crc kubenswrapper[5116]: E1208 17:43:26.240837 5116 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 08 17:43:26 crc kubenswrapper[5116]: E1208 17:43:26.341909 5116 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 08 17:43:26 crc kubenswrapper[5116]: E1208 17:43:26.442948 5116 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 08 17:43:26 crc kubenswrapper[5116]: E1208 17:43:26.544017 5116 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 08 17:43:26 crc kubenswrapper[5116]: E1208 17:43:26.644799 5116 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 08 17:43:26 crc kubenswrapper[5116]: E1208 17:43:26.745372 5116 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 08 17:43:26 crc kubenswrapper[5116]: E1208 17:43:26.845497 5116 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 08 17:43:26 crc kubenswrapper[5116]: E1208 17:43:26.946497 5116 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 08 17:43:27 crc kubenswrapper[5116]: E1208 17:43:27.046957 5116 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 08 17:43:27 crc kubenswrapper[5116]: E1208 17:43:27.147227 5116 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 08 17:43:27 crc kubenswrapper[5116]: E1208 17:43:27.248483 5116 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 08 17:43:27 crc kubenswrapper[5116]: E1208 17:43:27.349361 5116 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 08 17:43:27 crc kubenswrapper[5116]: E1208 17:43:27.449877 5116 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 08 17:43:27 crc kubenswrapper[5116]: E1208 17:43:27.550099 5116 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 08 17:43:27 crc kubenswrapper[5116]: E1208 17:43:27.650580 5116 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 08 17:43:27 crc kubenswrapper[5116]: E1208 17:43:27.751272 5116 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 08 17:43:27 crc kubenswrapper[5116]: E1208 17:43:27.852110 5116 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 08 17:43:27 crc kubenswrapper[5116]: E1208 17:43:27.953054 5116 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 08 17:43:28 crc kubenswrapper[5116]: E1208 17:43:28.053388 5116 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 08 17:43:28 crc kubenswrapper[5116]: E1208 17:43:28.153861 5116 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 08 17:43:28 crc kubenswrapper[5116]: E1208 17:43:28.254484 5116 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 08 17:43:28 crc kubenswrapper[5116]: E1208 17:43:28.355175 5116 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 08 17:43:28 crc kubenswrapper[5116]: E1208 17:43:28.456952 5116 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 08 17:43:28 crc kubenswrapper[5116]: E1208 17:43:28.557317 5116 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 08 17:43:28 crc kubenswrapper[5116]: E1208 17:43:28.658071 5116 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 08 17:43:28 crc kubenswrapper[5116]: E1208 17:43:28.758327 5116 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 08 17:43:28 crc kubenswrapper[5116]: E1208 17:43:28.813316 5116 kubelet_node_status.go:597] "Error updating node status, will retry" err="error
getting node \"crc\": node \"crc\" not found"
Dec 08 17:43:28 crc kubenswrapper[5116]: I1208 17:43:28.818916 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 08 17:43:28 crc kubenswrapper[5116]: I1208 17:43:28.818984 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 08 17:43:28 crc kubenswrapper[5116]: I1208 17:43:28.818994 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 08 17:43:28 crc kubenswrapper[5116]: I1208 17:43:28.819017 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 08 17:43:28 crc kubenswrapper[5116]: I1208 17:43:28.819028 5116 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T17:43:28Z","lastTransitionTime":"2025-12-08T17:43:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/.
Has your network provider started?"} Dec 08 17:43:28 crc kubenswrapper[5116]: E1208 17:43:28.832184 5116 kubelet_node_status.go:597] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32400456Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32861256Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-08T17:43:28Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-08T17:43:28Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-08T17:43:28Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-08T17:43:28Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-08T17:43:28Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-08T17:43:28Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-08T17:43:28Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-08T17:43:28Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c8a088031661d94022418e93fb63744c38e1c4cff93ea3b95c096a290c2b7a3\\\"],\\\"sizeBytes\\\":2981840865},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\"],\\\"sizeBytes\\\":1641503854},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:286bb0beab328954b0a86b7f066fd5a843b462d6acb2812df7ec788015cd32d4\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:be02784ed82978c399102be1c6c9f2ca441be4d984e0fd7100c155dd4417ebbf\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1597684406},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\"],\\\"sizeBytes\\\":1261384762},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:0d50962980a5aeecae2d99c98913fb0f46940164e41de0af2ba0e3dafe0d9017\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:8d607fb6cc75ca36bca1e0a9c5bea5d1919b75db20733df69c64c8a10ee8083d\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1224304325},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:541db5b20a3d2199602b3b5ac80f09ea31498034e9ae3841238b03a39150f0d7\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:a4c5df55584cba56f00004a090923a5c6de2071add5eb1672a5e20aa646aad8c\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1126957757},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:b1c859067d6b7b785ab4977ed7137c5b3bb257234f7d7737a1d2836cef1576b5\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:df08
951924aa23b2333436a1d04b2dba56c366bb4f09d39ae3aedb980e4fb909\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.20\\\"],\\\"sizeBytes\\\":1079537324},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\"],\\\"sizeBytes\\\":1052707833},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8d1a1e4abe0326c3af89e9eaa4b7449dd2d5b6f9403c677e19b00b24947b1df9\\\"],\\\"sizeBytes\\\":989392005},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b2b1fc3d5bb4944cbd5b23b87566d7ba24b1b66f5a0465f76bcc05023191cc47\\\"],\\\"sizeBytes\\\":971668163},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\"],\\\"sizeBytes\\\":969078739},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3d6c8802ae53d6aecf38aa7b560d7892193806bdeb3d7c1637fac77c47fd1f\\\"],\\\"sizeBytes\\\":876488654},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\"],\\\"sizeBytes\\\":847332502},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:36c4867005702f0c4cbfcfa33f18a98596a6c9b1340b633c85ccef84a0c4f889\\\"],\\\"sizeBytes\\\":769516783},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\"],\\\"sizeBytes\\\":721591926},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\"],\\\"sizeBytes\\\":646867625},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\"],\\\"sizeBytes\\\":638910445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae245c97fc463e876c3024efb806fa8f4e
fb13b3f06f1bdd3e7e1447f5a5dce4\\\"],\\\"sizeBytes\\\":617699779},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4926e304011637ca9df370a193896d685f0f3ffabbec234ec827abdbeb083f9\\\"],\\\"sizeBytes\\\":607756695},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\"],\\\"sizeBytes\\\":584721741},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\"],\\\"sizeBytes\\\":545674969},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:574d49b89604b8e8103abf57feee77812fe8cf441eafc17fdff95d57ca80645e\\\"],\\\"sizeBytes\\\":542463064},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\"],\\\"sizeBytes\\\":539380592},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9506bdcf97d5200cf2cf4cdf110aebafdd141a24f6589bf1e1cfe27bb7fc1ed2\\\"],\\\"sizeBytes\\\":533027808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\"],\\\"sizeBytes\\\":528200501},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\"],\\\"sizeBytes\\\":527774342},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e5e8108294b086fdb797365e5a46badba9b3d866bdcddc8460a51e05a253753d\\\"],\\\"sizeBytes\\\":526632426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5827f6ae3beb4853192e02cc18890467bd251b33070f36f9a105991e7e6d3c9b\\\"],\\\"sizeBytes\\\":522490210},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:66c8fe5d45ff249643dae75185dd2787ea1b0ae87d5699a8222149c07689557c\\\"],\\\"sizeBytes\\\":520141
094},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:baf975b6944f2844860c440636e0d4b80b2fdc473d30f32ae7d6989f2fc2b135\\\"],\\\"sizeBytes\\\":519815758},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:765cf9c3ebf4df049ebc022beaaf52f52852cf89fb802034536ad91dd45db807\\\"],\\\"sizeBytes\\\":519539350},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52e442bc8198ac925caff87ddd35b3107b7375d5afc9c2eb041ca4e79db72c6f\\\"],\\\"sizeBytes\\\":518690683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:43b0e0b7e1955ee905e48799a62f50b8a8df553190415ce1f5550375c2507ca5\\\"],\\\"sizeBytes\\\":518251952},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:977a316fa3598eb575a4477dafc09bbf06fad21c4ec2867052225d74f2a9f366\\\"],\\\"sizeBytes\\\":511136541},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\"],\\\"sizeBytes\\\":510122097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dbd8603d717c26901bcf9731b1e0392ae4bc08a270ed1eeb45839e44bed9607d\\\"],\\\"sizeBytes\\\":508941917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\"],\\\"sizeBytes\\\":508318343},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7a726c68cebc9b08edd734a8bae5150ae5950f7734fe9b9c2a6e0d06f21cc095\\\"],\\\"sizeBytes\\\":498380948},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:82501261b9c63012ba3b83fe4d6703c0af5eb9c9151670eb90ae480b9507d761\\\"],\\\"sizeBytes\\\":497232440},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:4e4239621caed0b0d9132d167403631e9af86be9a395977f013e201ead281bb4\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:c0b1bec73fdb6853eb3bd9e9733aee2d760ca09a33cfd94adf9ab7b706e8
3fa9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":491224335},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b0f7abf2f97afd1127d9245d764338c6047bac1711b2cee43112570a85946360\\\"],\\\"sizeBytes\\\":490381192},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:21b12ff0c81c1d535e7c31aff3a73b1e9ca763e5f88037f59ade0dfab6ed8946\\\"],\\\"sizeBytes\\\":482632652},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:036ed6efe4cb5f5b90ee7f9ef5297c8591b8d67aa36b3c58b4fc5417622a140c\\\"],\\\"sizeBytes\\\":477561861},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0fe5a041a2b99d736e82f1b4a6cd9792c5e23ded475e9f0742cd19234070f989\\\"],\\\"sizeBytes\\\":475327956},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\"],\\\"sizeBytes\\\":475137830},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2599f32933f5fea6066ede54ad8f6150adb7bd9067892f251d5913121d5c630d\\\"],\\\"sizeBytes\\\":472771950},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:651bbe9d418f49c2c889d731df67cf5d88dff59dc03f5a1b5d4c8bb3ae001f1a\\\"],\\\"sizeBytes\\\":469976318},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fe612a1572df462d6a4b664a10bc2e6cad239648acbf8c0303f8fca5d2596c0\\\"],\\\"sizeBytes\\\":468393024},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a5bb05344dd2296077f5066e908ede0eea23f5a12fb78ef86a9513c88d3faaca\\\"],\\\"sizeBytes\\\":464375011},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\"],\\\"sizeBytes\\\":462844959}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"6c0be0cd-5862-4033-9087-93597edbc8cd\\\",\\\"systemUUID\\\":\\\"c73531f8-e6a8-4b5d-ad6c-6fcb41671629\\\"},\\\"runtimeHa
ndlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Dec 08 17:43:28 crc kubenswrapper[5116]: I1208 17:43:28.844235 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 08 17:43:28 crc kubenswrapper[5116]: I1208 17:43:28.844299 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 08 17:43:28 crc kubenswrapper[5116]: I1208 17:43:28.844316 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 08 17:43:28 crc kubenswrapper[5116]: I1208 17:43:28.844335 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 08 17:43:28 crc kubenswrapper[5116]: I1208 17:43:28.844347 5116 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T17:43:28Z","lastTransitionTime":"2025-12-08T17:43:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/.
Has your network provider started?"} Dec 08 17:43:28 crc kubenswrapper[5116]: E1208 17:43:28.858895 5116 kubelet_node_status.go:597] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32400456Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32861256Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-08T17:43:28Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-08T17:43:28Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-08T17:43:28Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-08T17:43:28Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-08T17:43:28Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-08T17:43:28Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-08T17:43:28Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-08T17:43:28Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c8a088031661d94022418e93fb63744c38e1c4cff93ea3b95c096a290c2b7a3\\\"],\\\"sizeBytes\\\":2981840865},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\"],\\\"sizeBytes\\\":1641503854},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:286bb0beab328954b0a86b7f066fd5a843b462d6acb2812df7ec788015cd32d4\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:be02784ed82978c399102be1c6c9f2ca441be4d984e0fd7100c155dd4417ebbf\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1597684406},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\"],\\\"sizeBytes\\\":1261384762},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:0d50962980a5aeecae2d99c98913fb0f46940164e41de0af2ba0e3dafe0d9017\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:8d607fb6cc75ca36bca1e0a9c5bea5d1919b75db20733df69c64c8a10ee8083d\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1224304325},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:541db5b20a3d2199602b3b5ac80f09ea31498034e9ae3841238b03a39150f0d7\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:a4c5df55584cba56f00004a090923a5c6de2071add5eb1672a5e20aa646aad8c\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1126957757},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:b1c859067d6b7b785ab4977ed7137c5b3bb257234f7d7737a1d2836cef1576b5\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:df08
951924aa23b2333436a1d04b2dba56c366bb4f09d39ae3aedb980e4fb909\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.20\\\"],\\\"sizeBytes\\\":1079537324},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\"],\\\"sizeBytes\\\":1052707833},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8d1a1e4abe0326c3af89e9eaa4b7449dd2d5b6f9403c677e19b00b24947b1df9\\\"],\\\"sizeBytes\\\":989392005},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b2b1fc3d5bb4944cbd5b23b87566d7ba24b1b66f5a0465f76bcc05023191cc47\\\"],\\\"sizeBytes\\\":971668163},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\"],\\\"sizeBytes\\\":969078739},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3d6c8802ae53d6aecf38aa7b560d7892193806bdeb3d7c1637fac77c47fd1f\\\"],\\\"sizeBytes\\\":876488654},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\"],\\\"sizeBytes\\\":847332502},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:36c4867005702f0c4cbfcfa33f18a98596a6c9b1340b633c85ccef84a0c4f889\\\"],\\\"sizeBytes\\\":769516783},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\"],\\\"sizeBytes\\\":721591926},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\"],\\\"sizeBytes\\\":646867625},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\"],\\\"sizeBytes\\\":638910445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae245c97fc463e876c3024efb806fa8f4e
fb13b3f06f1bdd3e7e1447f5a5dce4\\\"],\\\"sizeBytes\\\":617699779},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4926e304011637ca9df370a193896d685f0f3ffabbec234ec827abdbeb083f9\\\"],\\\"sizeBytes\\\":607756695},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\"],\\\"sizeBytes\\\":584721741},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\"],\\\"sizeBytes\\\":545674969},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:574d49b89604b8e8103abf57feee77812fe8cf441eafc17fdff95d57ca80645e\\\"],\\\"sizeBytes\\\":542463064},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\"],\\\"sizeBytes\\\":539380592},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9506bdcf97d5200cf2cf4cdf110aebafdd141a24f6589bf1e1cfe27bb7fc1ed2\\\"],\\\"sizeBytes\\\":533027808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\"],\\\"sizeBytes\\\":528200501},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\"],\\\"sizeBytes\\\":527774342},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e5e8108294b086fdb797365e5a46badba9b3d866bdcddc8460a51e05a253753d\\\"],\\\"sizeBytes\\\":526632426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5827f6ae3beb4853192e02cc18890467bd251b33070f36f9a105991e7e6d3c9b\\\"],\\\"sizeBytes\\\":522490210},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:66c8fe5d45ff249643dae75185dd2787ea1b0ae87d5699a8222149c07689557c\\\"],\\\"sizeBytes\\\":520141
094},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:baf975b6944f2844860c440636e0d4b80b2fdc473d30f32ae7d6989f2fc2b135\\\"],\\\"sizeBytes\\\":519815758},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:765cf9c3ebf4df049ebc022beaaf52f52852cf89fb802034536ad91dd45db807\\\"],\\\"sizeBytes\\\":519539350},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52e442bc8198ac925caff87ddd35b3107b7375d5afc9c2eb041ca4e79db72c6f\\\"],\\\"sizeBytes\\\":518690683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:43b0e0b7e1955ee905e48799a62f50b8a8df553190415ce1f5550375c2507ca5\\\"],\\\"sizeBytes\\\":518251952},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:977a316fa3598eb575a4477dafc09bbf06fad21c4ec2867052225d74f2a9f366\\\"],\\\"sizeBytes\\\":511136541},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\"],\\\"sizeBytes\\\":510122097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dbd8603d717c26901bcf9731b1e0392ae4bc08a270ed1eeb45839e44bed9607d\\\"],\\\"sizeBytes\\\":508941917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\"],\\\"sizeBytes\\\":508318343},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7a726c68cebc9b08edd734a8bae5150ae5950f7734fe9b9c2a6e0d06f21cc095\\\"],\\\"sizeBytes\\\":498380948},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:82501261b9c63012ba3b83fe4d6703c0af5eb9c9151670eb90ae480b9507d761\\\"],\\\"sizeBytes\\\":497232440},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:4e4239621caed0b0d9132d167403631e9af86be9a395977f013e201ead281bb4\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:c0b1bec73fdb6853eb3bd9e9733aee2d760ca09a33cfd94adf9ab7b706e8
3fa9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":491224335},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b0f7abf2f97afd1127d9245d764338c6047bac1711b2cee43112570a85946360\\\"],\\\"sizeBytes\\\":490381192},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:21b12ff0c81c1d535e7c31aff3a73b1e9ca763e5f88037f59ade0dfab6ed8946\\\"],\\\"sizeBytes\\\":482632652},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:036ed6efe4cb5f5b90ee7f9ef5297c8591b8d67aa36b3c58b4fc5417622a140c\\\"],\\\"sizeBytes\\\":477561861},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0fe5a041a2b99d736e82f1b4a6cd9792c5e23ded475e9f0742cd19234070f989\\\"],\\\"sizeBytes\\\":475327956},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\"],\\\"sizeBytes\\\":475137830},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2599f32933f5fea6066ede54ad8f6150adb7bd9067892f251d5913121d5c630d\\\"],\\\"sizeBytes\\\":472771950},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:651bbe9d418f49c2c889d731df67cf5d88dff59dc03f5a1b5d4c8bb3ae001f1a\\\"],\\\"sizeBytes\\\":469976318},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fe612a1572df462d6a4b664a10bc2e6cad239648acbf8c0303f8fca5d2596c0\\\"],\\\"sizeBytes\\\":468393024},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a5bb05344dd2296077f5066e908ede0eea23f5a12fb78ef86a9513c88d3faaca\\\"],\\\"sizeBytes\\\":464375011},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\"],\\\"sizeBytes\\\":462844959}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"6c0be0cd-5862-4033-9087-93597edbc8cd\\\",\\\"systemUUID\\\":\\\"c73531f8-e6a8-4b5d-ad6c-6fcb41671629\\\"},\\\"runtimeHa
ndlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 17:43:28 crc kubenswrapper[5116]: I1208 17:43:28.870856 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 17:43:28 crc kubenswrapper[5116]: I1208 17:43:28.870920 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 17:43:28 crc kubenswrapper[5116]: I1208 17:43:28.870940 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 17:43:28 crc kubenswrapper[5116]: I1208 17:43:28.870966 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 17:43:28 crc kubenswrapper[5116]: I1208 17:43:28.871070 5116 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T17:43:28Z","lastTransitionTime":"2025-12-08T17:43:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 17:43:28 crc kubenswrapper[5116]: E1208 17:43:28.884682 5116 kubelet_node_status.go:597] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32400456Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32861256Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-08T17:43:28Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-08T17:43:28Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-08T17:43:28Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-08T17:43:28Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-08T17:43:28Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-08T17:43:28Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-08T17:43:28Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-08T17:43:28Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c8a088031661d94022418e93fb63744c38e1c4cff93ea3b95c096a290c2b7a3\\\"],\\\"sizeBytes\\\":2981840865},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\"],\\\"sizeBytes\\\":1641503854},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:286bb0beab328954b0a86b7f066fd5a843b462d6acb2812df7ec788015cd32d4\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:be02784ed82978c399102be1c6c9f2ca441be4d984e0fd7100c155dd4417ebbf\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1597684406},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\"],\\\"sizeBytes\\\":1261384762},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:0d50962980a5aeecae2d99c98913fb0f46940164e41de0af2ba0e3dafe0d9017\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:8d607fb6cc75ca36bca1e0a9c5bea5d1919b75db20733df69c64c8a10ee8083d\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1224304325},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:541db5b20a3d2199602b3b5ac80f09ea31498034e9ae3841238b03a39150f0d7\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:a4c5df55584cba56f00004a090923a5c6de2071add5eb1672a5e20aa646aad8c\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1126957757},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:b1c859067d6b7b785ab4977ed7137c5b3bb257234f7d7737a1d2836cef1576b5\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:df08
951924aa23b2333436a1d04b2dba56c366bb4f09d39ae3aedb980e4fb909\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.20\\\"],\\\"sizeBytes\\\":1079537324},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\"],\\\"sizeBytes\\\":1052707833},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8d1a1e4abe0326c3af89e9eaa4b7449dd2d5b6f9403c677e19b00b24947b1df9\\\"],\\\"sizeBytes\\\":989392005},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b2b1fc3d5bb4944cbd5b23b87566d7ba24b1b66f5a0465f76bcc05023191cc47\\\"],\\\"sizeBytes\\\":971668163},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\"],\\\"sizeBytes\\\":969078739},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3d6c8802ae53d6aecf38aa7b560d7892193806bdeb3d7c1637fac77c47fd1f\\\"],\\\"sizeBytes\\\":876488654},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\"],\\\"sizeBytes\\\":847332502},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:36c4867005702f0c4cbfcfa33f18a98596a6c9b1340b633c85ccef84a0c4f889\\\"],\\\"sizeBytes\\\":769516783},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\"],\\\"sizeBytes\\\":721591926},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\"],\\\"sizeBytes\\\":646867625},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\"],\\\"sizeBytes\\\":638910445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae245c97fc463e876c3024efb806fa8f4e
fb13b3f06f1bdd3e7e1447f5a5dce4\\\"],\\\"sizeBytes\\\":617699779},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4926e304011637ca9df370a193896d685f0f3ffabbec234ec827abdbeb083f9\\\"],\\\"sizeBytes\\\":607756695},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\"],\\\"sizeBytes\\\":584721741},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\"],\\\"sizeBytes\\\":545674969},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:574d49b89604b8e8103abf57feee77812fe8cf441eafc17fdff95d57ca80645e\\\"],\\\"sizeBytes\\\":542463064},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\"],\\\"sizeBytes\\\":539380592},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9506bdcf97d5200cf2cf4cdf110aebafdd141a24f6589bf1e1cfe27bb7fc1ed2\\\"],\\\"sizeBytes\\\":533027808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\"],\\\"sizeBytes\\\":528200501},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\"],\\\"sizeBytes\\\":527774342},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e5e8108294b086fdb797365e5a46badba9b3d866bdcddc8460a51e05a253753d\\\"],\\\"sizeBytes\\\":526632426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5827f6ae3beb4853192e02cc18890467bd251b33070f36f9a105991e7e6d3c9b\\\"],\\\"sizeBytes\\\":522490210},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:66c8fe5d45ff249643dae75185dd2787ea1b0ae87d5699a8222149c07689557c\\\"],\\\"sizeBytes\\\":520141
094},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:baf975b6944f2844860c440636e0d4b80b2fdc473d30f32ae7d6989f2fc2b135\\\"],\\\"sizeBytes\\\":519815758},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:765cf9c3ebf4df049ebc022beaaf52f52852cf89fb802034536ad91dd45db807\\\"],\\\"sizeBytes\\\":519539350},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52e442bc8198ac925caff87ddd35b3107b7375d5afc9c2eb041ca4e79db72c6f\\\"],\\\"sizeBytes\\\":518690683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:43b0e0b7e1955ee905e48799a62f50b8a8df553190415ce1f5550375c2507ca5\\\"],\\\"sizeBytes\\\":518251952},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:977a316fa3598eb575a4477dafc09bbf06fad21c4ec2867052225d74f2a9f366\\\"],\\\"sizeBytes\\\":511136541},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\"],\\\"sizeBytes\\\":510122097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dbd8603d717c26901bcf9731b1e0392ae4bc08a270ed1eeb45839e44bed9607d\\\"],\\\"sizeBytes\\\":508941917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\"],\\\"sizeBytes\\\":508318343},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7a726c68cebc9b08edd734a8bae5150ae5950f7734fe9b9c2a6e0d06f21cc095\\\"],\\\"sizeBytes\\\":498380948},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:82501261b9c63012ba3b83fe4d6703c0af5eb9c9151670eb90ae480b9507d761\\\"],\\\"sizeBytes\\\":497232440},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:4e4239621caed0b0d9132d167403631e9af86be9a395977f013e201ead281bb4\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:c0b1bec73fdb6853eb3bd9e9733aee2d760ca09a33cfd94adf9ab7b706e8
3fa9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":491224335},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b0f7abf2f97afd1127d9245d764338c6047bac1711b2cee43112570a85946360\\\"],\\\"sizeBytes\\\":490381192},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:21b12ff0c81c1d535e7c31aff3a73b1e9ca763e5f88037f59ade0dfab6ed8946\\\"],\\\"sizeBytes\\\":482632652},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:036ed6efe4cb5f5b90ee7f9ef5297c8591b8d67aa36b3c58b4fc5417622a140c\\\"],\\\"sizeBytes\\\":477561861},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0fe5a041a2b99d736e82f1b4a6cd9792c5e23ded475e9f0742cd19234070f989\\\"],\\\"sizeBytes\\\":475327956},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\"],\\\"sizeBytes\\\":475137830},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2599f32933f5fea6066ede54ad8f6150adb7bd9067892f251d5913121d5c630d\\\"],\\\"sizeBytes\\\":472771950},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:651bbe9d418f49c2c889d731df67cf5d88dff59dc03f5a1b5d4c8bb3ae001f1a\\\"],\\\"sizeBytes\\\":469976318},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fe612a1572df462d6a4b664a10bc2e6cad239648acbf8c0303f8fca5d2596c0\\\"],\\\"sizeBytes\\\":468393024},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a5bb05344dd2296077f5066e908ede0eea23f5a12fb78ef86a9513c88d3faaca\\\"],\\\"sizeBytes\\\":464375011},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\"],\\\"sizeBytes\\\":462844959}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"6c0be0cd-5862-4033-9087-93597edbc8cd\\\",\\\"systemUUID\\\":\\\"c73531f8-e6a8-4b5d-ad6c-6fcb41671629\\\"},\\\"runtimeHa
ndlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 17:43:28 crc kubenswrapper[5116]: I1208 17:43:28.893911 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 17:43:28 crc kubenswrapper[5116]: I1208 17:43:28.893956 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 17:43:28 crc kubenswrapper[5116]: I1208 17:43:28.893968 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 17:43:28 crc kubenswrapper[5116]: I1208 17:43:28.893996 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 17:43:28 crc kubenswrapper[5116]: I1208 17:43:28.894009 5116 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T17:43:28Z","lastTransitionTime":"2025-12-08T17:43:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 17:43:28 crc kubenswrapper[5116]: E1208 17:43:28.911377 5116 kubelet_node_status.go:597] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32400456Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32861256Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-08T17:43:28Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-08T17:43:28Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-08T17:43:28Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-08T17:43:28Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-08T17:43:28Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-08T17:43:28Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-08T17:43:28Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-08T17:43:28Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c8a088031661d94022418e93fb63744c38e1c4cff93ea3b95c096a290c2b7a3\\\"],\\\"sizeBytes\\\":2981840865},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\"],\\\"sizeBytes\\\":1641503854},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:286bb0beab328954b0a86b7f066fd5a843b462d6acb2812df7ec788015cd32d4\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:be02784ed82978c399102be1c6c9f2ca441be4d984e0fd7100c155dd4417ebbf\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1597684406},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\"],\\\"sizeBytes\\\":1261384762},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:0d50962980a5aeecae2d99c98913fb0f46940164e41de0af2ba0e3dafe0d9017\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:8d607fb6cc75ca36bca1e0a9c5bea5d1919b75db20733df69c64c8a10ee8083d\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1224304325},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:541db5b20a3d2199602b3b5ac80f09ea31498034e9ae3841238b03a39150f0d7\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:a4c5df55584cba56f00004a090923a5c6de2071add5eb1672a5e20aa646aad8c\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1126957757},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:b1c859067d6b7b785ab4977ed7137c5b3bb257234f7d7737a1d2836cef1576b5\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:df08
951924aa23b2333436a1d04b2dba56c366bb4f09d39ae3aedb980e4fb909\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.20\\\"],\\\"sizeBytes\\\":1079537324},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\"],\\\"sizeBytes\\\":1052707833},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8d1a1e4abe0326c3af89e9eaa4b7449dd2d5b6f9403c677e19b00b24947b1df9\\\"],\\\"sizeBytes\\\":989392005},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b2b1fc3d5bb4944cbd5b23b87566d7ba24b1b66f5a0465f76bcc05023191cc47\\\"],\\\"sizeBytes\\\":971668163},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\"],\\\"sizeBytes\\\":969078739},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3d6c8802ae53d6aecf38aa7b560d7892193806bdeb3d7c1637fac77c47fd1f\\\"],\\\"sizeBytes\\\":876488654},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\"],\\\"sizeBytes\\\":847332502},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:36c4867005702f0c4cbfcfa33f18a98596a6c9b1340b633c85ccef84a0c4f889\\\"],\\\"sizeBytes\\\":769516783},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\"],\\\"sizeBytes\\\":721591926},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\"],\\\"sizeBytes\\\":646867625},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\"],\\\"sizeBytes\\\":638910445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae245c97fc463e876c3024efb806fa8f4e
fb13b3f06f1bdd3e7e1447f5a5dce4\\\"],\\\"sizeBytes\\\":617699779},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4926e304011637ca9df370a193896d685f0f3ffabbec234ec827abdbeb083f9\\\"],\\\"sizeBytes\\\":607756695},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\"],\\\"sizeBytes\\\":584721741},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\"],\\\"sizeBytes\\\":545674969},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:574d49b89604b8e8103abf57feee77812fe8cf441eafc17fdff95d57ca80645e\\\"],\\\"sizeBytes\\\":542463064},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\"],\\\"sizeBytes\\\":539380592},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9506bdcf97d5200cf2cf4cdf110aebafdd141a24f6589bf1e1cfe27bb7fc1ed2\\\"],\\\"sizeBytes\\\":533027808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\"],\\\"sizeBytes\\\":528200501},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\"],\\\"sizeBytes\\\":527774342},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e5e8108294b086fdb797365e5a46badba9b3d866bdcddc8460a51e05a253753d\\\"],\\\"sizeBytes\\\":526632426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5827f6ae3beb4853192e02cc18890467bd251b33070f36f9a105991e7e6d3c9b\\\"],\\\"sizeBytes\\\":522490210},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:66c8fe5d45ff249643dae75185dd2787ea1b0ae87d5699a8222149c07689557c\\\"],\\\"sizeBytes\\\":520141
094},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:baf975b6944f2844860c440636e0d4b80b2fdc473d30f32ae7d6989f2fc2b135\\\"],\\\"sizeBytes\\\":519815758},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:765cf9c3ebf4df049ebc022beaaf52f52852cf89fb802034536ad91dd45db807\\\"],\\\"sizeBytes\\\":519539350},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52e442bc8198ac925caff87ddd35b3107b7375d5afc9c2eb041ca4e79db72c6f\\\"],\\\"sizeBytes\\\":518690683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:43b0e0b7e1955ee905e48799a62f50b8a8df553190415ce1f5550375c2507ca5\\\"],\\\"sizeBytes\\\":518251952},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:977a316fa3598eb575a4477dafc09bbf06fad21c4ec2867052225d74f2a9f366\\\"],\\\"sizeBytes\\\":511136541},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\"],\\\"sizeBytes\\\":510122097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dbd8603d717c26901bcf9731b1e0392ae4bc08a270ed1eeb45839e44bed9607d\\\"],\\\"sizeBytes\\\":508941917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\"],\\\"sizeBytes\\\":508318343},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7a726c68cebc9b08edd734a8bae5150ae5950f7734fe9b9c2a6e0d06f21cc095\\\"],\\\"sizeBytes\\\":498380948},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:82501261b9c63012ba3b83fe4d6703c0af5eb9c9151670eb90ae480b9507d761\\\"],\\\"sizeBytes\\\":497232440},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:4e4239621caed0b0d9132d167403631e9af86be9a395977f013e201ead281bb4\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:c0b1bec73fdb6853eb3bd9e9733aee2d760ca09a33cfd94adf9ab7b706e8
3fa9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":491224335},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b0f7abf2f97afd1127d9245d764338c6047bac1711b2cee43112570a85946360\\\"],\\\"sizeBytes\\\":490381192},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:21b12ff0c81c1d535e7c31aff3a73b1e9ca763e5f88037f59ade0dfab6ed8946\\\"],\\\"sizeBytes\\\":482632652},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:036ed6efe4cb5f5b90ee7f9ef5297c8591b8d67aa36b3c58b4fc5417622a140c\\\"],\\\"sizeBytes\\\":477561861},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0fe5a041a2b99d736e82f1b4a6cd9792c5e23ded475e9f0742cd19234070f989\\\"],\\\"sizeBytes\\\":475327956},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\"],\\\"sizeBytes\\\":475137830},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2599f32933f5fea6066ede54ad8f6150adb7bd9067892f251d5913121d5c630d\\\"],\\\"sizeBytes\\\":472771950},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:651bbe9d418f49c2c889d731df67cf5d88dff59dc03f5a1b5d4c8bb3ae001f1a\\\"],\\\"sizeBytes\\\":469976318},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fe612a1572df462d6a4b664a10bc2e6cad239648acbf8c0303f8fca5d2596c0\\\"],\\\"sizeBytes\\\":468393024},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a5bb05344dd2296077f5066e908ede0eea23f5a12fb78ef86a9513c88d3faaca\\\"],\\\"sizeBytes\\\":464375011},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\"],\\\"sizeBytes\\\":462844959}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"6c0be0cd-5862-4033-9087-93597edbc8cd\\\",\\\"systemUUID\\\":\\\"c73531f8-e6a8-4b5d-ad6c-6fcb41671629\\\"},\\\"runtimeHa
ndlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 17:43:28 crc kubenswrapper[5116]: E1208 17:43:28.911583 5116 kubelet_node_status.go:584] "Unable to update node status" err="update node status exceeds retry count" Dec 08 17:43:28 crc kubenswrapper[5116]: E1208 17:43:28.911635 5116 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 17:43:29 crc kubenswrapper[5116]: I1208 17:43:29.006307 5116 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 08 17:43:29 crc kubenswrapper[5116]: I1208 17:43:29.006768 5116 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 08 17:43:29 crc kubenswrapper[5116]: I1208 17:43:29.008093 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 17:43:29 crc kubenswrapper[5116]: I1208 17:43:29.008228 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 17:43:29 crc kubenswrapper[5116]: I1208 17:43:29.008313 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 17:43:29 crc kubenswrapper[5116]: E1208 17:43:29.009464 5116 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" 
node="crc" Dec 08 17:43:29 crc kubenswrapper[5116]: I1208 17:43:29.010008 5116 scope.go:117] "RemoveContainer" containerID="0f4e3801c41a7985f11a820bd7e71ba696a24f77bb1468bbcd579c5b5d8a1ba3" Dec 08 17:43:29 crc kubenswrapper[5116]: E1208 17:43:29.010691 5116 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" Dec 08 17:43:29 crc kubenswrapper[5116]: E1208 17:43:29.011767 5116 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 17:43:29 crc kubenswrapper[5116]: E1208 17:43:29.112888 5116 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 17:43:29 crc kubenswrapper[5116]: E1208 17:43:29.213411 5116 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 17:43:29 crc kubenswrapper[5116]: E1208 17:43:29.313519 5116 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 17:43:29 crc kubenswrapper[5116]: E1208 17:43:29.414553 5116 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 17:43:29 crc kubenswrapper[5116]: I1208 17:43:29.427759 5116 reflector.go:430] "Caches populated" type="*v1.Service" reflector="k8s.io/client-go/informers/factory.go:160" Dec 08 17:43:29 crc kubenswrapper[5116]: E1208 17:43:29.515406 5116 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 17:43:29 crc kubenswrapper[5116]: E1208 17:43:29.616435 5116 kubelet_node_status.go:515] "Error getting the current node from lister" 
err="node \"crc\" not found" Dec 08 17:43:29 crc kubenswrapper[5116]: E1208 17:43:29.717219 5116 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 17:43:29 crc kubenswrapper[5116]: E1208 17:43:29.818307 5116 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 17:43:29 crc kubenswrapper[5116]: E1208 17:43:29.919413 5116 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 17:43:30 crc kubenswrapper[5116]: E1208 17:43:30.019514 5116 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 17:43:30 crc kubenswrapper[5116]: E1208 17:43:30.120051 5116 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 17:43:30 crc kubenswrapper[5116]: E1208 17:43:30.221171 5116 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 17:43:30 crc kubenswrapper[5116]: E1208 17:43:30.322357 5116 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 17:43:30 crc kubenswrapper[5116]: E1208 17:43:30.423078 5116 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 17:43:30 crc kubenswrapper[5116]: E1208 17:43:30.524318 5116 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 17:43:30 crc kubenswrapper[5116]: E1208 17:43:30.624945 5116 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 17:43:30 crc kubenswrapper[5116]: I1208 17:43:30.721570 5116 reflector.go:430] "Caches populated" type="*v1.CSIDriver" reflector="k8s.io/client-go/informers/factory.go:160" Dec 08 17:43:30 crc kubenswrapper[5116]: E1208 17:43:30.725687 5116 
kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 17:43:30 crc kubenswrapper[5116]: E1208 17:43:30.758742 5116 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Dec 08 17:43:30 crc kubenswrapper[5116]: E1208 17:43:30.826559 5116 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 17:43:30 crc kubenswrapper[5116]: E1208 17:43:30.926741 5116 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 17:43:31 crc kubenswrapper[5116]: E1208 17:43:31.027640 5116 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 17:43:31 crc kubenswrapper[5116]: E1208 17:43:31.127788 5116 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 17:43:31 crc kubenswrapper[5116]: E1208 17:43:31.228803 5116 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 17:43:31 crc kubenswrapper[5116]: E1208 17:43:31.329665 5116 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 17:43:31 crc kubenswrapper[5116]: E1208 17:43:31.430091 5116 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 17:43:31 crc kubenswrapper[5116]: E1208 17:43:31.530983 5116 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 17:43:31 crc kubenswrapper[5116]: E1208 17:43:31.631431 5116 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 17:43:31 crc kubenswrapper[5116]: I1208 17:43:31.679515 5116 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 08 
17:43:31 crc kubenswrapper[5116]: I1208 17:43:31.680518 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 17:43:31 crc kubenswrapper[5116]: I1208 17:43:31.680578 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 17:43:31 crc kubenswrapper[5116]: I1208 17:43:31.680598 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 17:43:31 crc kubenswrapper[5116]: E1208 17:43:31.681208 5116 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Dec 08 17:43:31 crc kubenswrapper[5116]: E1208 17:43:31.732417 5116 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 17:43:31 crc kubenswrapper[5116]: E1208 17:43:31.833591 5116 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 17:43:31 crc kubenswrapper[5116]: I1208 17:43:31.857364 5116 reflector.go:430] "Caches populated" type="*v1.Node" reflector="k8s.io/client-go/informers/factory.go:160" Dec 08 17:43:31 crc kubenswrapper[5116]: I1208 17:43:31.868401 5116 kubelet.go:3340] "Creating a mirror pod for static pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Dec 08 17:43:31 crc kubenswrapper[5116]: I1208 17:43:31.883891 5116 kubelet.go:3340] "Creating a mirror pod for static pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Dec 08 17:43:31 crc kubenswrapper[5116]: I1208 17:43:31.935915 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 17:43:31 crc kubenswrapper[5116]: I1208 17:43:31.936007 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 17:43:31 crc kubenswrapper[5116]: I1208 
17:43:31.936024 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 17:43:31 crc kubenswrapper[5116]: I1208 17:43:31.936041 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 17:43:31 crc kubenswrapper[5116]: I1208 17:43:31.936085 5116 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T17:43:31Z","lastTransitionTime":"2025-12-08T17:43:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 17:43:31 crc kubenswrapper[5116]: I1208 17:43:31.984138 5116 kubelet.go:3340] "Creating a mirror pod for static pod" pod="openshift-etcd/etcd-crc" Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.038849 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.038936 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.038967 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.039000 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.039027 5116 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T17:43:32Z","lastTransitionTime":"2025-12-08T17:43:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.087066 5116 kubelet.go:3340] "Creating a mirror pod for static pod" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.141807 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.141888 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.141916 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.141943 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.141960 5116 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T17:43:32Z","lastTransitionTime":"2025-12-08T17:43:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.185161 5116 kubelet.go:3340] "Creating a mirror pod for static pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.244723 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.244794 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.244814 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.244840 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.244862 5116 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T17:43:32Z","lastTransitionTime":"2025-12-08T17:43:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.346693 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.346996 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.347402 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.347559 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.347579 5116 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T17:43:32Z","lastTransitionTime":"2025-12-08T17:43:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.450596 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.450699 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.450731 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.450758 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.450777 5116 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T17:43:32Z","lastTransitionTime":"2025-12-08T17:43:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.553782 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.553835 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.553844 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.553861 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.553872 5116 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T17:43:32Z","lastTransitionTime":"2025-12-08T17:43:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.629992 5116 apiserver.go:52] "Watching apiserver" Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.639148 5116 reflector.go:430] "Caches populated" type="*v1.Pod" reflector="pkg/kubelet/config/apiserver.go:66" Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.639818 5116 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-47dgv","openshift-etcd/etcd-crc","openshift-kube-scheduler/openshift-kube-scheduler-crc","openshift-machine-config-operator/machine-config-daemon-frh5r","openshift-network-diagnostics/network-check-target-fhkjl","openshift-ovn-kubernetes/ovnkube-node-zm56h","openshift-kube-controller-manager/kube-controller-manager-crc","openshift-multus/multus-8wqqf","openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5","openshift-network-operator/iptables-alerter-5jnd7","openshift-dns/node-resolver-5phkw","openshift-kube-apiserver/kube-apiserver-crc","openshift-network-node-identity/network-node-identity-dgvkt","openshift-image-registry/node-ca-ps59m","openshift-machine-config-operator/kube-rbac-proxy-crio-crc","openshift-multus/multus-additional-cni-plugins-p56xf","openshift-multus/network-metrics-daemon-5ft89","openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6","openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv"] Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.641956 5116 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.643315 5116 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Dec 08 17:43:32 crc kubenswrapper[5116]: E1208 17:43:32.643514 5116 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440" Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.644528 5116 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl" Dec 08 17:43:32 crc kubenswrapper[5116]: E1208 17:43:32.644624 5116 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239" Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.645529 5116 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-dgvkt" Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.645882 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-network-operator\"/\"metrics-tls\"" Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.646108 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-operator\"/\"kube-root-ca.crt\"" Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.647066 5116 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-operator/iptables-alerter-5jnd7" Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.647111 5116 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Dec 08 17:43:32 crc kubenswrapper[5116]: E1208 17:43:32.647194 5116 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141" Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.648005 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-operator\"/\"openshift-service-ca.crt\"" Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.648080 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-network-node-identity\"/\"network-node-identity-cert\"" Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.648012 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-node-identity\"/\"env-overrides\"" Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.648589 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-node-identity\"/\"ovnkube-identity-cm\"" Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.649582 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-node-identity\"/\"openshift-service-ca.crt\"" Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.649814 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" 
reflector="object-\"openshift-network-operator\"/\"iptables-alerter-script\"" Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.649836 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-node-identity\"/\"kube-root-ca.crt\"" Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.655886 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.655935 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.655948 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.655966 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.655977 5116 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T17:43:32Z","lastTransitionTime":"2025-12-08T17:43:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.662995 5116 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34177974-8d82-49d2-a763-391d0df3bbd8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-08T17:43:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T17:43:32Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7xz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-7bdcf4f5bd-7fjxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.670355 5116 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-p56xf" Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.670430 5116 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-8wqqf" Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.672603 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"multus-ancillary-tools-dockercfg-nwglk\"" Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.672950 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"multus-daemon-config\"" Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.672996 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"whereabouts-flatfile-config\"" Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.673330 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"openshift-service-ca.crt\"" Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.673339 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"cni-copy-resources\"" Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.673407 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"kube-root-ca.crt\"" Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.673426 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"default-dockercfg-g6kgg\"" Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.673571 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"default-cni-sysctl-allowlist\"" Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.675425 5116 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"34177974-8d82-49d2-a763-391d0df3bbd8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-08T17:43:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T17:43:32Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7xz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-7bdcf4f5bd-7fjxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.675675 5116 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/node-resolver-5phkw" Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.677526 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-dns\"/\"openshift-service-ca.crt\"" Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.677783 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-dns\"/\"kube-root-ca.crt\"" Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.680810 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-dns\"/\"node-resolver-dockercfg-tk7bt\"" Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.682636 5116 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-zm56h" Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.685053 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ovn-kubernetes\"/\"env-overrides\"" Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.685077 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ovn-kubernetes\"/\"openshift-service-ca.crt\"" Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.685170 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ovn-kubernetes\"/\"ovnkube-script-lib\"" Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.685311 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ovn-kubernetes\"/\"kube-root-ca.crt\"" Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.685338 5116 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/node-ca-ps59m" Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.685682 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ovn-kubernetes\"/\"ovnkube-config\"" Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.685958 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ovn-kubernetes\"/\"ovn-kubernetes-node-dockercfg-l2v2m\"" Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.686268 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ovn-kubernetes\"/\"ovn-node-metrics-cert\"" Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.687928 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-image-registry\"/\"image-registry-certificates\"" Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.688700 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" 
reflector="object-\"openshift-image-registry\"/\"kube-root-ca.crt\"" Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.688270 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"node-ca-dockercfg-tjs74\"" Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.688191 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-image-registry\"/\"openshift-service-ca.crt\"" Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.689525 5116 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f863fff9-286a-45fa-b8f0-8a86994b8440\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-08T17:43:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T17:43:32Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l7w75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-5bb8f5cd97-xdvz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.693341 5116 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-frh5r" Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.693428 5116 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5ft89" Dec 08 17:43:32 crc kubenswrapper[5116]: E1208 17:43:32.693503 5116 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-5ft89" podUID="19151390-7d67-4ae9-8520-ae20b8eb46f8" Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.695816 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-config-operator\"/\"kube-rbac-proxy\"" Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.695827 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-config-operator\"/\"openshift-service-ca.crt\"" Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.696124 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"machine-config-daemon-dockercfg-w9nzh\"" Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.696127 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-config-operator\"/\"kube-root-ca.crt\"" Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.696411 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"proxy-tls\"" Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.697228 5116 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-47dgv" Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.698917 5116 scope.go:117] "RemoveContainer" containerID="0f4e3801c41a7985f11a820bd7e71ba696a24f77bb1468bbcd579c5b5d8a1ba3" Dec 08 17:43:32 crc kubenswrapper[5116]: E1208 17:43:32.699275 5116 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.699462 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ovn-kubernetes\"/\"ovn-control-plane-metrics-cert\"" Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.699737 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ovn-kubernetes\"/\"ovn-kubernetes-control-plane-dockercfg-nl8tp\"" Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.702501 5116 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-fhkjl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"17b87002-b798-480a-8e17-83053d698239\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-08T17:43:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T17:43:32Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwt8b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-fhkjl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.717909 5116 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-dgvkt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fc4541ce-7789-4670-bc75-5c2868e52ce0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-08T17:43:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T17:43:32Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-dgvkt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: 
connect: connection refused" Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.728300 5116 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-5jnd7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-08T17:43:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T17:43:32Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dsgwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-5jnd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.740834 5116 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-08T17:43:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T17:43:32Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fbdfe828b092b23e6d4480daf3e0216aada6debaf1ef1b314a0a31e73ebf13c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-5ff7774fd9-nljh6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.751439 5116 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7c5ed2a1-80ce-4bdc-bbd9-5e3661f5800d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:42:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:42:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:43:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:43:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:42:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://1405c7d0794a6b509661f080a7f63b6557ec0b95140524b2d76e965ff7af5680\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T17:42:13Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\
\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://46d429d4d2d5baefe924709f1bf0f7a184902bff7b860e632355fdf6759684d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T17:42:13Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://2dfac4db355d6dfc6b239f0977f40158fd31faa54a032318866c4464ebec05cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T17:42:13Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\"
:0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://297f8c447aca2d7b37638c04ed6b8d0914e45e109423f09561245d0abb547ac4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://297f8c447aca2d7b37638c04ed6b8d0914e45e109423f09561245d0abb547ac4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-08T17:42:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-08T17:42:11Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T17:42:10Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.758197 5116 
kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.758608 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.758708 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.758801 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.758887 5116 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T17:43:32Z","lastTransitionTime":"2025-12-08T17:43:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.764737 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/428b39f5-eb1c-4f65-b7a4-eeb6e84860cc-iptables-alerter-script\") pod \"iptables-alerter-5jnd7\" (UID: \"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\") " pod="openshift-network-operator/iptables-alerter-5jnd7" Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.764826 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/4f09ae7f-7717-4477-b860-d6bc280c6fd6-cnibin\") pod \"multus-additional-cni-plugins-p56xf\" (UID: \"4f09ae7f-7717-4477-b860-d6bc280c6fd6\") " pod="openshift-multus/multus-additional-cni-plugins-p56xf" Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.764850 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hv9rs\" (UniqueName: \"kubernetes.io/projected/4f09ae7f-7717-4477-b860-d6bc280c6fd6-kube-api-access-hv9rs\") pod \"multus-additional-cni-plugins-p56xf\" (UID: \"4f09ae7f-7717-4477-b860-d6bc280c6fd6\") " pod="openshift-multus/multus-additional-cni-plugins-p56xf" Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.764872 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/84b46b92-c78c-44c8-a27b-4a20c47acd75-multus-socket-dir-parent\") pod \"multus-8wqqf\" (UID: \"84b46b92-c78c-44c8-a27b-4a20c47acd75\") " pod="openshift-multus/multus-8wqqf" Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.764895 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/84b46b92-c78c-44c8-a27b-4a20c47acd75-multus-conf-dir\") pod 
\"multus-8wqqf\" (UID: \"84b46b92-c78c-44c8-a27b-4a20c47acd75\") " pod="openshift-multus/multus-8wqqf" Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.765007 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/fc4541ce-7789-4670-bc75-5c2868e52ce0-ovnkube-identity-cm\") pod \"network-node-identity-dgvkt\" (UID: \"fc4541ce-7789-4670-bc75-5c2868e52ce0\") " pod="openshift-network-node-identity/network-node-identity-dgvkt" Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.765038 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/428b39f5-eb1c-4f65-b7a4-eeb6e84860cc-host-slash\") pod \"iptables-alerter-5jnd7\" (UID: \"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\") " pod="openshift-network-operator/iptables-alerter-5jnd7" Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.765068 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/84b46b92-c78c-44c8-a27b-4a20c47acd75-cni-binary-copy\") pod \"multus-8wqqf\" (UID: \"84b46b92-c78c-44c8-a27b-4a20c47acd75\") " pod="openshift-multus/multus-8wqqf" Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.765087 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/84b46b92-c78c-44c8-a27b-4a20c47acd75-host-run-k8s-cni-cncf-io\") pod \"multus-8wqqf\" (UID: \"84b46b92-c78c-44c8-a27b-4a20c47acd75\") " pod="openshift-multus/multus-8wqqf" Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.765103 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: 
\"kubernetes.io/host-path/84b46b92-c78c-44c8-a27b-4a20c47acd75-host-var-lib-cni-bin\") pod \"multus-8wqqf\" (UID: \"84b46b92-c78c-44c8-a27b-4a20c47acd75\") " pod="openshift-multus/multus-8wqqf" Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.765132 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/84b46b92-c78c-44c8-a27b-4a20c47acd75-host-run-netns\") pod \"multus-8wqqf\" (UID: \"84b46b92-c78c-44c8-a27b-4a20c47acd75\") " pod="openshift-multus/multus-8wqqf" Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.765150 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/34177974-8d82-49d2-a763-391d0df3bbd8-host-etc-kube\") pod \"network-operator-7bdcf4f5bd-7fjxv\" (UID: \"34177974-8d82-49d2-a763-391d0df3bbd8\") " pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.765172 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4f09ae7f-7717-4477-b860-d6bc280c6fd6-cni-binary-copy\") pod \"multus-additional-cni-plugins-p56xf\" (UID: \"4f09ae7f-7717-4477-b860-d6bc280c6fd6\") " pod="openshift-multus/multus-additional-cni-plugins-p56xf" Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.765193 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-dsgwk\" (UniqueName: \"kubernetes.io/projected/428b39f5-eb1c-4f65-b7a4-eeb6e84860cc-kube-api-access-dsgwk\") pod \"iptables-alerter-5jnd7\" (UID: \"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\") " pod="openshift-network-operator/iptables-alerter-5jnd7" Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.765218 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume 
\"webhook-cert\" (UniqueName: \"kubernetes.io/secret/fc4541ce-7789-4670-bc75-5c2868e52ce0-webhook-cert\") pod \"network-node-identity-dgvkt\" (UID: \"fc4541ce-7789-4670-bc75-5c2868e52ce0\") " pod="openshift-network-node-identity/network-node-identity-dgvkt" Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.765309 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/34177974-8d82-49d2-a763-391d0df3bbd8-metrics-tls\") pod \"network-operator-7bdcf4f5bd-7fjxv\" (UID: \"34177974-8d82-49d2-a763-391d0df3bbd8\") " pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.765332 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/84b46b92-c78c-44c8-a27b-4a20c47acd75-cnibin\") pod \"multus-8wqqf\" (UID: \"84b46b92-c78c-44c8-a27b-4a20c47acd75\") " pod="openshift-multus/multus-8wqqf" Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.765350 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/84b46b92-c78c-44c8-a27b-4a20c47acd75-host-run-multus-certs\") pod \"multus-8wqqf\" (UID: \"84b46b92-c78c-44c8-a27b-4a20c47acd75\") " pod="openshift-multus/multus-8wqqf" Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.765373 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-btkvm\" (UniqueName: \"kubernetes.io/projected/84b46b92-c78c-44c8-a27b-4a20c47acd75-kube-api-access-btkvm\") pod \"multus-8wqqf\" (UID: \"84b46b92-c78c-44c8-a27b-4a20c47acd75\") " pod="openshift-multus/multus-8wqqf" Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.765392 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"hostroot\" (UniqueName: \"kubernetes.io/host-path/84b46b92-c78c-44c8-a27b-4a20c47acd75-hostroot\") pod \"multus-8wqqf\" (UID: \"84b46b92-c78c-44c8-a27b-4a20c47acd75\") " pod="openshift-multus/multus-8wqqf" Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.765411 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.765445 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whereabouts-flatfile-configmap\" (UniqueName: \"kubernetes.io/configmap/4f09ae7f-7717-4477-b860-d6bc280c6fd6-whereabouts-flatfile-configmap\") pod \"multus-additional-cni-plugins-p56xf\" (UID: \"4f09ae7f-7717-4477-b860-d6bc280c6fd6\") " pod="openshift-multus/multus-additional-cni-plugins-p56xf" Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.765556 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-m7xz2\" (UniqueName: \"kubernetes.io/projected/34177974-8d82-49d2-a763-391d0df3bbd8-kube-api-access-m7xz2\") pod \"network-operator-7bdcf4f5bd-7fjxv\" (UID: \"34177974-8d82-49d2-a763-391d0df3bbd8\") " pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.765579 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-gwt8b\" (UniqueName: \"kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b\") pod \"network-check-target-fhkjl\" (UID: \"17b87002-b798-480a-8e17-83053d698239\") " pod="openshift-network-diagnostics/network-check-target-fhkjl" Dec 08 17:43:32 crc 
kubenswrapper[5116]: I1208 17:43:32.765596 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-8nt2j\" (UniqueName: \"kubernetes.io/projected/fc4541ce-7789-4670-bc75-5c2868e52ce0-kube-api-access-8nt2j\") pod \"network-node-identity-dgvkt\" (UID: \"fc4541ce-7789-4670-bc75-5c2868e52ce0\") " pod="openshift-network-node-identity/network-node-identity-dgvkt" Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.765613 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/4f09ae7f-7717-4477-b860-d6bc280c6fd6-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-p56xf\" (UID: \"4f09ae7f-7717-4477-b860-d6bc280c6fd6\") " pod="openshift-multus/multus-additional-cni-plugins-p56xf" Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.765628 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/84b46b92-c78c-44c8-a27b-4a20c47acd75-system-cni-dir\") pod \"multus-8wqqf\" (UID: \"84b46b92-c78c-44c8-a27b-4a20c47acd75\") " pod="openshift-multus/multus-8wqqf" Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.765642 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/84b46b92-c78c-44c8-a27b-4a20c47acd75-multus-cni-dir\") pod \"multus-8wqqf\" (UID: \"84b46b92-c78c-44c8-a27b-4a20c47acd75\") " pod="openshift-multus/multus-8wqqf" Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.765657 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/fc4541ce-7789-4670-bc75-5c2868e52ce0-env-overrides\") pod \"network-node-identity-dgvkt\" (UID: \"fc4541ce-7789-4670-bc75-5c2868e52ce0\") " 
pod="openshift-network-node-identity/network-node-identity-dgvkt" Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.765701 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/4f09ae7f-7717-4477-b860-d6bc280c6fd6-system-cni-dir\") pod \"multus-additional-cni-plugins-p56xf\" (UID: \"4f09ae7f-7717-4477-b860-d6bc280c6fd6\") " pod="openshift-multus/multus-additional-cni-plugins-p56xf" Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.765762 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/84b46b92-c78c-44c8-a27b-4a20c47acd75-os-release\") pod \"multus-8wqqf\" (UID: \"84b46b92-c78c-44c8-a27b-4a20c47acd75\") " pod="openshift-multus/multus-8wqqf" Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.765782 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/84b46b92-c78c-44c8-a27b-4a20c47acd75-etc-kubernetes\") pod \"multus-8wqqf\" (UID: \"84b46b92-c78c-44c8-a27b-4a20c47acd75\") " pod="openshift-multus/multus-8wqqf" Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.765804 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/84b46b92-c78c-44c8-a27b-4a20c47acd75-host-var-lib-kubelet\") pod \"multus-8wqqf\" (UID: \"84b46b92-c78c-44c8-a27b-4a20c47acd75\") " pod="openshift-multus/multus-8wqqf" Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.765825 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-l7w75\" (UniqueName: \"kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75\") pod \"network-check-source-5bb8f5cd97-xdvz5\" (UID: 
\"f863fff9-286a-45fa-b8f0-8a86994b8440\") " pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.765845 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/4f09ae7f-7717-4477-b860-d6bc280c6fd6-tuning-conf-dir\") pod \"multus-additional-cni-plugins-p56xf\" (UID: \"4f09ae7f-7717-4477-b860-d6bc280c6fd6\") " pod="openshift-multus/multus-additional-cni-plugins-p56xf" Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.765861 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/84b46b92-c78c-44c8-a27b-4a20c47acd75-host-var-lib-cni-multus\") pod \"multus-8wqqf\" (UID: \"84b46b92-c78c-44c8-a27b-4a20c47acd75\") " pod="openshift-multus/multus-8wqqf" Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.765879 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/4f09ae7f-7717-4477-b860-d6bc280c6fd6-os-release\") pod \"multus-additional-cni-plugins-p56xf\" (UID: \"4f09ae7f-7717-4477-b860-d6bc280c6fd6\") " pod="openshift-multus/multus-additional-cni-plugins-p56xf" Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.765896 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/84b46b92-c78c-44c8-a27b-4a20c47acd75-multus-daemon-config\") pod \"multus-8wqqf\" (UID: \"84b46b92-c78c-44c8-a27b-4a20c47acd75\") " pod="openshift-multus/multus-8wqqf" Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.765990 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: 
\"kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Dec 08 17:43:32 crc kubenswrapper[5116]: E1208 17:43:32.766096 5116 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Dec 08 17:43:32 crc kubenswrapper[5116]: E1208 17:43:32.766267 5116 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf podName:6a9ae5f6-97bd-46ac-bafa-ca1b4452a141 nodeName:}" failed. No retries permitted until 2025-12-08 17:43:33.266157794 +0000 UTC m=+83.063281028 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf") pod "networking-console-plugin-5ff7774fd9-nljh6" (UID: "6a9ae5f6-97bd-46ac-bafa-ca1b4452a141") : object "openshift-network-console"/"networking-console-plugin" not registered Dec 08 17:43:32 crc kubenswrapper[5116]: E1208 17:43:32.766827 5116 secret.go:189] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 08 17:43:32 crc kubenswrapper[5116]: E1208 17:43:32.766973 5116 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert podName:6a9ae5f6-97bd-46ac-bafa-ca1b4452a141 nodeName:}" failed. No retries permitted until 2025-12-08 17:43:33.266941995 +0000 UTC m=+83.064065259 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert") pod "networking-console-plugin-5ff7774fd9-nljh6" (UID: "6a9ae5f6-97bd-46ac-bafa-ca1b4452a141") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.767173 5116 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-08T17:43:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T17:43:32Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fbdfe828b092b23e6d4480daf3e0216aada6debaf1ef1b314a0a31e73ebf13c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-5ff7774fd9-nljh6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.768352 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/fc4541ce-7789-4670-bc75-5c2868e52ce0-env-overrides\") pod \"network-node-identity-dgvkt\" (UID: \"fc4541ce-7789-4670-bc75-5c2868e52ce0\") " pod="openshift-network-node-identity/network-node-identity-dgvkt" Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.768741 5116 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. 
Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory" Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.768784 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/428b39f5-eb1c-4f65-b7a4-eeb6e84860cc-iptables-alerter-script\") pod \"iptables-alerter-5jnd7\" (UID: \"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\") " pod="openshift-network-operator/iptables-alerter-5jnd7" Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.769401 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/fc4541ce-7789-4670-bc75-5c2868e52ce0-ovnkube-identity-cm\") pod \"network-node-identity-dgvkt\" (UID: \"fc4541ce-7789-4670-bc75-5c2868e52ce0\") " pod="openshift-network-node-identity/network-node-identity-dgvkt" Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.771299 5116 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Dec 08 17:43:32 crc kubenswrapper[5116]: E1208 17:43:32.785153 5116 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 08 17:43:32 crc kubenswrapper[5116]: E1208 17:43:32.785213 5116 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 08 17:43:32 crc kubenswrapper[5116]: E1208 17:43:32.785234 5116 projected.go:194] Error preparing data for projected volume kube-api-access-gwt8b for pod openshift-network-diagnostics/network-check-target-fhkjl: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 08 17:43:32 crc kubenswrapper[5116]: E1208 
17:43:32.785354 5116 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b podName:17b87002-b798-480a-8e17-83053d698239 nodeName:}" failed. No retries permitted until 2025-12-08 17:43:33.285329904 +0000 UTC m=+83.082453168 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-gwt8b" (UniqueName: "kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b") pod "network-check-target-fhkjl" (UID: "17b87002-b798-480a-8e17-83053d698239") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.786008 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/34177974-8d82-49d2-a763-391d0df3bbd8-metrics-tls\") pod \"network-operator-7bdcf4f5bd-7fjxv\" (UID: \"34177974-8d82-49d2-a763-391d0df3bbd8\") " pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.788870 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-dsgwk\" (UniqueName: \"kubernetes.io/projected/428b39f5-eb1c-4f65-b7a4-eeb6e84860cc-kube-api-access-dsgwk\") pod \"iptables-alerter-5jnd7\" (UID: \"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\") " pod="openshift-network-operator/iptables-alerter-5jnd7" Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.790374 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/fc4541ce-7789-4670-bc75-5c2868e52ce0-webhook-cert\") pod \"network-node-identity-dgvkt\" (UID: \"fc4541ce-7789-4670-bc75-5c2868e52ce0\") " pod="openshift-network-node-identity/network-node-identity-dgvkt" Dec 08 17:43:32 crc kubenswrapper[5116]: E1208 17:43:32.790436 
5116 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 08 17:43:32 crc kubenswrapper[5116]: E1208 17:43:32.790469 5116 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 08 17:43:32 crc kubenswrapper[5116]: E1208 17:43:32.790524 5116 projected.go:194] Error preparing data for projected volume kube-api-access-l7w75 for pod openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 08 17:43:32 crc kubenswrapper[5116]: E1208 17:43:32.790621 5116 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75 podName:f863fff9-286a-45fa-b8f0-8a86994b8440 nodeName:}" failed. No retries permitted until 2025-12-08 17:43:33.290595251 +0000 UTC m=+83.087718485 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-l7w75" (UniqueName: "kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75") pod "network-check-source-5bb8f5cd97-xdvz5" (UID: "f863fff9-286a-45fa-b8f0-8a86994b8440") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.792197 5116 status_manager.go:919] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zm56h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"17cf2230-8798-4fb0-b89b-43901121fd07\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:43:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:43:32Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:43:32Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:43:32Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8s9wf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8s9wf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8s9wf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8s9wf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8s9wf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8s9wf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8s9wf\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8s9wf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8s9wf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T17:43:32Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-zm56h\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.795449 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-8nt2j\" (UniqueName: \"kubernetes.io/projected/fc4541ce-7789-4670-bc75-5c2868e52ce0-kube-api-access-8nt2j\") pod \"network-node-identity-dgvkt\" (UID: \"fc4541ce-7789-4670-bc75-5c2868e52ce0\") " pod="openshift-network-node-identity/network-node-identity-dgvkt" Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.800098 5116 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-5ft89" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"19151390-7d67-4ae9-8520-ae20b8eb46f8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:43:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:43:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:43:32Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:43:32Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fk87\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:49b34ce0d25eec7a6077f4bf21bf7d4e64e598d28785a20b9ee3594423b7de14\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fk87\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T17:43:32Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-5ft89\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.801457 5116 operation_generator.go:615] "MountVolume.SetUp succeeded 
for volume \"kube-api-access-m7xz2\" (UniqueName: \"kubernetes.io/projected/34177974-8d82-49d2-a763-391d0df3bbd8-kube-api-access-m7xz2\") pod \"network-operator-7bdcf4f5bd-7fjxv\" (UID: \"34177974-8d82-49d2-a763-391d0df3bbd8\") " pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.811169 5116 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"189e0ebf-9023-4b40-8604-9b4c2dab2104\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:42:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:42:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:42:10Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:42:10Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:42:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"265m\\\",\\\"memory\\\":\\\"1Gi\\\"},\\\"containerID\\\":\\\"cri-o://2d8111a46c7720fdc55187a62e21d56405c23453eadcb58ce2e60afa9d805d99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"265m\\\",\\\"memory\\\":\\\"1Gi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T17:42:13Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-bundle-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://d1d7f60c242fd483e3d517ae2d01dec2a0372b1ae71c849dea9833973f319144\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/open
shift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T17:42:13Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://8a6c4caab224a27d22a9142dc7e952a80974a62ec3a608d0509e3e03b1aca062\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T17:42:13Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://0f4e3801c4
1a7985f11a820bd7e71ba696a24f77bb1468bbcd579c5b5d8a1ba3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0f4e3801c41a7985f11a820bd7e71ba696a24f77bb1468bbcd579c5b5d8a1ba3\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-08T17:43:20Z\\\",\\\"message\\\":\\\"envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nW1208 17:43:19.645364 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1208 17:43:19.645509 1 builder.go:304] check-endpoints version v0.0.0-unknown-c3d9642-c3d9642\\\\nI1208 17:43:19.646375 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-104845118/tls.crt::/tmp/serving-cert-104845118/tls.key\\\\\\\"\\\\nI1208 17:43:20.025477 1 requestheader_controller.go:255] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1208 17:43:20.027657 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1208 17:43:20.027680 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1208 17:43:20.027714 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1208 17:43:20.027723 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1208 17:43:20.032328 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1208 17:43:20.032364 1 genericapiserver.go:546] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1208 17:43:20.032389 1 
secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1208 17:43:20.032397 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1208 17:43:20.032403 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1208 17:43:20.032408 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1208 17:43:20.032412 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1208 17:43:20.032416 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1208 17:43:20.034300 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-08T17:43:18Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://3a4a41617be531895a63acb8a6cfb03f4bb19970cfb4215f42d7564aec87084a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T17:42:13Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://ca6d941896a3977ebbfc764c7deeb546d556e330d3ae8a85f1920e3bc996c549\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\
"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ca6d941896a3977ebbfc764c7deeb546d556e330d3ae8a85f1920e3bc996c549\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-08T17:42:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-08T17:42:11Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T17:42:10Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.821378 5116 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34177974-8d82-49d2-a763-391d0df3bbd8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-08T17:43:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T17:43:32Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7xz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-7bdcf4f5bd-7fjxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.830821 5116 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f863fff9-286a-45fa-b8f0-8a86994b8440\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-08T17:43:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T17:43:32Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l7w75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-5bb8f5cd97-xdvz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.838828 5116 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-fhkjl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"17b87002-b798-480a-8e17-83053d698239\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-08T17:43:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T17:43:32Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwt8b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-fhkjl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.847833 5116 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/multus-8wqqf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"84b46b92-c78c-44c8-a27b-4a20c47acd75\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:43:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:43:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:43:32Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:43:32Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-btkvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T17:43:32Z\\\"}}\" for pod \"openshift-multus\"/\"multus-8wqqf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.855096 5116 status_manager.go:919] "Failed to update status for pod" pod="openshift-dns/node-resolver-5phkw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e9b8c7c0-e0b8-44ea-adc9-41342c754061\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:43:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:43:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:43:32Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:43:32Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8qx55\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T17:43:32Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-5phkw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.861747 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.861817 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.861837 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.861864 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.861882 5116 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T17:43:32Z","lastTransitionTime":"2025-12-08T17:43:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.863070 5116 status_manager.go:919] "Failed to update status for pod" pod="openshift-image-registry/node-ca-ps59m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"51c251ea-ee75-4ef8-be21-e45ffbd0c2b3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:43:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:43:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:43:32Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:43:32Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kflvf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T17:43:32Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-ps59m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.866464 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-config\") pod \"d45be74c-0d98-4d18-90e4-f7ef1b6daaf7\" (UID: \"d45be74c-0d98-4d18-90e4-f7ef1b6daaf7\") "
Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.866506 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09cfa50b-4138-4585-a53e-64dd3ab73335-config\") pod \"09cfa50b-4138-4585-a53e-64dd3ab73335\" (UID: \"09cfa50b-4138-4585-a53e-64dd3ab73335\") "
Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.866528 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-image-registry-operator-tls\") pod \"20ce4d18-fe25-4696-ad7c-1bd2d6200a3e\" (UID: \"20ce4d18-fe25-4696-ad7c-1bd2d6200a3e\") "
Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.866549 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-image-import-ca\") pod \"d19cb085-0c5b-4810-b654-ce7923221d90\" (UID: \"d19cb085-0c5b-4810-b654-ce7923221d90\") "
Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.867448 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-twvbl\" (UniqueName: \"kubernetes.io/projected/b4750666-1362-4001-abd0-6f89964cc621-kube-api-access-twvbl\") pod \"b4750666-1362-4001-abd0-6f89964cc621\" (UID: \"b4750666-1362-4001-abd0-6f89964cc621\") "
Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.867502 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/f559dfa3-3917-43a2-97f6-61ddfda10e93-etcd-client\") pod \"f559dfa3-3917-43a2-97f6-61ddfda10e93\" (UID: \"f559dfa3-3917-43a2-97f6-61ddfda10e93\") "
Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.867541 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6g4lr\" (UniqueName: \"kubernetes.io/projected/f7e2c886-118e-43bb-bef1-c78134de392b-kube-api-access-6g4lr\") pod \"f7e2c886-118e-43bb-bef1-c78134de392b\" (UID: \"f7e2c886-118e-43bb-bef1-c78134de392b\") "
Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.867576 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zg8nc\" (UniqueName: \"kubernetes.io/projected/2325ffef-9d5b-447f-b00e-3efc429acefe-kube-api-access-zg8nc\") pod \"2325ffef-9d5b-447f-b00e-3efc429acefe\" (UID: \"2325ffef-9d5b-447f-b00e-3efc429acefe\") "
Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.867636 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rzt4w\" (UniqueName: \"kubernetes.io/projected/a52afe44-fb37-46ed-a1f8-bf39727a3cbe-kube-api-access-rzt4w\") pod \"a52afe44-fb37-46ed-a1f8-bf39727a3cbe\" (UID: \"a52afe44-fb37-46ed-a1f8-bf39727a3cbe\") "
Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.867662 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d19cb085-0c5b-4810-b654-ce7923221d90-serving-cert\") pod \"d19cb085-0c5b-4810-b654-ce7923221d90\" (UID: \"d19cb085-0c5b-4810-b654-ce7923221d90\") "
Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.867689 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/d565531a-ff86-4608-9d19-767de01ac31b-auth-proxy-config\") pod \"d565531a-ff86-4608-9d19-767de01ac31b\" (UID: \"d565531a-ff86-4608-9d19-767de01ac31b\") "
Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.867713 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-audit-policies\") pod \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\" (UID: \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\") "
Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.867736 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/d19cb085-0c5b-4810-b654-ce7923221d90-encryption-config\") pod \"d19cb085-0c5b-4810-b654-ce7923221d90\" (UID: \"d19cb085-0c5b-4810-b654-ce7923221d90\") "
Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.867761 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whereabouts-flatfile-configmap\" (UniqueName: \"kubernetes.io/configmap/869851b9-7ffb-4af0-b166-1d8aa40a5f80-whereabouts-flatfile-configmap\") pod \"869851b9-7ffb-4af0-b166-1d8aa40a5f80\" (UID: \"869851b9-7ffb-4af0-b166-1d8aa40a5f80\") "
Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.867789 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pgx6b\" (UniqueName: \"kubernetes.io/projected/f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4-kube-api-access-pgx6b\") pod \"f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4\" (UID: \"f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4\") "
Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.868572 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4-kube-api-access-pgx6b" (OuterVolumeSpecName: "kube-api-access-pgx6b") pod "f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4" (UID: "f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4"). InnerVolumeSpecName "kube-api-access-pgx6b". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.868734 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-config" (OuterVolumeSpecName: "config") pod "d45be74c-0d98-4d18-90e4-f7ef1b6daaf7" (UID: "d45be74c-0d98-4d18-90e4-f7ef1b6daaf7"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.869416 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b4750666-1362-4001-abd0-6f89964cc621-kube-api-access-twvbl" (OuterVolumeSpecName: "kube-api-access-twvbl") pod "b4750666-1362-4001-abd0-6f89964cc621" (UID: "b4750666-1362-4001-abd0-6f89964cc621"). InnerVolumeSpecName "kube-api-access-twvbl". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.869425 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-image-registry-operator-tls" (OuterVolumeSpecName: "image-registry-operator-tls") pod "20ce4d18-fe25-4696-ad7c-1bd2d6200a3e" (UID: "20ce4d18-fe25-4696-ad7c-1bd2d6200a3e"). InnerVolumeSpecName "image-registry-operator-tls". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.869547 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f559dfa3-3917-43a2-97f6-61ddfda10e93-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "f559dfa3-3917-43a2-97f6-61ddfda10e93" (UID: "f559dfa3-3917-43a2-97f6-61ddfda10e93"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.869512 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f7e2c886-118e-43bb-bef1-c78134de392b-kube-api-access-6g4lr" (OuterVolumeSpecName: "kube-api-access-6g4lr") pod "f7e2c886-118e-43bb-bef1-c78134de392b" (UID: "f7e2c886-118e-43bb-bef1-c78134de392b"). InnerVolumeSpecName "kube-api-access-6g4lr". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.869801 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dztfv\" (UniqueName: \"kubernetes.io/projected/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-kube-api-access-dztfv\") pod \"d45be74c-0d98-4d18-90e4-f7ef1b6daaf7\" (UID: \"d45be74c-0d98-4d18-90e4-f7ef1b6daaf7\") "
Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.869862 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/593a3561-7760-45c5-8f91-5aaef7475d0f-certs\") pod \"593a3561-7760-45c5-8f91-5aaef7475d0f\" (UID: \"593a3561-7760-45c5-8f91-5aaef7475d0f\") "
Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.869914 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-trusted-ca-bundle\") pod \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\" (UID: \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\") "
Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.869961 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-trusted-ca\") pod \"20ce4d18-fe25-4696-ad7c-1bd2d6200a3e\" (UID: \"20ce4d18-fe25-4696-ad7c-1bd2d6200a3e\") "
Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.869999 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-console-serving-cert\") pod \"6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca\" (UID: \"6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca\") "
Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.869997 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d19cb085-0c5b-4810-b654-ce7923221d90-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "d19cb085-0c5b-4810-b654-ce7923221d90" (UID: "d19cb085-0c5b-4810-b654-ce7923221d90"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.870044 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-oauth-serving-cert\") pod \"6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca\" (UID: \"6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca\") "
Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.870546 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d19cb085-0c5b-4810-b654-ce7923221d90-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "d19cb085-0c5b-4810-b654-ce7923221d90" (UID: "d19cb085-0c5b-4810-b654-ce7923221d90"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.870796 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/9e9b5059-1b3e-4067-a63d-2952cbe863af-bound-sa-token\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.871505 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/567683bd-0efc-4f21-b076-e28559628404-etcd-ca\") pod \"567683bd-0efc-4f21-b076-e28559628404\" (UID: \"567683bd-0efc-4f21-b076-e28559628404\") "
Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.871569 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/81e39f7b-62e4-4fc9-992a-6535ce127a02-multus-daemon-config\") pod \"81e39f7b-62e4-4fc9-992a-6535ce127a02\" (UID: \"81e39f7b-62e4-4fc9-992a-6535ce127a02\") "
Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.871621 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6dmhf\" (UniqueName: \"kubernetes.io/projected/736c54fe-349c-4bb9-870a-d1c1d1c03831-kube-api-access-6dmhf\") pod \"736c54fe-349c-4bb9-870a-d1c1d1c03831\" (UID: \"736c54fe-349c-4bb9-870a-d1c1d1c03831\") "
Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.871694 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/149b3c48-e17c-4a66-a835-d86dabf6ff13-utilities\") pod \"149b3c48-e17c-4a66-a835-d86dabf6ff13\" (UID: \"149b3c48-e17c-4a66-a835-d86dabf6ff13\") "
Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.871736 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/92dfbade-90b6-4169-8c07-72cff7f2c82b-metrics-tls\") pod \"92dfbade-90b6-4169-8c07-72cff7f2c82b\" (UID: \"92dfbade-90b6-4169-8c07-72cff7f2c82b\") "
Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.871805 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-config\") pod \"fc8db2c7-859d-47b3-a900-2bd0c0b2973b\" (UID: \"fc8db2c7-859d-47b3-a900-2bd0c0b2973b\") "
Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.871901 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/9e9b5059-1b3e-4067-a63d-2952cbe863af-installation-pull-secrets\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.871954 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/18f80adb-c1c3-49ba-8ee4-932c851d3897-service-ca-bundle\") pod \"18f80adb-c1c3-49ba-8ee4-932c851d3897\" (UID: \"18f80adb-c1c3-49ba-8ee4-932c851d3897\") "
Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.872001 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/ce090a97-9ab6-4c40-a719-64ff2acd9778-signing-key\") pod \"ce090a97-9ab6-4c40-a719-64ff2acd9778\" (UID: \"ce090a97-9ab6-4c40-a719-64ff2acd9778\") "
Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.872072 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/af33e427-6803-48c2-a76a-dd9deb7cbf9a-ovnkube-config\") pod \"af33e427-6803-48c2-a76a-dd9deb7cbf9a\" (UID: \"af33e427-6803-48c2-a76a-dd9deb7cbf9a\") "
Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.872118 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/567683bd-0efc-4f21-b076-e28559628404-tmp-dir\") pod \"567683bd-0efc-4f21-b076-e28559628404\" (UID: \"567683bd-0efc-4f21-b076-e28559628404\") "
Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.872175 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-serving-cert\") pod \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\" (UID: \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\") "
Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.872234 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/9e9b5059-1b3e-4067-a63d-2952cbe863af-registry-tls\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.872320 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/736c54fe-349c-4bb9-870a-d1c1d1c03831-tmp\") pod \"736c54fe-349c-4bb9-870a-d1c1d1c03831\" (UID: \"736c54fe-349c-4bb9-870a-d1c1d1c03831\") "
Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.872365 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/567683bd-0efc-4f21-b076-e28559628404-config\") pod \"567683bd-0efc-4f21-b076-e28559628404\" (UID: \"567683bd-0efc-4f21-b076-e28559628404\") "
Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.872403 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cc85e424-18b2-4924-920b-bd291a8c4b01-catalog-content\") pod \"cc85e424-18b2-4924-920b-bd291a8c4b01\" (UID: \"cc85e424-18b2-4924-920b-bd291a8c4b01\") "
Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.871341 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d565531a-ff86-4608-9d19-767de01ac31b-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "d565531a-ff86-4608-9d19-767de01ac31b" (UID: "d565531a-ff86-4608-9d19-767de01ac31b"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.872692 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca" (UID: "6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.873503 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/736c54fe-349c-4bb9-870a-d1c1d1c03831-tmp" (OuterVolumeSpecName: "tmp") pod "736c54fe-349c-4bb9-870a-d1c1d1c03831" (UID: "736c54fe-349c-4bb9-870a-d1c1d1c03831"). InnerVolumeSpecName "tmp". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.873696 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9e9b5059-1b3e-4067-a63d-2952cbe863af-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.871417 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "20ce4d18-fe25-4696-ad7c-1bd2d6200a3e" (UID: "20ce4d18-fe25-4696-ad7c-1bd2d6200a3e"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.871718 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-image-import-ca" (OuterVolumeSpecName: "image-import-ca") pod "d19cb085-0c5b-4810-b654-ce7923221d90" (UID: "d19cb085-0c5b-4810-b654-ce7923221d90"). InnerVolumeSpecName "image-import-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.871945 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/593a3561-7760-45c5-8f91-5aaef7475d0f-certs" (OuterVolumeSpecName: "certs") pod "593a3561-7760-45c5-8f91-5aaef7475d0f" (UID: "593a3561-7760-45c5-8f91-5aaef7475d0f"). InnerVolumeSpecName "certs". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.872053 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca" (UID: "6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.872312 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/869851b9-7ffb-4af0-b166-1d8aa40a5f80-whereabouts-flatfile-configmap" (OuterVolumeSpecName: "whereabouts-flatfile-configmap") pod "869851b9-7ffb-4af0-b166-1d8aa40a5f80" (UID: "869851b9-7ffb-4af0-b166-1d8aa40a5f80"). InnerVolumeSpecName "whereabouts-flatfile-configmap". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.872540 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a52afe44-fb37-46ed-a1f8-bf39727a3cbe-kube-api-access-rzt4w" (OuterVolumeSpecName: "kube-api-access-rzt4w") pod "a52afe44-fb37-46ed-a1f8-bf39727a3cbe" (UID: "a52afe44-fb37-46ed-a1f8-bf39727a3cbe"). InnerVolumeSpecName "kube-api-access-rzt4w". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.872556 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "6edfcf45-925b-4eff-b940-95b6fc0b85d4" (UID: "6edfcf45-925b-4eff-b940-95b6fc0b85d4"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.872608 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-kube-api-access-dztfv" (OuterVolumeSpecName: "kube-api-access-dztfv") pod "d45be74c-0d98-4d18-90e4-f7ef1b6daaf7" (UID: "d45be74c-0d98-4d18-90e4-f7ef1b6daaf7"). InnerVolumeSpecName "kube-api-access-dztfv". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.872650 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "6edfcf45-925b-4eff-b940-95b6fc0b85d4" (UID: "6edfcf45-925b-4eff-b940-95b6fc0b85d4"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.873904 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9e9b5059-1b3e-4067-a63d-2952cbe863af-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.874025 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/567683bd-0efc-4f21-b076-e28559628404-tmp-dir" (OuterVolumeSpecName: "tmp-dir") pod "567683bd-0efc-4f21-b076-e28559628404" (UID: "567683bd-0efc-4f21-b076-e28559628404"). InnerVolumeSpecName "tmp-dir". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.874050 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9e9b5059-1b3e-4067-a63d-2952cbe863af-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.874151 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/81e39f7b-62e4-4fc9-992a-6535ce127a02-multus-daemon-config" (OuterVolumeSpecName: "multus-daemon-config") pod "81e39f7b-62e4-4fc9-992a-6535ce127a02" (UID: "81e39f7b-62e4-4fc9-992a-6535ce127a02"). InnerVolumeSpecName "multus-daemon-config". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.874145 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-config" (OuterVolumeSpecName: "config") pod "fc8db2c7-859d-47b3-a900-2bd0c0b2973b" (UID: "fc8db2c7-859d-47b3-a900-2bd0c0b2973b"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.874341 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ce090a97-9ab6-4c40-a719-64ff2acd9778-signing-key" (OuterVolumeSpecName: "signing-key") pod "ce090a97-9ab6-4c40-a719-64ff2acd9778" (UID: "ce090a97-9ab6-4c40-a719-64ff2acd9778"). InnerVolumeSpecName "signing-key". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.874625 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/736c54fe-349c-4bb9-870a-d1c1d1c03831-kube-api-access-6dmhf" (OuterVolumeSpecName: "kube-api-access-6dmhf") pod "736c54fe-349c-4bb9-870a-d1c1d1c03831" (UID: "736c54fe-349c-4bb9-870a-d1c1d1c03831"). InnerVolumeSpecName "kube-api-access-6dmhf". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.874677 5116 status_manager.go:919] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-47dgv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2e7c2199-9693-42b9-9431-2b12b5abe1d1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:43:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:43:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:43:32Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:43:32Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vmvhd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vmvhd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T17:43:32Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-57b78d8988-47dgv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.874433 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c5f2bfad-70f6-4185-a3d9-81ce12720767-serving-cert\") pod \"c5f2bfad-70f6-4185-a3d9-81ce12720767\" (UID: \"c5f2bfad-70f6-4185-a3d9-81ce12720767\") "
Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.879235 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f559dfa3-3917-43a2-97f6-61ddfda10e93-serving-cert\") pod \"f559dfa3-3917-43a2-97f6-61ddfda10e93\" (UID: \"f559dfa3-3917-43a2-97f6-61ddfda10e93\") "
Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.879569 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/92dfbade-90b6-4169-8c07-72cff7f2c82b-tmp-dir\") pod \"92dfbade-90b6-4169-8c07-72cff7f2c82b\" (UID: \"92dfbade-90b6-4169-8c07-72cff7f2c82b\") "
Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.879625 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wbmqg\" (UniqueName: \"kubernetes.io/projected/18f80adb-c1c3-49ba-8ee4-932c851d3897-kube-api-access-wbmqg\") pod \"18f80adb-c1c3-49ba-8ee4-932c851d3897\" (UID: \"18f80adb-c1c3-49ba-8ee4-932c851d3897\") "
Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.879703 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/c491984c-7d4b-44aa-8c1e-d7974424fa47-machine-api-operator-tls\") pod \"c491984c-7d4b-44aa-8c1e-d7974424fa47\" (UID: \"c491984c-7d4b-44aa-8c1e-d7974424fa47\") "
Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 
17:43:32.879788 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d4tqq\" (UniqueName: \"kubernetes.io/projected/6ee8fbd3-1f81-4666-96da-5afc70819f1a-kube-api-access-d4tqq\") pod \"6ee8fbd3-1f81-4666-96da-5afc70819f1a\" (UID: \"6ee8fbd3-1f81-4666-96da-5afc70819f1a\") " Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.879868 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/af33e427-6803-48c2-a76a-dd9deb7cbf9a-ovn-node-metrics-cert\") pod \"af33e427-6803-48c2-a76a-dd9deb7cbf9a\" (UID: \"af33e427-6803-48c2-a76a-dd9deb7cbf9a\") " Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.879950 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/149b3c48-e17c-4a66-a835-d86dabf6ff13-catalog-content\") pod \"149b3c48-e17c-4a66-a835-d86dabf6ff13\" (UID: \"149b3c48-e17c-4a66-a835-d86dabf6ff13\") " Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.880342 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09cfa50b-4138-4585-a53e-64dd3ab73335-serving-cert\") pod \"09cfa50b-4138-4585-a53e-64dd3ab73335\" (UID: \"09cfa50b-4138-4585-a53e-64dd3ab73335\") " Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.880423 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-profile-collector-cert\") pod \"7fcc6409-8a0f-44c3-89e7-5aecd7610f8a\" (UID: \"7fcc6409-8a0f-44c3-89e7-5aecd7610f8a\") " Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.880478 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: 
\"kubernetes.io/secret/d19cb085-0c5b-4810-b654-ce7923221d90-etcd-client\") pod \"d19cb085-0c5b-4810-b654-ce7923221d90\" (UID: \"d19cb085-0c5b-4810-b654-ce7923221d90\") " Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.880521 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pllx6\" (UniqueName: \"kubernetes.io/projected/81e39f7b-62e4-4fc9-992a-6535ce127a02-kube-api-access-pllx6\") pod \"81e39f7b-62e4-4fc9-992a-6535ce127a02\" (UID: \"81e39f7b-62e4-4fc9-992a-6535ce127a02\") " Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.880568 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/b4750666-1362-4001-abd0-6f89964cc621-proxy-tls\") pod \"b4750666-1362-4001-abd0-6f89964cc621\" (UID: \"b4750666-1362-4001-abd0-6f89964cc621\") " Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.880641 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/a208c9c2-333b-4b4a-be0d-bc32ec38a821-package-server-manager-serving-cert\") pod \"a208c9c2-333b-4b4a-be0d-bc32ec38a821\" (UID: \"a208c9c2-333b-4b4a-be0d-bc32ec38a821\") " Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.880733 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/af33e427-6803-48c2-a76a-dd9deb7cbf9a-ovnkube-script-lib\") pod \"af33e427-6803-48c2-a76a-dd9deb7cbf9a\" (UID: \"af33e427-6803-48c2-a76a-dd9deb7cbf9a\") " Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.880993 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9vsz9\" (UniqueName: \"kubernetes.io/projected/c491984c-7d4b-44aa-8c1e-d7974424fa47-kube-api-access-9vsz9\") pod \"c491984c-7d4b-44aa-8c1e-d7974424fa47\" (UID: \"c491984c-7d4b-44aa-8c1e-d7974424fa47\") 
" Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.881081 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/d565531a-ff86-4608-9d19-767de01ac31b-proxy-tls\") pod \"d565531a-ff86-4608-9d19-767de01ac31b\" (UID: \"d565531a-ff86-4608-9d19-767de01ac31b\") " Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.881214 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-trusted-ca-bundle\") pod \"d19cb085-0c5b-4810-b654-ce7923221d90\" (UID: \"d19cb085-0c5b-4810-b654-ce7923221d90\") " Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.881306 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-marketplace-trusted-ca\") pod \"b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a\" (UID: \"b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a\") " Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.881361 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-tmp\") pod \"20ce4d18-fe25-4696-ad7c-1bd2d6200a3e\" (UID: \"20ce4d18-fe25-4696-ad7c-1bd2d6200a3e\") " Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.881103 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f559dfa3-3917-43a2-97f6-61ddfda10e93-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "f559dfa3-3917-43a2-97f6-61ddfda10e93" (UID: "f559dfa3-3917-43a2-97f6-61ddfda10e93"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.874866 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "6edfcf45-925b-4eff-b940-95b6fc0b85d4" (UID: "6edfcf45-925b-4eff-b940-95b6fc0b85d4"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.875019 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/18f80adb-c1c3-49ba-8ee4-932c851d3897-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "18f80adb-c1c3-49ba-8ee4-932c851d3897" (UID: "18f80adb-c1c3-49ba-8ee4-932c851d3897"). InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.877095 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c5f2bfad-70f6-4185-a3d9-81ce12720767-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "c5f2bfad-70f6-4185-a3d9-81ce12720767" (UID: "c5f2bfad-70f6-4185-a3d9-81ce12720767"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.877886 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/567683bd-0efc-4f21-b076-e28559628404-config" (OuterVolumeSpecName: "config") pod "567683bd-0efc-4f21-b076-e28559628404" (UID: "567683bd-0efc-4f21-b076-e28559628404"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.881433 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/81e39f7b-62e4-4fc9-992a-6535ce127a02-kube-api-access-pllx6" (OuterVolumeSpecName: "kube-api-access-pllx6") pod "81e39f7b-62e4-4fc9-992a-6535ce127a02" (UID: "81e39f7b-62e4-4fc9-992a-6535ce127a02"). InnerVolumeSpecName "kube-api-access-pllx6". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.881728 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/af33e427-6803-48c2-a76a-dd9deb7cbf9a-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "af33e427-6803-48c2-a76a-dd9deb7cbf9a" (UID: "af33e427-6803-48c2-a76a-dd9deb7cbf9a"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.881885 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d19cb085-0c5b-4810-b654-ce7923221d90-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "d19cb085-0c5b-4810-b654-ce7923221d90" (UID: "d19cb085-0c5b-4810-b654-ce7923221d90"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.882144 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "7fcc6409-8a0f-44c3-89e7-5aecd7610f8a" (UID: "7fcc6409-8a0f-44c3-89e7-5aecd7610f8a"). InnerVolumeSpecName "profile-collector-cert". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.882527 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6ee8fbd3-1f81-4666-96da-5afc70819f1a-kube-api-access-d4tqq" (OuterVolumeSpecName: "kube-api-access-d4tqq") pod "6ee8fbd3-1f81-4666-96da-5afc70819f1a" (UID: "6ee8fbd3-1f81-4666-96da-5afc70819f1a"). InnerVolumeSpecName "kube-api-access-d4tqq". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.882555 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hckvg\" (UniqueName: \"kubernetes.io/projected/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-kube-api-access-hckvg\") pod \"fc8db2c7-859d-47b3-a900-2bd0c0b2973b\" (UID: \"fc8db2c7-859d-47b3-a900-2bd0c0b2973b\") " Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.882677 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/31fa8943-81cc-4750-a0b7-0fa9ab5af883-catalog-content\") pod \"31fa8943-81cc-4750-a0b7-0fa9ab5af883\" (UID: \"31fa8943-81cc-4750-a0b7-0fa9ab5af883\") " Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.882780 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "d19cb085-0c5b-4810-b654-ce7923221d90" (UID: "d19cb085-0c5b-4810-b654-ce7923221d90"). InnerVolumeSpecName "trusted-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.882902 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/a555ff2e-0be6-46d5-897d-863bb92ae2b3-tmp\") pod \"a555ff2e-0be6-46d5-897d-863bb92ae2b3\" (UID: \"a555ff2e-0be6-46d5-897d-863bb92ae2b3\") " Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.882977 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l9stx\" (UniqueName: \"kubernetes.io/projected/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-kube-api-access-l9stx\") pod \"6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca\" (UID: \"6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca\") " Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.883073 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8nb9c\" (UniqueName: \"kubernetes.io/projected/6edfcf45-925b-4eff-b940-95b6fc0b85d4-kube-api-access-8nb9c\") pod \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\" (UID: \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\") " Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.883151 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/9f71a554-e414-4bc3-96d2-674060397afe-metrics-tls\") pod \"9f71a554-e414-4bc3-96d2-674060397afe\" (UID: \"9f71a554-e414-4bc3-96d2-674060397afe\") " Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.883214 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f65c0ac1-8bca-454d-a2e6-e35cb418beac-kube-api-access\") pod \"f65c0ac1-8bca-454d-a2e6-e35cb418beac\" (UID: \"f65c0ac1-8bca-454d-a2e6-e35cb418beac\") " Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.883356 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/secret/c491984c-7d4b-44aa-8c1e-d7974424fa47-machine-api-operator-tls" (OuterVolumeSpecName: "machine-api-operator-tls") pod "c491984c-7d4b-44aa-8c1e-d7974424fa47" (UID: "c491984c-7d4b-44aa-8c1e-d7974424fa47"). InnerVolumeSpecName "machine-api-operator-tls". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.883492 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.883588 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c491984c-7d4b-44aa-8c1e-d7974424fa47-kube-api-access-9vsz9" (OuterVolumeSpecName: "kube-api-access-9vsz9") pod "c491984c-7d4b-44aa-8c1e-d7974424fa47" (UID: "c491984c-7d4b-44aa-8c1e-d7974424fa47"). InnerVolumeSpecName "kube-api-access-9vsz9". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 08 17:43:32 crc kubenswrapper[5116]: E1208 17:43:32.883666 5116 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 17:43:33.383637636 +0000 UTC m=+83.180760910 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.883847 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a555ff2e-0be6-46d5-897d-863bb92ae2b3-tmp" (OuterVolumeSpecName: "tmp") pod "a555ff2e-0be6-46d5-897d-863bb92ae2b3" (UID: "a555ff2e-0be6-46d5-897d-863bb92ae2b3"). InnerVolumeSpecName "tmp". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.884210 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-kube-api-access-hckvg" (OuterVolumeSpecName: "kube-api-access-hckvg") pod "fc8db2c7-859d-47b3-a900-2bd0c0b2973b" (UID: "fc8db2c7-859d-47b3-a900-2bd0c0b2973b"). InnerVolumeSpecName "kube-api-access-hckvg". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.884602 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9f71a554-e414-4bc3-96d2-674060397afe-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "9f71a554-e414-4bc3-96d2-674060397afe" (UID: "9f71a554-e414-4bc3-96d2-674060397afe"). InnerVolumeSpecName "metrics-tls". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.884625 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a" (UID: "b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.884962 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/18f80adb-c1c3-49ba-8ee4-932c851d3897-kube-api-access-wbmqg" (OuterVolumeSpecName: "kube-api-access-wbmqg") pod "18f80adb-c1c3-49ba-8ee4-932c851d3897" (UID: "18f80adb-c1c3-49ba-8ee4-932c851d3897"). InnerVolumeSpecName "kube-api-access-wbmqg". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.885225 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-tmp" (OuterVolumeSpecName: "tmp") pod "20ce4d18-fe25-4696-ad7c-1bd2d6200a3e" (UID: "20ce4d18-fe25-4696-ad7c-1bd2d6200a3e"). InnerVolumeSpecName "tmp". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.885432 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f65c0ac1-8bca-454d-a2e6-e35cb418beac-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "f65c0ac1-8bca-454d-a2e6-e35cb418beac" (UID: "f65c0ac1-8bca-454d-a2e6-e35cb418beac"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.885698 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09cfa50b-4138-4585-a53e-64dd3ab73335-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09cfa50b-4138-4585-a53e-64dd3ab73335" (UID: "09cfa50b-4138-4585-a53e-64dd3ab73335"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.885807 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b4750666-1362-4001-abd0-6f89964cc621-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "b4750666-1362-4001-abd0-6f89964cc621" (UID: "b4750666-1362-4001-abd0-6f89964cc621"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.885825 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/94a6e063-3d1a-4d44-875d-185291448c31-utilities\") pod \"94a6e063-3d1a-4d44-875d-185291448c31\" (UID: \"94a6e063-3d1a-4d44-875d-185291448c31\") " Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.885984 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-94l9h\" (UniqueName: \"kubernetes.io/projected/16bdd140-dce1-464c-ab47-dd5798d1d256-kube-api-access-94l9h\") pod \"16bdd140-dce1-464c-ab47-dd5798d1d256\" (UID: \"16bdd140-dce1-464c-ab47-dd5798d1d256\") " Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.886178 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f559dfa3-3917-43a2-97f6-61ddfda10e93-trusted-ca-bundle\") pod \"f559dfa3-3917-43a2-97f6-61ddfda10e93\" (UID: \"f559dfa3-3917-43a2-97f6-61ddfda10e93\") " Dec 08 17:43:32 crc 
kubenswrapper[5116]: I1208 17:43:32.886285 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/9e9b5059-1b3e-4067-a63d-2952cbe863af-registry-certificates\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.886452 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-service-ca\") pod \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\" (UID: \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\") " Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.886543 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/ce090a97-9ab6-4c40-a719-64ff2acd9778-signing-cabundle\") pod \"ce090a97-9ab6-4c40-a719-64ff2acd9778\" (UID: \"ce090a97-9ab6-4c40-a719-64ff2acd9778\") " Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.886630 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/736c54fe-349c-4bb9-870a-d1c1d1c03831-client-ca\") pod \"736c54fe-349c-4bb9-870a-d1c1d1c03831\" (UID: \"736c54fe-349c-4bb9-870a-d1c1d1c03831\") " Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.886708 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xxfcv\" (UniqueName: \"kubernetes.io/projected/9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff-kube-api-access-xxfcv\") pod \"9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff\" (UID: \"9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff\") " Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.886798 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5lcfw\" (UniqueName: 
\"kubernetes.io/projected/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-kube-api-access-5lcfw\") pod \"dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9\" (UID: \"dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9\") " Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.886885 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xnxbn\" (UniqueName: \"kubernetes.io/projected/ce090a97-9ab6-4c40-a719-64ff2acd9778-kube-api-access-xnxbn\") pod \"ce090a97-9ab6-4c40-a719-64ff2acd9778\" (UID: \"ce090a97-9ab6-4c40-a719-64ff2acd9778\") " Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.886968 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zsb9b\" (UniqueName: \"kubernetes.io/projected/09cfa50b-4138-4585-a53e-64dd3ab73335-kube-api-access-zsb9b\") pod \"09cfa50b-4138-4585-a53e-64dd3ab73335\" (UID: \"09cfa50b-4138-4585-a53e-64dd3ab73335\") " Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.887042 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-machine-approver-tls\") pod \"fc8db2c7-859d-47b3-a900-2bd0c0b2973b\" (UID: \"fc8db2c7-859d-47b3-a900-2bd0c0b2973b\") " Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.887119 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/0dd0fbac-8c0d-4228-8faa-abbeedabf7db-webhook-certs\") pod \"0dd0fbac-8c0d-4228-8faa-abbeedabf7db\" (UID: \"0dd0fbac-8c0d-4228-8faa-abbeedabf7db\") " Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.887235 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/f559dfa3-3917-43a2-97f6-61ddfda10e93-etcd-serving-ca\") pod \"f559dfa3-3917-43a2-97f6-61ddfda10e93\" (UID: \"f559dfa3-3917-43a2-97f6-61ddfda10e93\") " Dec 08 
17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.887392 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-marketplace-operator-metrics\") pod \"b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a\" (UID: \"b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a\") " Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.887486 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/7df94c10-441d-4386-93a6-6730fb7bcde0-env-overrides\") pod \"7df94c10-441d-4386-93a6-6730fb7bcde0\" (UID: \"7df94c10-441d-4386-93a6-6730fb7bcde0\") " Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.887605 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff-utilities\") pod \"9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff\" (UID: \"9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff\") " Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.887726 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/94a6e063-3d1a-4d44-875d-185291448c31-catalog-content\") pod \"94a6e063-3d1a-4d44-875d-185291448c31\" (UID: \"94a6e063-3d1a-4d44-875d-185291448c31\") " Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.887814 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7599e0b6-bddf-4def-b7f2-0b32206e8651-config\") pod \"7599e0b6-bddf-4def-b7f2-0b32206e8651\" (UID: \"7599e0b6-bddf-4def-b7f2-0b32206e8651\") " Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.887895 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/7599e0b6-bddf-4def-b7f2-0b32206e8651-serving-cert\") pod \"7599e0b6-bddf-4def-b7f2-0b32206e8651\" (UID: \"7599e0b6-bddf-4def-b7f2-0b32206e8651\") " Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.888077 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tknt7\" (UniqueName: \"kubernetes.io/projected/584e1f4a-8205-47d7-8efb-3afc6017c4c9-kube-api-access-tknt7\") pod \"584e1f4a-8205-47d7-8efb-3afc6017c4c9\" (UID: \"584e1f4a-8205-47d7-8efb-3afc6017c4c9\") " Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.888181 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-trusted-ca-bundle\") pod \"6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca\" (UID: \"6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca\") " Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.888278 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/b4750666-1362-4001-abd0-6f89964cc621-mcc-auth-proxy-config\") pod \"b4750666-1362-4001-abd0-6f89964cc621\" (UID: \"b4750666-1362-4001-abd0-6f89964cc621\") " Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.888422 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-console-config\") pod \"6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca\" (UID: \"6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca\") " Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.890517 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/a7a88189-c967-4640-879e-27665747f20c-apiservice-cert\") pod \"a7a88189-c967-4640-879e-27665747f20c\" (UID: \"a7a88189-c967-4640-879e-27665747f20c\") " Dec 08 17:43:32 crc 
kubenswrapper[5116]: I1208 17:43:32.890919 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7jjkz\" (UniqueName: \"kubernetes.io/projected/301e1965-1754-483d-b6cc-bfae7038bbca-kube-api-access-7jjkz\") pod \"301e1965-1754-483d-b6cc-bfae7038bbca\" (UID: \"301e1965-1754-483d-b6cc-bfae7038bbca\") " Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.892320 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/92dfbade-90b6-4169-8c07-72cff7f2c82b-config-volume\") pod \"92dfbade-90b6-4169-8c07-72cff7f2c82b\" (UID: \"92dfbade-90b6-4169-8c07-72cff7f2c82b\") " Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.892378 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/6077b63e-53a2-4f96-9d56-1ce0324e4913-tmp-dir\") pod \"6077b63e-53a2-4f96-9d56-1ce0324e4913\" (UID: \"6077b63e-53a2-4f96-9d56-1ce0324e4913\") " Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.892436 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d7cps\" (UniqueName: \"kubernetes.io/projected/af41de71-79cf-4590-bbe9-9e8b848862cb-kube-api-access-d7cps\") pod \"af41de71-79cf-4590-bbe9-9e8b848862cb\" (UID: \"af41de71-79cf-4590-bbe9-9e8b848862cb\") " Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.894234 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-cliconfig\") pod \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\" (UID: \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\") " Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.894427 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp-dir\" (UniqueName: 
\"kubernetes.io/empty-dir/f65c0ac1-8bca-454d-a2e6-e35cb418beac-tmp-dir\") pod \"f65c0ac1-8bca-454d-a2e6-e35cb418beac\" (UID: \"f65c0ac1-8bca-454d-a2e6-e35cb418beac\") " Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.894584 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z5rsr\" (UniqueName: \"kubernetes.io/projected/af33e427-6803-48c2-a76a-dd9deb7cbf9a-kube-api-access-z5rsr\") pod \"af33e427-6803-48c2-a76a-dd9deb7cbf9a\" (UID: \"af33e427-6803-48c2-a76a-dd9deb7cbf9a\") " Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.894653 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/301e1965-1754-483d-b6cc-bfae7038bbca-srv-cert\") pod \"301e1965-1754-483d-b6cc-bfae7038bbca\" (UID: \"301e1965-1754-483d-b6cc-bfae7038bbca\") " Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.895226 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-audit\") pod \"d19cb085-0c5b-4810-b654-ce7923221d90\" (UID: \"d19cb085-0c5b-4810-b654-ce7923221d90\") " Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.895369 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/736c54fe-349c-4bb9-870a-d1c1d1c03831-config\") pod \"736c54fe-349c-4bb9-870a-d1c1d1c03831\" (UID: \"736c54fe-349c-4bb9-870a-d1c1d1c03831\") " Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.897141 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-session\") pod \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\" (UID: \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\") " Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.898758 5116 
reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ddlk9\" (UniqueName: \"kubernetes.io/projected/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-kube-api-access-ddlk9\") pod \"7fcc6409-8a0f-44c3-89e7-5aecd7610f8a\" (UID: \"7fcc6409-8a0f-44c3-89e7-5aecd7610f8a\") " Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.899056 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cc85e424-18b2-4924-920b-bd291a8c4b01-utilities\") pod \"cc85e424-18b2-4924-920b-bd291a8c4b01\" (UID: \"cc85e424-18b2-4924-920b-bd291a8c4b01\") " Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.900199 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b605f283-6f2e-42da-a838-54421690f7d0-catalog-content\") pod \"b605f283-6f2e-42da-a838-54421690f7d0\" (UID: \"b605f283-6f2e-42da-a838-54421690f7d0\") " Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.901104 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/a7a88189-c967-4640-879e-27665747f20c-webhook-cert\") pod \"a7a88189-c967-4640-879e-27665747f20c\" (UID: \"a7a88189-c967-4640-879e-27665747f20c\") " Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.901146 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zth6t\" (UniqueName: \"kubernetes.io/projected/6077b63e-53a2-4f96-9d56-1ce0324e4913-kube-api-access-zth6t\") pod \"6077b63e-53a2-4f96-9d56-1ce0324e4913\" (UID: \"6077b63e-53a2-4f96-9d56-1ce0324e4913\") " Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.901177 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d7e8f42f-dc0e-424b-bb56-5ec849834888-serving-cert\") pod 
\"d7e8f42f-dc0e-424b-bb56-5ec849834888\" (UID: \"d7e8f42f-dc0e-424b-bb56-5ec849834888\") " Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.902522 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xfp5s\" (UniqueName: \"kubernetes.io/projected/cc85e424-18b2-4924-920b-bd291a8c4b01-kube-api-access-xfp5s\") pod \"cc85e424-18b2-4924-920b-bd291a8c4b01\" (UID: \"cc85e424-18b2-4924-920b-bd291a8c4b01\") " Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.902574 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/d7e8f42f-dc0e-424b-bb56-5ec849834888-service-ca\") pod \"d7e8f42f-dc0e-424b-bb56-5ec849834888\" (UID: \"d7e8f42f-dc0e-424b-bb56-5ec849834888\") " Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.902607 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f65c0ac1-8bca-454d-a2e6-e35cb418beac-config\") pod \"f65c0ac1-8bca-454d-a2e6-e35cb418beac\" (UID: \"f65c0ac1-8bca-454d-a2e6-e35cb418beac\") " Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.905403 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-router-certs\") pod \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\" (UID: \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\") " Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.908355 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ks6v2\" (UniqueName: \"kubernetes.io/projected/71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a-kube-api-access-ks6v2\") pod \"71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a\" (UID: \"71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a\") " Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.908404 5116 reconciler_common.go:162] 
"operationExecutor.UnmountVolume started for volume \"kube-api-access-w94wk\" (UniqueName: \"kubernetes.io/projected/01080b46-74f1-4191-8755-5152a57b3b25-kube-api-access-w94wk\") pod \"01080b46-74f1-4191-8755-5152a57b3b25\" (UID: \"01080b46-74f1-4191-8755-5152a57b3b25\") " Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.908890 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/6ee8fbd3-1f81-4666-96da-5afc70819f1a-samples-operator-tls\") pod \"6ee8fbd3-1f81-4666-96da-5afc70819f1a\" (UID: \"6ee8fbd3-1f81-4666-96da-5afc70819f1a\") " Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.908931 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/18f80adb-c1c3-49ba-8ee4-932c851d3897-metrics-certs\") pod \"18f80adb-c1c3-49ba-8ee4-932c851d3897\" (UID: \"18f80adb-c1c3-49ba-8ee4-932c851d3897\") " Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.908957 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/9f71a554-e414-4bc3-96d2-674060397afe-bound-sa-token\") pod \"9f71a554-e414-4bc3-96d2-674060397afe\" (UID: \"9f71a554-e414-4bc3-96d2-674060397afe\") " Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.908985 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wj4qr\" (UniqueName: \"kubernetes.io/projected/149b3c48-e17c-4a66-a835-d86dabf6ff13-kube-api-access-wj4qr\") pod \"149b3c48-e17c-4a66-a835-d86dabf6ff13\" (UID: \"149b3c48-e17c-4a66-a835-d86dabf6ff13\") " Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.909015 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c5f2bfad-70f6-4185-a3d9-81ce12720767-kube-api-access\") pod 
\"c5f2bfad-70f6-4185-a3d9-81ce12720767\" (UID: \"c5f2bfad-70f6-4185-a3d9-81ce12720767\") " Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.909038 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-grwfz\" (UniqueName: \"kubernetes.io/projected/31fa8943-81cc-4750-a0b7-0fa9ab5af883-kube-api-access-grwfz\") pod \"31fa8943-81cc-4750-a0b7-0fa9ab5af883\" (UID: \"31fa8943-81cc-4750-a0b7-0fa9ab5af883\") " Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.909762 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qqbfk\" (UniqueName: \"kubernetes.io/projected/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-kube-api-access-qqbfk\") pod \"b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a\" (UID: \"b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a\") " Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.909838 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/d565531a-ff86-4608-9d19-767de01ac31b-images\") pod \"d565531a-ff86-4608-9d19-767de01ac31b\" (UID: \"d565531a-ff86-4608-9d19-767de01ac31b\") " Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.909875 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2325ffef-9d5b-447f-b00e-3efc429acefe-config\") pod \"2325ffef-9d5b-447f-b00e-3efc429acefe\" (UID: \"2325ffef-9d5b-447f-b00e-3efc429acefe\") " Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.912775 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/16bdd140-dce1-464c-ab47-dd5798d1d256-serving-cert\") pod \"16bdd140-dce1-464c-ab47-dd5798d1d256\" (UID: \"16bdd140-dce1-464c-ab47-dd5798d1d256\") " Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.912822 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume 
\"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/42a11a02-47e1-488f-b270-2679d3298b0e-control-plane-machine-set-operator-tls\") pod \"42a11a02-47e1-488f-b270-2679d3298b0e\" (UID: \"42a11a02-47e1-488f-b270-2679d3298b0e\") " Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.912850 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a555ff2e-0be6-46d5-897d-863bb92ae2b3-client-ca\") pod \"a555ff2e-0be6-46d5-897d-863bb92ae2b3\" (UID: \"a555ff2e-0be6-46d5-897d-863bb92ae2b3\") " Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.912872 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/5ebfebf6-3ecd-458e-943f-bb25b52e2718-serviceca\") pod \"5ebfebf6-3ecd-458e-943f-bb25b52e2718\" (UID: \"5ebfebf6-3ecd-458e-943f-bb25b52e2718\") " Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.912892 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-service-ca\") pod \"6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca\" (UID: \"6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca\") " Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.917062 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4g8ts\" (UniqueName: \"kubernetes.io/projected/92dfbade-90b6-4169-8c07-72cff7f2c82b-kube-api-access-4g8ts\") pod \"92dfbade-90b6-4169-8c07-72cff7f2c82b\" (UID: \"92dfbade-90b6-4169-8c07-72cff7f2c82b\") " Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.917126 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/736c54fe-349c-4bb9-870a-d1c1d1c03831-serving-cert\") pod \"736c54fe-349c-4bb9-870a-d1c1d1c03831\" (UID: \"736c54fe-349c-4bb9-870a-d1c1d1c03831\") 
" Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.917155 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ftwb6\" (UniqueName: \"kubernetes.io/projected/9f71a554-e414-4bc3-96d2-674060397afe-kube-api-access-ftwb6\") pod \"9f71a554-e414-4bc3-96d2-674060397afe\" (UID: \"9f71a554-e414-4bc3-96d2-674060397afe\") " Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.925961 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-srv-cert\") pod \"7fcc6409-8a0f-44c3-89e7-5aecd7610f8a\" (UID: \"7fcc6409-8a0f-44c3-89e7-5aecd7610f8a\") " Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.926483 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/6077b63e-53a2-4f96-9d56-1ce0324e4913-metrics-tls\") pod \"6077b63e-53a2-4f96-9d56-1ce0324e4913\" (UID: \"6077b63e-53a2-4f96-9d56-1ce0324e4913\") " Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.927286 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-tmp\") pod \"d45be74c-0d98-4d18-90e4-f7ef1b6daaf7\" (UID: \"d45be74c-0d98-4d18-90e4-f7ef1b6daaf7\") " Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.927370 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-config\") pod \"dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9\" (UID: \"dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9\") " Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.927424 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c491984c-7d4b-44aa-8c1e-d7974424fa47-config\") pod 
\"c491984c-7d4b-44aa-8c1e-d7974424fa47\" (UID: \"c491984c-7d4b-44aa-8c1e-d7974424fa47\") " Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.927462 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-service-ca-bundle\") pod \"dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9\" (UID: \"dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9\") " Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.927510 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/c5f2bfad-70f6-4185-a3d9-81ce12720767-tmp-dir\") pod \"c5f2bfad-70f6-4185-a3d9-81ce12720767\" (UID: \"c5f2bfad-70f6-4185-a3d9-81ce12720767\") " Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.927551 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-template-error\") pod \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\" (UID: \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\") " Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.927594 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8nspp\" (UniqueName: \"kubernetes.io/projected/a7a88189-c967-4640-879e-27665747f20c-kube-api-access-8nspp\") pod \"a7a88189-c967-4640-879e-27665747f20c\" (UID: \"a7a88189-c967-4640-879e-27665747f20c\") " Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.927631 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/9e9b5059-1b3e-4067-a63d-2952cbe863af-ca-trust-extracted\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.927760 5116 reconciler_common.go:162] 
"operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/f559dfa3-3917-43a2-97f6-61ddfda10e93-audit-policies\") pod \"f559dfa3-3917-43a2-97f6-61ddfda10e93\" (UID: \"f559dfa3-3917-43a2-97f6-61ddfda10e93\") " Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.927807 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a555ff2e-0be6-46d5-897d-863bb92ae2b3-config\") pod \"a555ff2e-0be6-46d5-897d-863bb92ae2b3\" (UID: \"a555ff2e-0be6-46d5-897d-863bb92ae2b3\") " Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.927616 5116 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3fd44def-9e26-444f-aaf8-36eb0c152d06\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:42:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:42:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:42:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:42:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:42:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://786bf1bffbc8384fbac1d3048a0cce2f4931695695401a62ea918d04f8869ba7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b
3439334f760b35b0729f2fb00\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T17:42:12Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"60m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://dbd177b43687887cb390c8c11a09d2c831ab72e0cd7faa9ffbf86ab90e577e90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"60m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T17:42:11Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir
\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://9db05f79480a5c8307623409d012e3ac81c52e8b0e7fc208104cf8698592ae4b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T17:42:13Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://4746da12a986369cfdb899ff25e77cd7f19e6cab7cd0c286ae0a16f44e498439\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"lastState\\\"
:{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T17:42:13Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T17:42:10Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.928114 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/e1d2a42d-af1d-4054-9618-ab545e0ed8b7-mcd-auth-proxy-config\") pod \"e1d2a42d-af1d-4054-9618-ab545e0ed8b7\" (UID: \"e1d2a42d-af1d-4054-9618-ab545e0ed8b7\") " Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.928716 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: 
\"kubernetes.io/secret/f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4-metrics-certs\") pod \"f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4\" (UID: \"f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4\") " Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.928759 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sbc2l\" (UniqueName: \"kubernetes.io/projected/593a3561-7760-45c5-8f91-5aaef7475d0f-kube-api-access-sbc2l\") pod \"593a3561-7760-45c5-8f91-5aaef7475d0f\" (UID: \"593a3561-7760-45c5-8f91-5aaef7475d0f\") " Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.928790 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2325ffef-9d5b-447f-b00e-3efc429acefe-serving-cert\") pod \"2325ffef-9d5b-447f-b00e-3efc429acefe\" (UID: \"2325ffef-9d5b-447f-b00e-3efc429acefe\") " Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.928830 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/593a3561-7760-45c5-8f91-5aaef7475d0f-node-bootstrap-token\") pod \"593a3561-7760-45c5-8f91-5aaef7475d0f\" (UID: \"593a3561-7760-45c5-8f91-5aaef7475d0f\") " Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.928861 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/301e1965-1754-483d-b6cc-bfae7038bbca-profile-collector-cert\") pod \"301e1965-1754-483d-b6cc-bfae7038bbca\" (UID: \"301e1965-1754-483d-b6cc-bfae7038bbca\") " Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.928886 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-etcd-serving-ca\") pod \"d19cb085-0c5b-4810-b654-ce7923221d90\" (UID: \"d19cb085-0c5b-4810-b654-ce7923221d90\") " Dec 08 17:43:32 crc 
kubenswrapper[5116]: I1208 17:43:32.928912 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/f7e2c886-118e-43bb-bef1-c78134de392b-tmp-dir\") pod \"f7e2c886-118e-43bb-bef1-c78134de392b\" (UID: \"f7e2c886-118e-43bb-bef1-c78134de392b\") " Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.928936 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01080b46-74f1-4191-8755-5152a57b3b25-config\") pod \"01080b46-74f1-4191-8755-5152a57b3b25\" (UID: \"01080b46-74f1-4191-8755-5152a57b3b25\") " Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.928960 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01080b46-74f1-4191-8755-5152a57b3b25-serving-cert\") pod \"01080b46-74f1-4191-8755-5152a57b3b25\" (UID: \"01080b46-74f1-4191-8755-5152a57b3b25\") " Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.929001 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff-catalog-content\") pod \"9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff\" (UID: \"9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff\") " Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.929036 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tkdh6\" (UniqueName: \"kubernetes.io/projected/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-kube-api-access-tkdh6\") pod \"20ce4d18-fe25-4696-ad7c-1bd2d6200a3e\" (UID: \"20ce4d18-fe25-4696-ad7c-1bd2d6200a3e\") " Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.929062 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/567683bd-0efc-4f21-b076-e28559628404-etcd-service-ca\") pod 
\"567683bd-0efc-4f21-b076-e28559628404\" (UID: \"567683bd-0efc-4f21-b076-e28559628404\") " Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.929085 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/f559dfa3-3917-43a2-97f6-61ddfda10e93-encryption-config\") pod \"f559dfa3-3917-43a2-97f6-61ddfda10e93\" (UID: \"f559dfa3-3917-43a2-97f6-61ddfda10e93\") " Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.929107 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-trusted-ca-bundle\") pod \"dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9\" (UID: \"dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9\") " Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.929128 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/a52afe44-fb37-46ed-a1f8-bf39727a3cbe-cert\") pod \"a52afe44-fb37-46ed-a1f8-bf39727a3cbe\" (UID: \"a52afe44-fb37-46ed-a1f8-bf39727a3cbe\") " Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.929150 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qgrkj\" (UniqueName: \"kubernetes.io/projected/42a11a02-47e1-488f-b270-2679d3298b0e-kube-api-access-qgrkj\") pod \"42a11a02-47e1-488f-b270-2679d3298b0e\" (UID: \"42a11a02-47e1-488f-b270-2679d3298b0e\") " Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.929181 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/18f80adb-c1c3-49ba-8ee4-932c851d3897-default-certificate\") pod \"18f80adb-c1c3-49ba-8ee4-932c851d3897\" (UID: \"18f80adb-c1c3-49ba-8ee4-932c851d3897\") " Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.929204 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume 
started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-bound-sa-token\") pod \"20ce4d18-fe25-4696-ad7c-1bd2d6200a3e\" (UID: \"20ce4d18-fe25-4696-ad7c-1bd2d6200a3e\") " Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.929224 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/31fa8943-81cc-4750-a0b7-0fa9ab5af883-utilities\") pod \"31fa8943-81cc-4750-a0b7-0fa9ab5af883\" (UID: \"31fa8943-81cc-4750-a0b7-0fa9ab5af883\") " Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.929271 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8pskd\" (UniqueName: \"kubernetes.io/projected/a555ff2e-0be6-46d5-897d-863bb92ae2b3-kube-api-access-8pskd\") pod \"a555ff2e-0be6-46d5-897d-863bb92ae2b3\" (UID: \"a555ff2e-0be6-46d5-897d-863bb92ae2b3\") " Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.929435 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mjwtd\" (UniqueName: \"kubernetes.io/projected/869851b9-7ffb-4af0-b166-1d8aa40a5f80-kube-api-access-mjwtd\") pod \"869851b9-7ffb-4af0-b166-1d8aa40a5f80\" (UID: \"869851b9-7ffb-4af0-b166-1d8aa40a5f80\") " Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.929506 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted-pem\" (UniqueName: \"kubernetes.io/empty-dir/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-ca-trust-extracted-pem\") pod \"20ce4d18-fe25-4696-ad7c-1bd2d6200a3e\" (UID: \"20ce4d18-fe25-4696-ad7c-1bd2d6200a3e\") " Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.929542 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/a555ff2e-0be6-46d5-897d-863bb92ae2b3-proxy-ca-bundles\") pod \"a555ff2e-0be6-46d5-897d-863bb92ae2b3\" (UID: 
\"a555ff2e-0be6-46d5-897d-863bb92ae2b3\") " Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.929577 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-serving-cert\") pod \"dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9\" (UID: \"dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9\") " Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.929613 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/7df94c10-441d-4386-93a6-6730fb7bcde0-ovnkube-config\") pod \"7df94c10-441d-4386-93a6-6730fb7bcde0\" (UID: \"7df94c10-441d-4386-93a6-6730fb7bcde0\") " Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.929645 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mfzkj\" (UniqueName: \"kubernetes.io/projected/0effdbcf-dd7d-404d-9d48-77536d665a5d-kube-api-access-mfzkj\") pod \"0effdbcf-dd7d-404d-9d48-77536d665a5d\" (UID: \"0effdbcf-dd7d-404d-9d48-77536d665a5d\") " Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.929681 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nmmzf\" (UniqueName: \"kubernetes.io/projected/7df94c10-441d-4386-93a6-6730fb7bcde0-kube-api-access-nmmzf\") pod \"7df94c10-441d-4386-93a6-6730fb7bcde0\" (UID: \"7df94c10-441d-4386-93a6-6730fb7bcde0\") " Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.929715 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f65c0ac1-8bca-454d-a2e6-e35cb418beac-serving-cert\") pod \"f65c0ac1-8bca-454d-a2e6-e35cb418beac\" (UID: \"f65c0ac1-8bca-454d-a2e6-e35cb418beac\") " Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.929754 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" 
(UniqueName: \"kubernetes.io/configmap/2325ffef-9d5b-447f-b00e-3efc429acefe-trusted-ca\") pod \"2325ffef-9d5b-447f-b00e-3efc429acefe\" (UID: \"2325ffef-9d5b-447f-b00e-3efc429acefe\") " Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.929786 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/18f80adb-c1c3-49ba-8ee4-932c851d3897-stats-auth\") pod \"18f80adb-c1c3-49ba-8ee4-932c851d3897\" (UID: \"18f80adb-c1c3-49ba-8ee4-932c851d3897\") " Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.929817 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/7afa918d-be67-40a6-803c-d3b0ae99d815-tmp\") pod \"7afa918d-be67-40a6-803c-d3b0ae99d815\" (UID: \"7afa918d-be67-40a6-803c-d3b0ae99d815\") " Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.929848 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-tmpfs\") pod \"7fcc6409-8a0f-44c3-89e7-5aecd7610f8a\" (UID: \"7fcc6409-8a0f-44c3-89e7-5aecd7610f8a\") " Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.929877 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-26xrl\" (UniqueName: \"kubernetes.io/projected/a208c9c2-333b-4b4a-be0d-bc32ec38a821-kube-api-access-26xrl\") pod \"a208c9c2-333b-4b4a-be0d-bc32ec38a821\" (UID: \"a208c9c2-333b-4b4a-be0d-bc32ec38a821\") " Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.929909 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-auth-proxy-config\") pod \"fc8db2c7-859d-47b3-a900-2bd0c0b2973b\" (UID: \"fc8db2c7-859d-47b3-a900-2bd0c0b2973b\") " Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.929940 5116 
reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/869851b9-7ffb-4af0-b166-1d8aa40a5f80-cni-binary-copy\") pod \"869851b9-7ffb-4af0-b166-1d8aa40a5f80\" (UID: \"869851b9-7ffb-4af0-b166-1d8aa40a5f80\") " Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.929972 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l87hs\" (UniqueName: \"kubernetes.io/projected/5ebfebf6-3ecd-458e-943f-bb25b52e2718-kube-api-access-l87hs\") pod \"5ebfebf6-3ecd-458e-943f-bb25b52e2718\" (UID: \"5ebfebf6-3ecd-458e-943f-bb25b52e2718\") " Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.930001 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4hb7m\" (UniqueName: \"kubernetes.io/projected/94a6e063-3d1a-4d44-875d-185291448c31-kube-api-access-4hb7m\") pod \"94a6e063-3d1a-4d44-875d-185291448c31\" (UID: \"94a6e063-3d1a-4d44-875d-185291448c31\") " Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.930031 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a555ff2e-0be6-46d5-897d-863bb92ae2b3-serving-cert\") pod \"a555ff2e-0be6-46d5-897d-863bb92ae2b3\" (UID: \"a555ff2e-0be6-46d5-897d-863bb92ae2b3\") " Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.930079 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6rmnv\" (UniqueName: \"kubernetes.io/projected/b605f283-6f2e-42da-a838-54421690f7d0-kube-api-access-6rmnv\") pod \"b605f283-6f2e-42da-a838-54421690f7d0\" (UID: \"b605f283-6f2e-42da-a838-54421690f7d0\") " Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.930115 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a-utilities\") pod 
\"71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a\" (UID: \"71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a\") " Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.930144 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d7e8f42f-dc0e-424b-bb56-5ec849834888-kube-api-access\") pod \"d7e8f42f-dc0e-424b-bb56-5ec849834888\" (UID: \"d7e8f42f-dc0e-424b-bb56-5ec849834888\") " Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.930173 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pddnv\" (UniqueName: \"kubernetes.io/projected/e093be35-bb62-4843-b2e8-094545761610-kube-api-access-pddnv\") pod \"e093be35-bb62-4843-b2e8-094545761610\" (UID: \"e093be35-bb62-4843-b2e8-094545761610\") " Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.930205 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7afa918d-be67-40a6-803c-d3b0ae99d815-serving-cert\") pod \"7afa918d-be67-40a6-803c-d3b0ae99d815\" (UID: \"7afa918d-be67-40a6-803c-d3b0ae99d815\") " Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.930267 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-config\") pod \"d19cb085-0c5b-4810-b654-ce7923221d90\" (UID: \"d19cb085-0c5b-4810-b654-ce7923221d90\") " Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.930323 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/a7a88189-c967-4640-879e-27665747f20c-tmpfs\") pod \"a7a88189-c967-4640-879e-27665747f20c\" (UID: \"a7a88189-c967-4640-879e-27665747f20c\") " Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.930417 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume 
\"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a-catalog-content\") pod \"71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a\" (UID: \"71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a\") " Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.930475 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ptkcf\" (UniqueName: \"kubernetes.io/projected/7599e0b6-bddf-4def-b7f2-0b32206e8651-kube-api-access-ptkcf\") pod \"7599e0b6-bddf-4def-b7f2-0b32206e8651\" (UID: \"7599e0b6-bddf-4def-b7f2-0b32206e8651\") " Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.930529 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ws8zz\" (UniqueName: \"kubernetes.io/projected/9e9b5059-1b3e-4067-a63d-2952cbe863af-kube-api-access-ws8zz\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.930577 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/e1d2a42d-af1d-4054-9618-ab545e0ed8b7-proxy-tls\") pod \"e1d2a42d-af1d-4054-9618-ab545e0ed8b7\" (UID: \"e1d2a42d-af1d-4054-9618-ab545e0ed8b7\") " Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.930635 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m26jq\" (UniqueName: \"kubernetes.io/projected/567683bd-0efc-4f21-b076-e28559628404-kube-api-access-m26jq\") pod \"567683bd-0efc-4f21-b076-e28559628404\" (UID: \"567683bd-0efc-4f21-b076-e28559628404\") " Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.930708 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/301e1965-1754-483d-b6cc-bfae7038bbca-tmpfs\") pod \"301e1965-1754-483d-b6cc-bfae7038bbca\" (UID: \"301e1965-1754-483d-b6cc-bfae7038bbca\") " Dec 08 
17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.930772 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q4smf\" (UniqueName: \"kubernetes.io/projected/0dd0fbac-8c0d-4228-8faa-abbeedabf7db-kube-api-access-q4smf\") pod \"0dd0fbac-8c0d-4228-8faa-abbeedabf7db\" (UID: \"0dd0fbac-8c0d-4228-8faa-abbeedabf7db\") " Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.930813 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-tmp\") pod \"b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a\" (UID: \"b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a\") " Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.930924 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9f71a554-e414-4bc3-96d2-674060397afe-trusted-ca\") pod \"9f71a554-e414-4bc3-96d2-674060397afe\" (UID: \"9f71a554-e414-4bc3-96d2-674060397afe\") " Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.931010 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-idp-0-file-data\") pod \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\" (UID: \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\") " Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.886466 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a208c9c2-333b-4b4a-be0d-bc32ec38a821-package-server-manager-serving-cert" (OuterVolumeSpecName: "package-server-manager-serving-cert") pod "a208c9c2-333b-4b4a-be0d-bc32ec38a821" (UID: "a208c9c2-333b-4b4a-be0d-bc32ec38a821"). InnerVolumeSpecName "package-server-manager-serving-cert". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.886724 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6edfcf45-925b-4eff-b940-95b6fc0b85d4-kube-api-access-8nb9c" (OuterVolumeSpecName: "kube-api-access-8nb9c") pod "6edfcf45-925b-4eff-b940-95b6fc0b85d4" (UID: "6edfcf45-925b-4eff-b940-95b6fc0b85d4"). InnerVolumeSpecName "kube-api-access-8nb9c". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.886813 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/94a6e063-3d1a-4d44-875d-185291448c31-utilities" (OuterVolumeSpecName: "utilities") pod "94a6e063-3d1a-4d44-875d-185291448c31" (UID: "94a6e063-3d1a-4d44-875d-185291448c31"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.931557 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "7fcc6409-8a0f-44c3-89e7-5aecd7610f8a" (UID: "7fcc6409-8a0f-44c3-89e7-5aecd7610f8a"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.931626 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/736c54fe-349c-4bb9-870a-d1c1d1c03831-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "736c54fe-349c-4bb9-870a-d1c1d1c03831" (UID: "736c54fe-349c-4bb9-870a-d1c1d1c03831"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.931673 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/567683bd-0efc-4f21-b076-e28559628404-etcd-client\") pod \"567683bd-0efc-4f21-b076-e28559628404\" (UID: \"567683bd-0efc-4f21-b076-e28559628404\") " Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.933862 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9z4sw\" (UniqueName: \"kubernetes.io/projected/e1d2a42d-af1d-4054-9618-ab545e0ed8b7-kube-api-access-9z4sw\") pod \"e1d2a42d-af1d-4054-9618-ab545e0ed8b7\" (UID: \"e1d2a42d-af1d-4054-9618-ab545e0ed8b7\") " Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.933913 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/7df94c10-441d-4386-93a6-6730fb7bcde0-ovn-control-plane-metrics-cert\") pod \"7df94c10-441d-4386-93a6-6730fb7bcde0\" (UID: \"7df94c10-441d-4386-93a6-6730fb7bcde0\") " Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.933950 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-console-oauth-config\") pod \"6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca\" (UID: \"6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca\") " Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.933983 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/81e39f7b-62e4-4fc9-992a-6535ce127a02-cni-binary-copy\") pod \"81e39f7b-62e4-4fc9-992a-6535ce127a02\" (UID: \"81e39f7b-62e4-4fc9-992a-6535ce127a02\") " Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.934021 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume 
started for volume \"kube-api-access-m5lgh\" (UniqueName: \"kubernetes.io/projected/d19cb085-0c5b-4810-b654-ce7923221d90-kube-api-access-m5lgh\") pod \"d19cb085-0c5b-4810-b654-ce7923221d90\" (UID: \"d19cb085-0c5b-4810-b654-ce7923221d90\") " Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.934053 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-template-login\") pod \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\" (UID: \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\") " Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.934083 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-serving-cert\") pod \"d45be74c-0d98-4d18-90e4-f7ef1b6daaf7\" (UID: \"d45be74c-0d98-4d18-90e4-f7ef1b6daaf7\") " Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.934123 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-template-provider-selection\") pod \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\" (UID: \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\") " Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.934156 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/584e1f4a-8205-47d7-8efb-3afc6017c4c9-catalog-content\") pod \"584e1f4a-8205-47d7-8efb-3afc6017c4c9\" (UID: \"584e1f4a-8205-47d7-8efb-3afc6017c4c9\") " Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.934185 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9e9b5059-1b3e-4067-a63d-2952cbe863af-trusted-ca\") pod 
\"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.934225 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/869851b9-7ffb-4af0-b166-1d8aa40a5f80-cni-sysctl-allowlist\") pod \"869851b9-7ffb-4af0-b166-1d8aa40a5f80\" (UID: \"869851b9-7ffb-4af0-b166-1d8aa40a5f80\") " Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.934319 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/7afa918d-be67-40a6-803c-d3b0ae99d815-kube-api-access\") pod \"7afa918d-be67-40a6-803c-d3b0ae99d815\" (UID: \"7afa918d-be67-40a6-803c-d3b0ae99d815\") " Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.934384 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/af33e427-6803-48c2-a76a-dd9deb7cbf9a-env-overrides\") pod \"af33e427-6803-48c2-a76a-dd9deb7cbf9a\" (UID: \"af33e427-6803-48c2-a76a-dd9deb7cbf9a\") " Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.934413 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-99zj9\" (UniqueName: \"kubernetes.io/projected/d565531a-ff86-4608-9d19-767de01ac31b-kube-api-access-99zj9\") pod \"d565531a-ff86-4608-9d19-767de01ac31b\" (UID: \"d565531a-ff86-4608-9d19-767de01ac31b\") " Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.934443 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7afa918d-be67-40a6-803c-d3b0ae99d815-config\") pod \"7afa918d-be67-40a6-803c-d3b0ae99d815\" (UID: \"7afa918d-be67-40a6-803c-d3b0ae99d815\") " Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.934469 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume 
started for volume \"kube-api-access-hm9x7\" (UniqueName: \"kubernetes.io/projected/f559dfa3-3917-43a2-97f6-61ddfda10e93-kube-api-access-hm9x7\") pod \"f559dfa3-3917-43a2-97f6-61ddfda10e93\" (UID: \"f559dfa3-3917-43a2-97f6-61ddfda10e93\") " Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.934499 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/567683bd-0efc-4f21-b076-e28559628404-serving-cert\") pod \"567683bd-0efc-4f21-b076-e28559628404\" (UID: \"567683bd-0efc-4f21-b076-e28559628404\") " Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.934529 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b605f283-6f2e-42da-a838-54421690f7d0-utilities\") pod \"b605f283-6f2e-42da-a838-54421690f7d0\" (UID: \"b605f283-6f2e-42da-a838-54421690f7d0\") " Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.934556 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-ocp-branding-template\") pod \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\" (UID: \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\") " Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.934585 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c5f2bfad-70f6-4185-a3d9-81ce12720767-config\") pod \"c5f2bfad-70f6-4185-a3d9-81ce12720767\" (UID: \"c5f2bfad-70f6-4185-a3d9-81ce12720767\") " Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.934616 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/c491984c-7d4b-44aa-8c1e-d7974424fa47-images\") pod \"c491984c-7d4b-44aa-8c1e-d7974424fa47\" (UID: 
\"c491984c-7d4b-44aa-8c1e-d7974424fa47\") " Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.934646 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/584e1f4a-8205-47d7-8efb-3afc6017c4c9-utilities\") pod \"584e1f4a-8205-47d7-8efb-3afc6017c4c9\" (UID: \"584e1f4a-8205-47d7-8efb-3afc6017c4c9\") " Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.934676 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/16bdd140-dce1-464c-ab47-dd5798d1d256-available-featuregates\") pod \"16bdd140-dce1-464c-ab47-dd5798d1d256\" (UID: \"16bdd140-dce1-464c-ab47-dd5798d1d256\") " Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.934764 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/17cf2230-8798-4fb0-b89b-43901121fd07-host-cni-bin\") pod \"ovnkube-node-zm56h\" (UID: \"17cf2230-8798-4fb0-b89b-43901121fd07\") " pod="openshift-ovn-kubernetes/ovnkube-node-zm56h" Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.934793 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/f2e88345-fa91-4bb3-bd9d-a89a8293bffe-rootfs\") pod \"machine-config-daemon-frh5r\" (UID: \"f2e88345-fa91-4bb3-bd9d-a89a8293bffe\") " pod="openshift-machine-config-operator/machine-config-daemon-frh5r" Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.934833 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lbpbq\" (UniqueName: \"kubernetes.io/projected/f2e88345-fa91-4bb3-bd9d-a89a8293bffe-kube-api-access-lbpbq\") pod \"machine-config-daemon-frh5r\" (UID: \"f2e88345-fa91-4bb3-bd9d-a89a8293bffe\") " 
pod="openshift-machine-config-operator/machine-config-daemon-frh5r" Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.934870 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/51c251ea-ee75-4ef8-be21-e45ffbd0c2b3-serviceca\") pod \"node-ca-ps59m\" (UID: \"51c251ea-ee75-4ef8-be21-e45ffbd0c2b3\") " pod="openshift-image-registry/node-ca-ps59m" Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.934893 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/17cf2230-8798-4fb0-b89b-43901121fd07-var-lib-openvswitch\") pod \"ovnkube-node-zm56h\" (UID: \"17cf2230-8798-4fb0-b89b-43901121fd07\") " pod="openshift-ovn-kubernetes/ovnkube-node-zm56h" Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.934911 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/17cf2230-8798-4fb0-b89b-43901121fd07-run-ovn\") pod \"ovnkube-node-zm56h\" (UID: \"17cf2230-8798-4fb0-b89b-43901121fd07\") " pod="openshift-ovn-kubernetes/ovnkube-node-zm56h" Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.934933 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/f2e88345-fa91-4bb3-bd9d-a89a8293bffe-proxy-tls\") pod \"machine-config-daemon-frh5r\" (UID: \"f2e88345-fa91-4bb3-bd9d-a89a8293bffe\") " pod="openshift-machine-config-operator/machine-config-daemon-frh5r" Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.932551 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod 
"6edfcf45-925b-4eff-b940-95b6fc0b85d4" (UID: "6edfcf45-925b-4eff-b940-95b6fc0b85d4"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.932572 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f559dfa3-3917-43a2-97f6-61ddfda10e93-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "f559dfa3-3917-43a2-97f6-61ddfda10e93" (UID: "f559dfa3-3917-43a2-97f6-61ddfda10e93"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.934969 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ce090a97-9ab6-4c40-a719-64ff2acd9778-kube-api-access-xnxbn" (OuterVolumeSpecName: "kube-api-access-xnxbn") pod "ce090a97-9ab6-4c40-a719-64ff2acd9778" (UID: "ce090a97-9ab6-4c40-a719-64ff2acd9778"). InnerVolumeSpecName "kube-api-access-xnxbn". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.888306 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9e9b5059-1b3e-4067-a63d-2952cbe863af-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.888916 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "6edfcf45-925b-4eff-b940-95b6fc0b85d4" (UID: "6edfcf45-925b-4eff-b940-95b6fc0b85d4"). InnerVolumeSpecName "v4-0-config-system-service-ca". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.889682 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a7a88189-c967-4640-879e-27665747f20c-apiservice-cert" (OuterVolumeSpecName: "apiservice-cert") pod "a7a88189-c967-4640-879e-27665747f20c" (UID: "a7a88189-c967-4640-879e-27665747f20c"). InnerVolumeSpecName "apiservice-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.889805 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a" (UID: "b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.890668 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7df94c10-441d-4386-93a6-6730fb7bcde0-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "7df94c10-441d-4386-93a6-6730fb7bcde0" (UID: "7df94c10-441d-4386-93a6-6730fb7bcde0"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.890851 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-console-config" (OuterVolumeSpecName: "console-config") pod "6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca" (UID: "6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca"). InnerVolumeSpecName "console-config". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.935314 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d7e8f42f-dc0e-424b-bb56-5ec849834888-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "d7e8f42f-dc0e-424b-bb56-5ec849834888" (UID: "d7e8f42f-dc0e-424b-bb56-5ec849834888"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.935319 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f559dfa3-3917-43a2-97f6-61ddfda10e93-kube-api-access-hm9x7" (OuterVolumeSpecName: "kube-api-access-hm9x7") pod "f559dfa3-3917-43a2-97f6-61ddfda10e93" (UID: "f559dfa3-3917-43a2-97f6-61ddfda10e93"). InnerVolumeSpecName "kube-api-access-hm9x7". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.892277 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff-utilities" (OuterVolumeSpecName: "utilities") pod "9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff" (UID: "9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.890493 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7599e0b6-bddf-4def-b7f2-0b32206e8651-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7599e0b6-bddf-4def-b7f2-0b32206e8651" (UID: "7599e0b6-bddf-4def-b7f2-0b32206e8651"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGIDValue ""
Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.892533 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d565531a-ff86-4608-9d19-767de01ac31b-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "d565531a-ff86-4608-9d19-767de01ac31b" (UID: "d565531a-ff86-4608-9d19-767de01ac31b"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.892566 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/584e1f4a-8205-47d7-8efb-3afc6017c4c9-kube-api-access-tknt7" (OuterVolumeSpecName: "kube-api-access-tknt7") pod "584e1f4a-8205-47d7-8efb-3afc6017c4c9" (UID: "584e1f4a-8205-47d7-8efb-3afc6017c4c9"). InnerVolumeSpecName "kube-api-access-tknt7". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.893008 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/af41de71-79cf-4590-bbe9-9e8b848862cb-kube-api-access-d7cps" (OuterVolumeSpecName: "kube-api-access-d7cps") pod "af41de71-79cf-4590-bbe9-9e8b848862cb" (UID: "af41de71-79cf-4590-bbe9-9e8b848862cb"). InnerVolumeSpecName "kube-api-access-d7cps". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.893033 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca" (UID: "6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.891732 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/301e1965-1754-483d-b6cc-bfae7038bbca-kube-api-access-7jjkz" (OuterVolumeSpecName: "kube-api-access-7jjkz") pod "301e1965-1754-483d-b6cc-bfae7038bbca" (UID: "301e1965-1754-483d-b6cc-bfae7038bbca"). InnerVolumeSpecName "kube-api-access-7jjkz". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.935496 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff" (UID: "9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.894926 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/149b3c48-e17c-4a66-a835-d86dabf6ff13-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "149b3c48-e17c-4a66-a835-d86dabf6ff13" (UID: "149b3c48-e17c-4a66-a835-d86dabf6ff13"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.896050 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-audit" (OuterVolumeSpecName: "audit") pod "d19cb085-0c5b-4810-b654-ce7923221d90" (UID: "d19cb085-0c5b-4810-b654-ce7923221d90"). InnerVolumeSpecName "audit". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.896121 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f65c0ac1-8bca-454d-a2e6-e35cb418beac-tmp-dir" (OuterVolumeSpecName: "tmp-dir") pod "f65c0ac1-8bca-454d-a2e6-e35cb418beac" (UID: "f65c0ac1-8bca-454d-a2e6-e35cb418beac"). InnerVolumeSpecName "tmp-dir". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.897049 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/af33e427-6803-48c2-a76a-dd9deb7cbf9a-kube-api-access-z5rsr" (OuterVolumeSpecName: "kube-api-access-z5rsr") pod "af33e427-6803-48c2-a76a-dd9deb7cbf9a" (UID: "af33e427-6803-48c2-a76a-dd9deb7cbf9a"). InnerVolumeSpecName "kube-api-access-z5rsr". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.897836 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "6edfcf45-925b-4eff-b940-95b6fc0b85d4" (UID: "6edfcf45-925b-4eff-b940-95b6fc0b85d4"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.898700 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/301e1965-1754-483d-b6cc-bfae7038bbca-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "301e1965-1754-483d-b6cc-bfae7038bbca" (UID: "301e1965-1754-483d-b6cc-bfae7038bbca"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.900075 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "6edfcf45-925b-4eff-b940-95b6fc0b85d4" (UID: "6edfcf45-925b-4eff-b940-95b6fc0b85d4"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.900507 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-kube-api-access-ddlk9" (OuterVolumeSpecName: "kube-api-access-ddlk9") pod "7fcc6409-8a0f-44c3-89e7-5aecd7610f8a" (UID: "7fcc6409-8a0f-44c3-89e7-5aecd7610f8a"). InnerVolumeSpecName "kube-api-access-ddlk9". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.900774 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b605f283-6f2e-42da-a838-54421690f7d0-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b605f283-6f2e-42da-a838-54421690f7d0" (UID: "b605f283-6f2e-42da-a838-54421690f7d0"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.901852 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d7e8f42f-dc0e-424b-bb56-5ec849834888-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "d7e8f42f-dc0e-424b-bb56-5ec849834888" (UID: "d7e8f42f-dc0e-424b-bb56-5ec849834888"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.902401 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6077b63e-53a2-4f96-9d56-1ce0324e4913-kube-api-access-zth6t" (OuterVolumeSpecName: "kube-api-access-zth6t") pod "6077b63e-53a2-4f96-9d56-1ce0324e4913" (UID: "6077b63e-53a2-4f96-9d56-1ce0324e4913"). InnerVolumeSpecName "kube-api-access-zth6t". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.902983 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a7a88189-c967-4640-879e-27665747f20c-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "a7a88189-c967-4640-879e-27665747f20c" (UID: "a7a88189-c967-4640-879e-27665747f20c"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.903546 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cc85e424-18b2-4924-920b-bd291a8c4b01-kube-api-access-xfp5s" (OuterVolumeSpecName: "kube-api-access-xfp5s") pod "cc85e424-18b2-4924-920b-bd291a8c4b01" (UID: "cc85e424-18b2-4924-920b-bd291a8c4b01"). InnerVolumeSpecName "kube-api-access-xfp5s". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.905364 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/31fa8943-81cc-4750-a0b7-0fa9ab5af883-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "31fa8943-81cc-4750-a0b7-0fa9ab5af883" (UID: "31fa8943-81cc-4750-a0b7-0fa9ab5af883"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.906881 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a-kube-api-access-ks6v2" (OuterVolumeSpecName: "kube-api-access-ks6v2") pod "71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a" (UID: "71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a"). InnerVolumeSpecName "kube-api-access-ks6v2". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.907159 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "6edfcf45-925b-4eff-b940-95b6fc0b85d4" (UID: "6edfcf45-925b-4eff-b940-95b6fc0b85d4"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.908321 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d7e8f42f-dc0e-424b-bb56-5ec849834888-service-ca" (OuterVolumeSpecName: "service-ca") pod "d7e8f42f-dc0e-424b-bb56-5ec849834888" (UID: "d7e8f42f-dc0e-424b-bb56-5ec849834888"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.908337 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f65c0ac1-8bca-454d-a2e6-e35cb418beac-config" (OuterVolumeSpecName: "config") pod "f65c0ac1-8bca-454d-a2e6-e35cb418beac" (UID: "f65c0ac1-8bca-454d-a2e6-e35cb418beac"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.935695 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/567683bd-0efc-4f21-b076-e28559628404-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "567683bd-0efc-4f21-b076-e28559628404" (UID: "567683bd-0efc-4f21-b076-e28559628404"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.908842 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/01080b46-74f1-4191-8755-5152a57b3b25-kube-api-access-w94wk" (OuterVolumeSpecName: "kube-api-access-w94wk") pod "01080b46-74f1-4191-8755-5152a57b3b25" (UID: "01080b46-74f1-4191-8755-5152a57b3b25"). InnerVolumeSpecName "kube-api-access-w94wk". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.910437 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9f71a554-e414-4bc3-96d2-674060397afe-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "9f71a554-e414-4bc3-96d2-674060397afe" (UID: "9f71a554-e414-4bc3-96d2-674060397afe"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.910895 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/149b3c48-e17c-4a66-a835-d86dabf6ff13-kube-api-access-wj4qr" (OuterVolumeSpecName: "kube-api-access-wj4qr") pod "149b3c48-e17c-4a66-a835-d86dabf6ff13" (UID: "149b3c48-e17c-4a66-a835-d86dabf6ff13"). InnerVolumeSpecName "kube-api-access-wj4qr". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.911631 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/31fa8943-81cc-4750-a0b7-0fa9ab5af883-kube-api-access-grwfz" (OuterVolumeSpecName: "kube-api-access-grwfz") pod "31fa8943-81cc-4750-a0b7-0fa9ab5af883" (UID: "31fa8943-81cc-4750-a0b7-0fa9ab5af883"). InnerVolumeSpecName "kube-api-access-grwfz". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.911742 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6ee8fbd3-1f81-4666-96da-5afc70819f1a-samples-operator-tls" (OuterVolumeSpecName: "samples-operator-tls") pod "6ee8fbd3-1f81-4666-96da-5afc70819f1a" (UID: "6ee8fbd3-1f81-4666-96da-5afc70819f1a"). InnerVolumeSpecName "samples-operator-tls". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.914210 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-service-ca" (OuterVolumeSpecName: "service-ca") pod "6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca" (UID: "6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.914559 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/16bdd140-dce1-464c-ab47-dd5798d1d256-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "16bdd140-dce1-464c-ab47-dd5798d1d256" (UID: "16bdd140-dce1-464c-ab47-dd5798d1d256"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.914924 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/42a11a02-47e1-488f-b270-2679d3298b0e-control-plane-machine-set-operator-tls" (OuterVolumeSpecName: "control-plane-machine-set-operator-tls") pod "42a11a02-47e1-488f-b270-2679d3298b0e" (UID: "42a11a02-47e1-488f-b270-2679d3298b0e"). InnerVolumeSpecName "control-plane-machine-set-operator-tls". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.915628 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a555ff2e-0be6-46d5-897d-863bb92ae2b3-client-ca" (OuterVolumeSpecName: "client-ca") pod "a555ff2e-0be6-46d5-897d-863bb92ae2b3" (UID: "a555ff2e-0be6-46d5-897d-863bb92ae2b3"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.916897 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5ebfebf6-3ecd-458e-943f-bb25b52e2718-serviceca" (OuterVolumeSpecName: "serviceca") pod "5ebfebf6-3ecd-458e-943f-bb25b52e2718" (UID: "5ebfebf6-3ecd-458e-943f-bb25b52e2718"). InnerVolumeSpecName "serviceca". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.916952 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/18f80adb-c1c3-49ba-8ee4-932c851d3897-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "18f80adb-c1c3-49ba-8ee4-932c851d3897" (UID: "18f80adb-c1c3-49ba-8ee4-932c851d3897"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.916961 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c5f2bfad-70f6-4185-a3d9-81ce12720767-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "c5f2bfad-70f6-4185-a3d9-81ce12720767" (UID: "c5f2bfad-70f6-4185-a3d9-81ce12720767"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.909392 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-kube-api-access-qqbfk" (OuterVolumeSpecName: "kube-api-access-qqbfk") pod "b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a" (UID: "b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a"). InnerVolumeSpecName "kube-api-access-qqbfk". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.925676 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09cfa50b-4138-4585-a53e-64dd3ab73335-config" (OuterVolumeSpecName: "config") pod "09cfa50b-4138-4585-a53e-64dd3ab73335" (UID: "09cfa50b-4138-4585-a53e-64dd3ab73335"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.917749 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9f71a554-e414-4bc3-96d2-674060397afe-kube-api-access-ftwb6" (OuterVolumeSpecName: "kube-api-access-ftwb6") pod "9f71a554-e414-4bc3-96d2-674060397afe" (UID: "9f71a554-e414-4bc3-96d2-674060397afe"). InnerVolumeSpecName "kube-api-access-ftwb6". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.926212 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cc85e424-18b2-4924-920b-bd291a8c4b01-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "cc85e424-18b2-4924-920b-bd291a8c4b01" (UID: "cc85e424-18b2-4924-920b-bd291a8c4b01"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.927102 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cc85e424-18b2-4924-920b-bd291a8c4b01-utilities" (OuterVolumeSpecName: "utilities") pod "cc85e424-18b2-4924-920b-bd291a8c4b01" (UID: "cc85e424-18b2-4924-920b-bd291a8c4b01"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.927110 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/af33e427-6803-48c2-a76a-dd9deb7cbf9a-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "af33e427-6803-48c2-a76a-dd9deb7cbf9a" (UID: "af33e427-6803-48c2-a76a-dd9deb7cbf9a"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.926808 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/149b3c48-e17c-4a66-a835-d86dabf6ff13-utilities" (OuterVolumeSpecName: "utilities") pod "149b3c48-e17c-4a66-a835-d86dabf6ff13" (UID: "149b3c48-e17c-4a66-a835-d86dabf6ff13"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.926231 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/567683bd-0efc-4f21-b076-e28559628404-etcd-ca" (OuterVolumeSpecName: "etcd-ca") pod "567683bd-0efc-4f21-b076-e28559628404" (UID: "567683bd-0efc-4f21-b076-e28559628404"). InnerVolumeSpecName "etcd-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.927137 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-kube-api-access-l9stx" (OuterVolumeSpecName: "kube-api-access-l9stx") pod "6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca" (UID: "6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca"). InnerVolumeSpecName "kube-api-access-l9stx". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.927324 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7599e0b6-bddf-4def-b7f2-0b32206e8651-config" (OuterVolumeSpecName: "config") pod "7599e0b6-bddf-4def-b7f2-0b32206e8651" (UID: "7599e0b6-bddf-4def-b7f2-0b32206e8651"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.927915 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6077b63e-53a2-4f96-9d56-1ce0324e4913-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "6077b63e-53a2-4f96-9d56-1ce0324e4913" (UID: "6077b63e-53a2-4f96-9d56-1ce0324e4913"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.928810 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/92dfbade-90b6-4169-8c07-72cff7f2c82b-kube-api-access-4g8ts" (OuterVolumeSpecName: "kube-api-access-4g8ts") pod "92dfbade-90b6-4169-8c07-72cff7f2c82b" (UID: "92dfbade-90b6-4169-8c07-72cff7f2c82b"). InnerVolumeSpecName "kube-api-access-4g8ts". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.928988 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/af33e427-6803-48c2-a76a-dd9deb7cbf9a-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "af33e427-6803-48c2-a76a-dd9deb7cbf9a" (UID: "af33e427-6803-48c2-a76a-dd9deb7cbf9a"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.929022 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2325ffef-9d5b-447f-b00e-3efc429acefe-kube-api-access-zg8nc" (OuterVolumeSpecName: "kube-api-access-zg8nc") pod "2325ffef-9d5b-447f-b00e-3efc429acefe" (UID: "2325ffef-9d5b-447f-b00e-3efc429acefe"). InnerVolumeSpecName "kube-api-access-zg8nc". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.929133 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/92dfbade-90b6-4169-8c07-72cff7f2c82b-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "92dfbade-90b6-4169-8c07-72cff7f2c82b" (UID: "92dfbade-90b6-4169-8c07-72cff7f2c82b"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.929400 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f559dfa3-3917-43a2-97f6-61ddfda10e93-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "f559dfa3-3917-43a2-97f6-61ddfda10e93" (UID: "f559dfa3-3917-43a2-97f6-61ddfda10e93"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.929654 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a555ff2e-0be6-46d5-897d-863bb92ae2b3-kube-api-access-8pskd" (OuterVolumeSpecName: "kube-api-access-8pskd") pod "a555ff2e-0be6-46d5-897d-863bb92ae2b3" (UID: "a555ff2e-0be6-46d5-897d-863bb92ae2b3"). InnerVolumeSpecName "kube-api-access-8pskd". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.930445 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-tmp" (OuterVolumeSpecName: "tmp") pod "d45be74c-0d98-4d18-90e4-f7ef1b6daaf7" (UID: "d45be74c-0d98-4d18-90e4-f7ef1b6daaf7"). InnerVolumeSpecName "tmp". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.930678 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-kube-api-access-5lcfw" (OuterVolumeSpecName: "kube-api-access-5lcfw") pod "dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9" (UID: "dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9"). InnerVolumeSpecName "kube-api-access-5lcfw". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.930809 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/593a3561-7760-45c5-8f91-5aaef7475d0f-kube-api-access-sbc2l" (OuterVolumeSpecName: "kube-api-access-sbc2l") pod "593a3561-7760-45c5-8f91-5aaef7475d0f" (UID: "593a3561-7760-45c5-8f91-5aaef7475d0f"). InnerVolumeSpecName "kube-api-access-sbc2l". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.931143 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2325ffef-9d5b-447f-b00e-3efc429acefe-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "2325ffef-9d5b-447f-b00e-3efc429acefe" (UID: "2325ffef-9d5b-447f-b00e-3efc429acefe"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.931161 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff-kube-api-access-xxfcv" (OuterVolumeSpecName: "kube-api-access-xxfcv") pod "9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff" (UID: "9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff"). InnerVolumeSpecName "kube-api-access-xxfcv". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.887530 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/16bdd140-dce1-464c-ab47-dd5798d1d256-kube-api-access-94l9h" (OuterVolumeSpecName: "kube-api-access-94l9h") pod "16bdd140-dce1-464c-ab47-dd5798d1d256" (UID: "16bdd140-dce1-464c-ab47-dd5798d1d256"). InnerVolumeSpecName "kube-api-access-94l9h". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.933203 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6077b63e-53a2-4f96-9d56-1ce0324e4913-tmp-dir" (OuterVolumeSpecName: "tmp-dir") pod "6077b63e-53a2-4f96-9d56-1ce0324e4913" (UID: "6077b63e-53a2-4f96-9d56-1ce0324e4913"). InnerVolumeSpecName "tmp-dir". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.933224 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/92dfbade-90b6-4169-8c07-72cff7f2c82b-tmp-dir" (OuterVolumeSpecName: "tmp-dir") pod "92dfbade-90b6-4169-8c07-72cff7f2c82b" (UID: "92dfbade-90b6-4169-8c07-72cff7f2c82b"). InnerVolumeSpecName "tmp-dir". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.933433 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/736c54fe-349c-4bb9-870a-d1c1d1c03831-client-ca" (OuterVolumeSpecName: "client-ca") pod "736c54fe-349c-4bb9-870a-d1c1d1c03831" (UID: "736c54fe-349c-4bb9-870a-d1c1d1c03831"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.933608 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-machine-approver-tls" (OuterVolumeSpecName: "machine-approver-tls") pod "fc8db2c7-859d-47b3-a900-2bd0c0b2973b" (UID: "fc8db2c7-859d-47b3-a900-2bd0c0b2973b"). InnerVolumeSpecName "machine-approver-tls". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.931744 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4" (UID: "f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.933771 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/92dfbade-90b6-4169-8c07-72cff7f2c82b-config-volume" (OuterVolumeSpecName: "config-volume") pod "92dfbade-90b6-4169-8c07-72cff7f2c82b" (UID: "92dfbade-90b6-4169-8c07-72cff7f2c82b"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.933900 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/567683bd-0efc-4f21-b076-e28559628404-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "567683bd-0efc-4f21-b076-e28559628404" (UID: "567683bd-0efc-4f21-b076-e28559628404"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.933922 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/869851b9-7ffb-4af0-b166-1d8aa40a5f80-kube-api-access-mjwtd" (OuterVolumeSpecName: "kube-api-access-mjwtd") pod "869851b9-7ffb-4af0-b166-1d8aa40a5f80" (UID: "869851b9-7ffb-4af0-b166-1d8aa40a5f80"). InnerVolumeSpecName "kube-api-access-mjwtd". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.933954 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/736c54fe-349c-4bb9-870a-d1c1d1c03831-config" (OuterVolumeSpecName: "config") pod "736c54fe-349c-4bb9-870a-d1c1d1c03831" (UID: "736c54fe-349c-4bb9-870a-d1c1d1c03831"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.933974 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-kube-api-access-tkdh6" (OuterVolumeSpecName: "kube-api-access-tkdh6") pod "20ce4d18-fe25-4696-ad7c-1bd2d6200a3e" (UID: "20ce4d18-fe25-4696-ad7c-1bd2d6200a3e"). InnerVolumeSpecName "kube-api-access-tkdh6". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.934130 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ce090a97-9ab6-4c40-a719-64ff2acd9778-signing-cabundle" (OuterVolumeSpecName: "signing-cabundle") pod "ce090a97-9ab6-4c40-a719-64ff2acd9778" (UID: "ce090a97-9ab6-4c40-a719-64ff2acd9778"). InnerVolumeSpecName "signing-cabundle". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.934187 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d565531a-ff86-4608-9d19-767de01ac31b-images" (OuterVolumeSpecName: "images") pod "d565531a-ff86-4608-9d19-767de01ac31b" (UID: "d565531a-ff86-4608-9d19-767de01ac31b"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.934315 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/301e1965-1754-483d-b6cc-bfae7038bbca-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "301e1965-1754-483d-b6cc-bfae7038bbca" (UID: "301e1965-1754-483d-b6cc-bfae7038bbca"). InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.934605 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a555ff2e-0be6-46d5-897d-863bb92ae2b3-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "a555ff2e-0be6-46d5-897d-863bb92ae2b3" (UID: "a555ff2e-0be6-46d5-897d-863bb92ae2b3"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.934786 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2325ffef-9d5b-447f-b00e-3efc429acefe-config" (OuterVolumeSpecName: "config") pod "2325ffef-9d5b-447f-b00e-3efc429acefe" (UID: "2325ffef-9d5b-447f-b00e-3efc429acefe"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.934905 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/567683bd-0efc-4f21-b076-e28559628404-etcd-service-ca" (OuterVolumeSpecName: "etcd-service-ca") pod "567683bd-0efc-4f21-b076-e28559628404" (UID: "567683bd-0efc-4f21-b076-e28559628404"). InnerVolumeSpecName "etcd-service-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.888449 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f559dfa3-3917-43a2-97f6-61ddfda10e93-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "f559dfa3-3917-43a2-97f6-61ddfda10e93" (UID: "f559dfa3-3917-43a2-97f6-61ddfda10e93"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.935043 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "d19cb085-0c5b-4810-b654-ce7923221d90" (UID: "d19cb085-0c5b-4810-b654-ce7923221d90"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.894185 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b4750666-1362-4001-abd0-6f89964cc621-mcc-auth-proxy-config" (OuterVolumeSpecName: "mcc-auth-proxy-config") pod "b4750666-1362-4001-abd0-6f89964cc621" (UID: "b4750666-1362-4001-abd0-6f89964cc621"). InnerVolumeSpecName "mcc-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.935736 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a208c9c2-333b-4b4a-be0d-bc32ec38a821-kube-api-access-26xrl" (OuterVolumeSpecName: "kube-api-access-26xrl") pod "a208c9c2-333b-4b4a-be0d-bc32ec38a821" (UID: "a208c9c2-333b-4b4a-be0d-bc32ec38a821"). InnerVolumeSpecName "kube-api-access-26xrl". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.935740 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/42a11a02-47e1-488f-b270-2679d3298b0e-kube-api-access-qgrkj" (OuterVolumeSpecName: "kube-api-access-qgrkj") pod "42a11a02-47e1-488f-b270-2679d3298b0e" (UID: "42a11a02-47e1-488f-b270-2679d3298b0e"). InnerVolumeSpecName "kube-api-access-qgrkj". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.935956 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e093be35-bb62-4843-b2e8-094545761610-kube-api-access-pddnv" (OuterVolumeSpecName: "kube-api-access-pddnv") pod "e093be35-bb62-4843-b2e8-094545761610" (UID: "e093be35-bb62-4843-b2e8-094545761610"). InnerVolumeSpecName "kube-api-access-pddnv". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.935998 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a52afe44-fb37-46ed-a1f8-bf39727a3cbe-cert" (OuterVolumeSpecName: "cert") pod "a52afe44-fb37-46ed-a1f8-bf39727a3cbe" (UID: "a52afe44-fb37-46ed-a1f8-bf39727a3cbe"). InnerVolumeSpecName "cert". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.936285 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca" (UID: "6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.936369 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-tmpfs" (OuterVolumeSpecName: "tmpfs") pod "7fcc6409-8a0f-44c3-89e7-5aecd7610f8a" (UID: "7fcc6409-8a0f-44c3-89e7-5aecd7610f8a"). InnerVolumeSpecName "tmpfs". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.936566 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "20ce4d18-fe25-4696-ad7c-1bd2d6200a3e" (UID: "20ce4d18-fe25-4696-ad7c-1bd2d6200a3e"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.936697 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7afa918d-be67-40a6-803c-d3b0ae99d815-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7afa918d-be67-40a6-803c-d3b0ae99d815" (UID: "7afa918d-be67-40a6-803c-d3b0ae99d815"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.936967 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/593a3561-7760-45c5-8f91-5aaef7475d0f-node-bootstrap-token" (OuterVolumeSpecName: "node-bootstrap-token") pod "593a3561-7760-45c5-8f91-5aaef7475d0f" (UID: "593a3561-7760-45c5-8f91-5aaef7475d0f"). InnerVolumeSpecName "node-bootstrap-token".
PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.937190 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/18f80adb-c1c3-49ba-8ee4-932c851d3897-default-certificate" (OuterVolumeSpecName: "default-certificate") pod "18f80adb-c1c3-49ba-8ee4-932c851d3897" (UID: "18f80adb-c1c3-49ba-8ee4-932c851d3897"). InnerVolumeSpecName "default-certificate". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.937495 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-config" (OuterVolumeSpecName: "config") pod "d19cb085-0c5b-4810-b654-ce7923221d90" (UID: "d19cb085-0c5b-4810-b654-ce7923221d90"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.937509 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/81e39f7b-62e4-4fc9-992a-6535ce127a02-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "81e39f7b-62e4-4fc9-992a-6535ce127a02" (UID: "81e39f7b-62e4-4fc9-992a-6535ce127a02"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.937563 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7df94c10-441d-4386-93a6-6730fb7bcde0-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "7df94c10-441d-4386-93a6-6730fb7bcde0" (UID: "7df94c10-441d-4386-93a6-6730fb7bcde0"). InnerVolumeSpecName "ovnkube-config". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.937877 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-ca-trust-extracted-pem" (OuterVolumeSpecName: "ca-trust-extracted-pem") pod "20ce4d18-fe25-4696-ad7c-1bd2d6200a3e" (UID: "20ce4d18-fe25-4696-ad7c-1bd2d6200a3e"). InnerVolumeSpecName "ca-trust-extracted-pem". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.937922 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d19cb085-0c5b-4810-b654-ce7923221d90-kube-api-access-m5lgh" (OuterVolumeSpecName: "kube-api-access-m5lgh") pod "d19cb085-0c5b-4810-b654-ce7923221d90" (UID: "d19cb085-0c5b-4810-b654-ce7923221d90"). InnerVolumeSpecName "kube-api-access-m5lgh". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.938298 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/18f80adb-c1c3-49ba-8ee4-932c851d3897-stats-auth" (OuterVolumeSpecName: "stats-auth") pod "18f80adb-c1c3-49ba-8ee4-932c851d3897" (UID: "18f80adb-c1c3-49ba-8ee4-932c851d3897"). InnerVolumeSpecName "stats-auth". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.938433 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/567683bd-0efc-4f21-b076-e28559628404-kube-api-access-m26jq" (OuterVolumeSpecName: "kube-api-access-m26jq") pod "567683bd-0efc-4f21-b076-e28559628404" (UID: "567683bd-0efc-4f21-b076-e28559628404"). InnerVolumeSpecName "kube-api-access-m26jq". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.938442 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7df94c10-441d-4386-93a6-6730fb7bcde0-ovn-control-plane-metrics-cert" (OuterVolumeSpecName: "ovn-control-plane-metrics-cert") pod "7df94c10-441d-4386-93a6-6730fb7bcde0" (UID: "7df94c10-441d-4386-93a6-6730fb7bcde0"). InnerVolumeSpecName "ovn-control-plane-metrics-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.938512 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7df94c10-441d-4386-93a6-6730fb7bcde0-kube-api-access-nmmzf" (OuterVolumeSpecName: "kube-api-access-nmmzf") pod "7df94c10-441d-4386-93a6-6730fb7bcde0" (UID: "7df94c10-441d-4386-93a6-6730fb7bcde0"). InnerVolumeSpecName "kube-api-access-nmmzf". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.938558 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "6edfcf45-925b-4eff-b940-95b6fc0b85d4" (UID: "6edfcf45-925b-4eff-b940-95b6fc0b85d4"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.938622 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a7a88189-c967-4640-879e-27665747f20c-tmpfs" (OuterVolumeSpecName: "tmpfs") pod "a7a88189-c967-4640-879e-27665747f20c" (UID: "a7a88189-c967-4640-879e-27665747f20c"). InnerVolumeSpecName "tmpfs". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.938728 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/301e1965-1754-483d-b6cc-bfae7038bbca-tmpfs" (OuterVolumeSpecName: "tmpfs") pod "301e1965-1754-483d-b6cc-bfae7038bbca" (UID: "301e1965-1754-483d-b6cc-bfae7038bbca"). InnerVolumeSpecName "tmpfs". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.938947 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/94a6e063-3d1a-4d44-875d-185291448c31-kube-api-access-4hb7m" (OuterVolumeSpecName: "kube-api-access-4hb7m") pod "94a6e063-3d1a-4d44-875d-185291448c31" (UID: "94a6e063-3d1a-4d44-875d-185291448c31"). InnerVolumeSpecName "kube-api-access-4hb7m". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.939088 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/584e1f4a-8205-47d7-8efb-3afc6017c4c9-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "584e1f4a-8205-47d7-8efb-3afc6017c4c9" (UID: "584e1f4a-8205-47d7-8efb-3afc6017c4c9"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.939123 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7afa918d-be67-40a6-803c-d3b0ae99d815-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "7afa918d-be67-40a6-803c-d3b0ae99d815" (UID: "7afa918d-be67-40a6-803c-d3b0ae99d815"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.939313 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "6edfcf45-925b-4eff-b940-95b6fc0b85d4" (UID: "6edfcf45-925b-4eff-b940-95b6fc0b85d4"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.939342 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5ebfebf6-3ecd-458e-943f-bb25b52e2718-kube-api-access-l87hs" (OuterVolumeSpecName: "kube-api-access-l87hs") pod "5ebfebf6-3ecd-458e-943f-bb25b52e2718" (UID: "5ebfebf6-3ecd-458e-943f-bb25b52e2718"). InnerVolumeSpecName "kube-api-access-l87hs". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.939460 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "6edfcf45-925b-4eff-b940-95b6fc0b85d4" (UID: "6edfcf45-925b-4eff-b940-95b6fc0b85d4"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.939631 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/869851b9-7ffb-4af0-b166-1d8aa40a5f80-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "869851b9-7ffb-4af0-b166-1d8aa40a5f80" (UID: "869851b9-7ffb-4af0-b166-1d8aa40a5f80"). InnerVolumeSpecName "cni-binary-copy". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.939676 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f559dfa3-3917-43a2-97f6-61ddfda10e93-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "f559dfa3-3917-43a2-97f6-61ddfda10e93" (UID: "f559dfa3-3917-43a2-97f6-61ddfda10e93"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.939989 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a555ff2e-0be6-46d5-897d-863bb92ae2b3-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "a555ff2e-0be6-46d5-897d-863bb92ae2b3" (UID: "a555ff2e-0be6-46d5-897d-863bb92ae2b3"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.940069 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vmvhd\" (UniqueName: \"kubernetes.io/projected/2e7c2199-9693-42b9-9431-2b12b5abe1d1-kube-api-access-vmvhd\") pod \"ovnkube-control-plane-57b78d8988-47dgv\" (UID: \"2e7c2199-9693-42b9-9431-2b12b5abe1d1\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-47dgv" Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.940125 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7599e0b6-bddf-4def-b7f2-0b32206e8651-kube-api-access-ptkcf" (OuterVolumeSpecName: "kube-api-access-ptkcf") pod "7599e0b6-bddf-4def-b7f2-0b32206e8651" (UID: "7599e0b6-bddf-4def-b7f2-0b32206e8651"). InnerVolumeSpecName "kube-api-access-ptkcf". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.940329 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/84b46b92-c78c-44c8-a27b-4a20c47acd75-host-run-netns\") pod \"multus-8wqqf\" (UID: \"84b46b92-c78c-44c8-a27b-4a20c47acd75\") " pod="openshift-multus/multus-8wqqf" Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.940328 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9e9b5059-1b3e-4067-a63d-2952cbe863af-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.940335 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9" (UID: "dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9"). InnerVolumeSpecName "service-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.940432 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/34177974-8d82-49d2-a763-391d0df3bbd8-host-etc-kube\") pod \"network-operator-7bdcf4f5bd-7fjxv\" (UID: \"34177974-8d82-49d2-a763-391d0df3bbd8\") " pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.940506 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/84b46b92-c78c-44c8-a27b-4a20c47acd75-host-run-netns\") pod \"multus-8wqqf\" (UID: \"84b46b92-c78c-44c8-a27b-4a20c47acd75\") " pod="openshift-multus/multus-8wqqf" Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.940543 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/19151390-7d67-4ae9-8520-ae20b8eb46f8-metrics-certs\") pod \"network-metrics-daemon-5ft89\" (UID: \"19151390-7d67-4ae9-8520-ae20b8eb46f8\") " pod="openshift-multus/network-metrics-daemon-5ft89" Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.940572 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/17cf2230-8798-4fb0-b89b-43901121fd07-host-run-netns\") pod \"ovnkube-node-zm56h\" (UID: \"17cf2230-8798-4fb0-b89b-43901121fd07\") " pod="openshift-ovn-kubernetes/ovnkube-node-zm56h" Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.940606 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/17cf2230-8798-4fb0-b89b-43901121fd07-run-openvswitch\") pod \"ovnkube-node-zm56h\" (UID: \"17cf2230-8798-4fb0-b89b-43901121fd07\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-zm56h" Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.940685 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/34177974-8d82-49d2-a763-391d0df3bbd8-host-etc-kube\") pod \"network-operator-7bdcf4f5bd-7fjxv\" (UID: \"34177974-8d82-49d2-a763-391d0df3bbd8\") " pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.940729 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4f09ae7f-7717-4477-b860-d6bc280c6fd6-cni-binary-copy\") pod \"multus-additional-cni-plugins-p56xf\" (UID: \"4f09ae7f-7717-4477-b860-d6bc280c6fd6\") " pod="openshift-multus/multus-additional-cni-plugins-p56xf" Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.941111 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/17cf2230-8798-4fb0-b89b-43901121fd07-ovn-node-metrics-cert\") pod \"ovnkube-node-zm56h\" (UID: \"17cf2230-8798-4fb0-b89b-43901121fd07\") " pod="openshift-ovn-kubernetes/ovnkube-node-zm56h" Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.941321 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09cfa50b-4138-4585-a53e-64dd3ab73335-kube-api-access-zsb9b" (OuterVolumeSpecName: "kube-api-access-zsb9b") pod "09cfa50b-4138-4585-a53e-64dd3ab73335" (UID: "09cfa50b-4138-4585-a53e-64dd3ab73335"). InnerVolumeSpecName "kube-api-access-zsb9b". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.941401 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a7a88189-c967-4640-879e-27665747f20c-kube-api-access-8nspp" (OuterVolumeSpecName: "kube-api-access-8nspp") pod "a7a88189-c967-4640-879e-27665747f20c" (UID: "a7a88189-c967-4640-879e-27665747f20c"). InnerVolumeSpecName "kube-api-access-8nspp". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.941420 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/84b46b92-c78c-44c8-a27b-4a20c47acd75-cnibin\") pod \"multus-8wqqf\" (UID: \"84b46b92-c78c-44c8-a27b-4a20c47acd75\") " pod="openshift-multus/multus-8wqqf" Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.941472 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/84b46b92-c78c-44c8-a27b-4a20c47acd75-host-run-multus-certs\") pod \"multus-8wqqf\" (UID: \"84b46b92-c78c-44c8-a27b-4a20c47acd75\") " pod="openshift-multus/multus-8wqqf" Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.941504 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-btkvm\" (UniqueName: \"kubernetes.io/projected/84b46b92-c78c-44c8-a27b-4a20c47acd75-kube-api-access-btkvm\") pod \"multus-8wqqf\" (UID: \"84b46b92-c78c-44c8-a27b-4a20c47acd75\") " pod="openshift-multus/multus-8wqqf" Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.941548 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/17cf2230-8798-4fb0-b89b-43901121fd07-host-run-ovn-kubernetes\") pod \"ovnkube-node-zm56h\" (UID: \"17cf2230-8798-4fb0-b89b-43901121fd07\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-zm56h" Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.941582 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/84b46b92-c78c-44c8-a27b-4a20c47acd75-hostroot\") pod \"multus-8wqqf\" (UID: \"84b46b92-c78c-44c8-a27b-4a20c47acd75\") " pod="openshift-multus/multus-8wqqf" Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.941626 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/17cf2230-8798-4fb0-b89b-43901121fd07-etc-openvswitch\") pod \"ovnkube-node-zm56h\" (UID: \"17cf2230-8798-4fb0-b89b-43901121fd07\") " pod="openshift-ovn-kubernetes/ovnkube-node-zm56h" Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.941651 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/17cf2230-8798-4fb0-b89b-43901121fd07-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-zm56h\" (UID: \"17cf2230-8798-4fb0-b89b-43901121fd07\") " pod="openshift-ovn-kubernetes/ovnkube-node-zm56h" Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.941685 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"whereabouts-flatfile-configmap\" (UniqueName: \"kubernetes.io/configmap/4f09ae7f-7717-4477-b860-d6bc280c6fd6-whereabouts-flatfile-configmap\") pod \"multus-additional-cni-plugins-p56xf\" (UID: \"4f09ae7f-7717-4477-b860-d6bc280c6fd6\") " pod="openshift-multus/multus-additional-cni-plugins-p56xf" Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.941736 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/84b46b92-c78c-44c8-a27b-4a20c47acd75-hostroot\") pod \"multus-8wqqf\" (UID: 
\"84b46b92-c78c-44c8-a27b-4a20c47acd75\") " pod="openshift-multus/multus-8wqqf" Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.941751 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-config" (OuterVolumeSpecName: "config") pod "dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9" (UID: "dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.942079 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c5f2bfad-70f6-4185-a3d9-81ce12720767-tmp-dir" (OuterVolumeSpecName: "tmp-dir") pod "c5f2bfad-70f6-4185-a3d9-81ce12720767" (UID: "c5f2bfad-70f6-4185-a3d9-81ce12720767"). InnerVolumeSpecName "tmp-dir". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.942278 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/84b46b92-c78c-44c8-a27b-4a20c47acd75-host-run-multus-certs\") pod \"multus-8wqqf\" (UID: \"84b46b92-c78c-44c8-a27b-4a20c47acd75\") " pod="openshift-multus/multus-8wqqf" Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.942390 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/84b46b92-c78c-44c8-a27b-4a20c47acd75-cnibin\") pod \"multus-8wqqf\" (UID: \"84b46b92-c78c-44c8-a27b-4a20c47acd75\") " pod="openshift-multus/multus-8wqqf" Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.942500 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/e9b8c7c0-e0b8-44ea-adc9-41342c754061-tmp-dir\") pod \"node-resolver-5phkw\" (UID: \"e9b8c7c0-e0b8-44ea-adc9-41342c754061\") " 
pod="openshift-dns/node-resolver-5phkw" Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.942558 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/17cf2230-8798-4fb0-b89b-43901121fd07-host-cni-netd\") pod \"ovnkube-node-zm56h\" (UID: \"17cf2230-8798-4fb0-b89b-43901121fd07\") " pod="openshift-ovn-kubernetes/ovnkube-node-zm56h" Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.942600 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/17cf2230-8798-4fb0-b89b-43901121fd07-ovnkube-config\") pod \"ovnkube-node-zm56h\" (UID: \"17cf2230-8798-4fb0-b89b-43901121fd07\") " pod="openshift-ovn-kubernetes/ovnkube-node-zm56h" Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.942641 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/4f09ae7f-7717-4477-b860-d6bc280c6fd6-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-p56xf\" (UID: \"4f09ae7f-7717-4477-b860-d6bc280c6fd6\") " pod="openshift-multus/multus-additional-cni-plugins-p56xf" Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.942676 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/84b46b92-c78c-44c8-a27b-4a20c47acd75-system-cni-dir\") pod \"multus-8wqqf\" (UID: \"84b46b92-c78c-44c8-a27b-4a20c47acd75\") " pod="openshift-multus/multus-8wqqf" Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.942701 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f7e2c886-118e-43bb-bef1-c78134de392b-tmp-dir" (OuterVolumeSpecName: "tmp-dir") pod "f7e2c886-118e-43bb-bef1-c78134de392b" (UID: "f7e2c886-118e-43bb-bef1-c78134de392b"). InnerVolumeSpecName "tmp-dir". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.942713 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/84b46b92-c78c-44c8-a27b-4a20c47acd75-multus-cni-dir\") pod \"multus-8wqqf\" (UID: \"84b46b92-c78c-44c8-a27b-4a20c47acd75\") " pod="openshift-multus/multus-8wqqf" Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.942767 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/2e7c2199-9693-42b9-9431-2b12b5abe1d1-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-57b78d8988-47dgv\" (UID: \"2e7c2199-9693-42b9-9431-2b12b5abe1d1\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-47dgv" Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.942776 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e1d2a42d-af1d-4054-9618-ab545e0ed8b7-mcd-auth-proxy-config" (OuterVolumeSpecName: "mcd-auth-proxy-config") pod "e1d2a42d-af1d-4054-9618-ab545e0ed8b7" (UID: "e1d2a42d-af1d-4054-9618-ab545e0ed8b7"). InnerVolumeSpecName "mcd-auth-proxy-config". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.942806 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/4f09ae7f-7717-4477-b860-d6bc280c6fd6-system-cni-dir\") pod \"multus-additional-cni-plugins-p56xf\" (UID: \"4f09ae7f-7717-4477-b860-d6bc280c6fd6\") " pod="openshift-multus/multus-additional-cni-plugins-p56xf" Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.942865 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/84b46b92-c78c-44c8-a27b-4a20c47acd75-os-release\") pod \"multus-8wqqf\" (UID: \"84b46b92-c78c-44c8-a27b-4a20c47acd75\") " pod="openshift-multus/multus-8wqqf" Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.942863 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/4f09ae7f-7717-4477-b860-d6bc280c6fd6-system-cni-dir\") pod \"multus-additional-cni-plugins-p56xf\" (UID: \"4f09ae7f-7717-4477-b860-d6bc280c6fd6\") " pod="openshift-multus/multus-additional-cni-plugins-p56xf" Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.942878 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/84b46b92-c78c-44c8-a27b-4a20c47acd75-system-cni-dir\") pod \"multus-8wqqf\" (UID: \"84b46b92-c78c-44c8-a27b-4a20c47acd75\") " pod="openshift-multus/multus-8wqqf" Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.942899 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/84b46b92-c78c-44c8-a27b-4a20c47acd75-etc-kubernetes\") pod \"multus-8wqqf\" (UID: \"84b46b92-c78c-44c8-a27b-4a20c47acd75\") " pod="openshift-multus/multus-8wqqf" Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 
17:43:32.942935 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/84b46b92-c78c-44c8-a27b-4a20c47acd75-multus-cni-dir\") pod \"multus-8wqqf\" (UID: \"84b46b92-c78c-44c8-a27b-4a20c47acd75\") " pod="openshift-multus/multus-8wqqf" Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.942940 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kflvf\" (UniqueName: \"kubernetes.io/projected/51c251ea-ee75-4ef8-be21-e45ffbd0c2b3-kube-api-access-kflvf\") pod \"node-ca-ps59m\" (UID: \"51c251ea-ee75-4ef8-be21-e45ffbd0c2b3\") " pod="openshift-image-registry/node-ca-ps59m" Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.942969 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/84b46b92-c78c-44c8-a27b-4a20c47acd75-etc-kubernetes\") pod \"multus-8wqqf\" (UID: \"84b46b92-c78c-44c8-a27b-4a20c47acd75\") " pod="openshift-multus/multus-8wqqf" Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.942977 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/84b46b92-c78c-44c8-a27b-4a20c47acd75-os-release\") pod \"multus-8wqqf\" (UID: \"84b46b92-c78c-44c8-a27b-4a20c47acd75\") " pod="openshift-multus/multus-8wqqf" Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.943105 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4f09ae7f-7717-4477-b860-d6bc280c6fd6-cni-binary-copy\") pod \"multus-additional-cni-plugins-p56xf\" (UID: \"4f09ae7f-7717-4477-b860-d6bc280c6fd6\") " pod="openshift-multus/multus-additional-cni-plugins-p56xf" Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.943099 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" 
(UniqueName: \"kubernetes.io/host-path/17cf2230-8798-4fb0-b89b-43901121fd07-node-log\") pod \"ovnkube-node-zm56h\" (UID: \"17cf2230-8798-4fb0-b89b-43901121fd07\") " pod="openshift-ovn-kubernetes/ovnkube-node-zm56h" Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.943191 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/17cf2230-8798-4fb0-b89b-43901121fd07-ovnkube-script-lib\") pod \"ovnkube-node-zm56h\" (UID: \"17cf2230-8798-4fb0-b89b-43901121fd07\") " pod="openshift-ovn-kubernetes/ovnkube-node-zm56h" Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.943230 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/2e7c2199-9693-42b9-9431-2b12b5abe1d1-ovnkube-config\") pod \"ovnkube-control-plane-57b78d8988-47dgv\" (UID: \"2e7c2199-9693-42b9-9431-2b12b5abe1d1\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-47dgv" Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.943291 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/84b46b92-c78c-44c8-a27b-4a20c47acd75-host-var-lib-kubelet\") pod \"multus-8wqqf\" (UID: \"84b46b92-c78c-44c8-a27b-4a20c47acd75\") " pod="openshift-multus/multus-8wqqf" Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.943321 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8qx55\" (UniqueName: \"kubernetes.io/projected/e9b8c7c0-e0b8-44ea-adc9-41342c754061-kube-api-access-8qx55\") pod \"node-resolver-5phkw\" (UID: \"e9b8c7c0-e0b8-44ea-adc9-41342c754061\") " pod="openshift-dns/node-resolver-5phkw" Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.943337 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume 
\"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/84b46b92-c78c-44c8-a27b-4a20c47acd75-host-var-lib-kubelet\") pod \"multus-8wqqf\" (UID: \"84b46b92-c78c-44c8-a27b-4a20c47acd75\") " pod="openshift-multus/multus-8wqqf" Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.943343 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/17cf2230-8798-4fb0-b89b-43901121fd07-systemd-units\") pod \"ovnkube-node-zm56h\" (UID: \"17cf2230-8798-4fb0-b89b-43901121fd07\") " pod="openshift-ovn-kubernetes/ovnkube-node-zm56h" Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.943395 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8s9wf\" (UniqueName: \"kubernetes.io/projected/17cf2230-8798-4fb0-b89b-43901121fd07-kube-api-access-8s9wf\") pod \"ovnkube-node-zm56h\" (UID: \"17cf2230-8798-4fb0-b89b-43901121fd07\") " pod="openshift-ovn-kubernetes/ovnkube-node-zm56h" Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.943437 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/4f09ae7f-7717-4477-b860-d6bc280c6fd6-tuning-conf-dir\") pod \"multus-additional-cni-plugins-p56xf\" (UID: \"4f09ae7f-7717-4477-b860-d6bc280c6fd6\") " pod="openshift-multus/multus-additional-cni-plugins-p56xf" Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.943463 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/84b46b92-c78c-44c8-a27b-4a20c47acd75-host-var-lib-cni-multus\") pod \"multus-8wqqf\" (UID: \"84b46b92-c78c-44c8-a27b-4a20c47acd75\") " pod="openshift-multus/multus-8wqqf" Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.943488 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/e9b8c7c0-e0b8-44ea-adc9-41342c754061-hosts-file\") pod \"node-resolver-5phkw\" (UID: \"e9b8c7c0-e0b8-44ea-adc9-41342c754061\") " pod="openshift-dns/node-resolver-5phkw" Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.943504 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7afa918d-be67-40a6-803c-d3b0ae99d815-tmp" (OuterVolumeSpecName: "tmp") pod "7afa918d-be67-40a6-803c-d3b0ae99d815" (UID: "7afa918d-be67-40a6-803c-d3b0ae99d815"). InnerVolumeSpecName "tmp". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.943511 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/17cf2230-8798-4fb0-b89b-43901121fd07-host-kubelet\") pod \"ovnkube-node-zm56h\" (UID: \"17cf2230-8798-4fb0-b89b-43901121fd07\") " pod="openshift-ovn-kubernetes/ovnkube-node-zm56h" Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.943558 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/2e7c2199-9693-42b9-9431-2b12b5abe1d1-env-overrides\") pod \"ovnkube-control-plane-57b78d8988-47dgv\" (UID: \"2e7c2199-9693-42b9-9431-2b12b5abe1d1\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-47dgv" Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.943592 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/4f09ae7f-7717-4477-b860-d6bc280c6fd6-os-release\") pod \"multus-additional-cni-plugins-p56xf\" (UID: \"4f09ae7f-7717-4477-b860-d6bc280c6fd6\") " pod="openshift-multus/multus-additional-cni-plugins-p56xf" Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.943617 5116 reconciler_common.go:224] 
"operationExecutor.MountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/84b46b92-c78c-44c8-a27b-4a20c47acd75-multus-daemon-config\") pod \"multus-8wqqf\" (UID: \"84b46b92-c78c-44c8-a27b-4a20c47acd75\") " pod="openshift-multus/multus-8wqqf" Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.943650 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2fk87\" (UniqueName: \"kubernetes.io/projected/19151390-7d67-4ae9-8520-ae20b8eb46f8-kube-api-access-2fk87\") pod \"network-metrics-daemon-5ft89\" (UID: \"19151390-7d67-4ae9-8520-ae20b8eb46f8\") " pod="openshift-multus/network-metrics-daemon-5ft89" Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.943667 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/17cf2230-8798-4fb0-b89b-43901121fd07-run-systemd\") pod \"ovnkube-node-zm56h\" (UID: \"17cf2230-8798-4fb0-b89b-43901121fd07\") " pod="openshift-ovn-kubernetes/ovnkube-node-zm56h" Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.943673 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/4f09ae7f-7717-4477-b860-d6bc280c6fd6-os-release\") pod \"multus-additional-cni-plugins-p56xf\" (UID: \"4f09ae7f-7717-4477-b860-d6bc280c6fd6\") " pod="openshift-multus/multus-additional-cni-plugins-p56xf" Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.943687 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/4f09ae7f-7717-4477-b860-d6bc280c6fd6-tuning-conf-dir\") pod \"multus-additional-cni-plugins-p56xf\" (UID: \"4f09ae7f-7717-4477-b860-d6bc280c6fd6\") " pod="openshift-multus/multus-additional-cni-plugins-p56xf" Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.943936 5116 status_manager.go:919] 
"Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-5jnd7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-08T17:43:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T17:43:32Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dsgwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-5jnd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.943976 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/51c251ea-ee75-4ef8-be21-e45ffbd0c2b3-host\") pod \"node-ca-ps59m\" (UID: \"51c251ea-ee75-4ef8-be21-e45ffbd0c2b3\") " pod="openshift-image-registry/node-ca-ps59m" Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.944213 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/17cf2230-8798-4fb0-b89b-43901121fd07-env-overrides\") pod \"ovnkube-node-zm56h\" (UID: \"17cf2230-8798-4fb0-b89b-43901121fd07\") " pod="openshift-ovn-kubernetes/ovnkube-node-zm56h" Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.944285 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: 
\"kubernetes.io/configmap/f2e88345-fa91-4bb3-bd9d-a89a8293bffe-mcd-auth-proxy-config\") pod \"machine-config-daemon-frh5r\" (UID: \"f2e88345-fa91-4bb3-bd9d-a89a8293bffe\") " pod="openshift-machine-config-operator/machine-config-daemon-frh5r" Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.944409 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/4f09ae7f-7717-4477-b860-d6bc280c6fd6-cnibin\") pod \"multus-additional-cni-plugins-p56xf\" (UID: \"4f09ae7f-7717-4477-b860-d6bc280c6fd6\") " pod="openshift-multus/multus-additional-cni-plugins-p56xf" Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.944430 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b605f283-6f2e-42da-a838-54421690f7d0-utilities" (OuterVolumeSpecName: "utilities") pod "b605f283-6f2e-42da-a838-54421690f7d0" (UID: "b605f283-6f2e-42da-a838-54421690f7d0"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.944442 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-hv9rs\" (UniqueName: \"kubernetes.io/projected/4f09ae7f-7717-4477-b860-d6bc280c6fd6-kube-api-access-hv9rs\") pod \"multus-additional-cni-plugins-p56xf\" (UID: \"4f09ae7f-7717-4477-b860-d6bc280c6fd6\") " pod="openshift-multus/multus-additional-cni-plugins-p56xf" Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.944475 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/84b46b92-c78c-44c8-a27b-4a20c47acd75-multus-socket-dir-parent\") pod \"multus-8wqqf\" (UID: \"84b46b92-c78c-44c8-a27b-4a20c47acd75\") " pod="openshift-multus/multus-8wqqf" Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.944493 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/84b46b92-c78c-44c8-a27b-4a20c47acd75-multus-conf-dir\") pod \"multus-8wqqf\" (UID: \"84b46b92-c78c-44c8-a27b-4a20c47acd75\") " pod="openshift-multus/multus-8wqqf" Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.944513 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/17cf2230-8798-4fb0-b89b-43901121fd07-host-slash\") pod \"ovnkube-node-zm56h\" (UID: \"17cf2230-8798-4fb0-b89b-43901121fd07\") " pod="openshift-ovn-kubernetes/ovnkube-node-zm56h" Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.944549 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/428b39f5-eb1c-4f65-b7a4-eeb6e84860cc-host-slash\") pod \"iptables-alerter-5jnd7\" (UID: \"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\") " 
pod="openshift-network-operator/iptables-alerter-5jnd7" Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.944567 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/84b46b92-c78c-44c8-a27b-4a20c47acd75-cni-binary-copy\") pod \"multus-8wqqf\" (UID: \"84b46b92-c78c-44c8-a27b-4a20c47acd75\") " pod="openshift-multus/multus-8wqqf" Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.944570 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/84b46b92-c78c-44c8-a27b-4a20c47acd75-multus-socket-dir-parent\") pod \"multus-8wqqf\" (UID: \"84b46b92-c78c-44c8-a27b-4a20c47acd75\") " pod="openshift-multus/multus-8wqqf" Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.944585 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/84b46b92-c78c-44c8-a27b-4a20c47acd75-host-run-k8s-cni-cncf-io\") pod \"multus-8wqqf\" (UID: \"84b46b92-c78c-44c8-a27b-4a20c47acd75\") " pod="openshift-multus/multus-8wqqf" Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.944450 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/4f09ae7f-7717-4477-b860-d6bc280c6fd6-cnibin\") pod \"multus-additional-cni-plugins-p56xf\" (UID: \"4f09ae7f-7717-4477-b860-d6bc280c6fd6\") " pod="openshift-multus/multus-additional-cni-plugins-p56xf" Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.944605 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/84b46b92-c78c-44c8-a27b-4a20c47acd75-host-var-lib-cni-bin\") pod \"multus-8wqqf\" (UID: \"84b46b92-c78c-44c8-a27b-4a20c47acd75\") " pod="openshift-multus/multus-8wqqf" Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 
17:43:32.944610 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "fc8db2c7-859d-47b3-a900-2bd0c0b2973b" (UID: "fc8db2c7-859d-47b3-a900-2bd0c0b2973b"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.944625 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/17cf2230-8798-4fb0-b89b-43901121fd07-log-socket\") pod \"ovnkube-node-zm56h\" (UID: \"17cf2230-8798-4fb0-b89b-43901121fd07\") " pod="openshift-ovn-kubernetes/ovnkube-node-zm56h" Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.944724 5116 reconciler_common.go:299] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.944734 5116 reconciler_common.go:299] "Volume detached for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-tmp\") on node \"crc\" DevicePath \"\"" Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.944744 5116 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-hckvg\" (UniqueName: \"kubernetes.io/projected/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-kube-api-access-hckvg\") on node \"crc\" DevicePath \"\"" Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.944754 5116 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/31fa8943-81cc-4750-a0b7-0fa9ab5af883-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.944763 5116 reconciler_common.go:299] "Volume detached for 
volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/a555ff2e-0be6-46d5-897d-863bb92ae2b3-tmp\") on node \"crc\" DevicePath \"\"" Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.944774 5116 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-l9stx\" (UniqueName: \"kubernetes.io/projected/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-kube-api-access-l9stx\") on node \"crc\" DevicePath \"\"" Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.944785 5116 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-8nb9c\" (UniqueName: \"kubernetes.io/projected/6edfcf45-925b-4eff-b940-95b6fc0b85d4-kube-api-access-8nb9c\") on node \"crc\" DevicePath \"\"" Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.944796 5116 reconciler_common.go:299] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/9f71a554-e414-4bc3-96d2-674060397afe-metrics-tls\") on node \"crc\" DevicePath \"\"" Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.944808 5116 reconciler_common.go:299] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f65c0ac1-8bca-454d-a2e6-e35cb418beac-kube-api-access\") on node \"crc\" DevicePath \"\"" Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.944817 5116 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/94a6e063-3d1a-4d44-875d-185291448c31-utilities\") on node \"crc\" DevicePath \"\"" Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.944827 5116 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-94l9h\" (UniqueName: \"kubernetes.io/projected/16bdd140-dce1-464c-ab47-dd5798d1d256-kube-api-access-94l9h\") on node \"crc\" DevicePath \"\"" Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.944828 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"multus-conf-dir\" (UniqueName: 
\"kubernetes.io/host-path/84b46b92-c78c-44c8-a27b-4a20c47acd75-multus-conf-dir\") pod \"multus-8wqqf\" (UID: \"84b46b92-c78c-44c8-a27b-4a20c47acd75\") " pod="openshift-multus/multus-8wqqf" Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.944883 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/84b46b92-c78c-44c8-a27b-4a20c47acd75-host-var-lib-cni-bin\") pod \"multus-8wqqf\" (UID: \"84b46b92-c78c-44c8-a27b-4a20c47acd75\") " pod="openshift-multus/multus-8wqqf" Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.944784 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/84b46b92-c78c-44c8-a27b-4a20c47acd75-host-run-k8s-cni-cncf-io\") pod \"multus-8wqqf\" (UID: \"84b46b92-c78c-44c8-a27b-4a20c47acd75\") " pod="openshift-multus/multus-8wqqf" Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.944836 5116 reconciler_common.go:299] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f559dfa3-3917-43a2-97f6-61ddfda10e93-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.944920 5116 reconciler_common.go:299] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/9e9b5059-1b3e-4067-a63d-2952cbe863af-registry-certificates\") on node \"crc\" DevicePath \"\"" Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.944966 5116 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.944979 5116 reconciler_common.go:299] "Volume detached for volume \"signing-cabundle\" (UniqueName: 
\"kubernetes.io/configmap/ce090a97-9ab6-4c40-a719-64ff2acd9778-signing-cabundle\") on node \"crc\" DevicePath \"\"" Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.944990 5116 reconciler_common.go:299] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/736c54fe-349c-4bb9-870a-d1c1d1c03831-client-ca\") on node \"crc\" DevicePath \"\"" Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.944989 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/428b39f5-eb1c-4f65-b7a4-eeb6e84860cc-host-slash\") pod \"iptables-alerter-5jnd7\" (UID: \"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\") " pod="openshift-network-operator/iptables-alerter-5jnd7" Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.945001 5116 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-xxfcv\" (UniqueName: \"kubernetes.io/projected/9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff-kube-api-access-xxfcv\") on node \"crc\" DevicePath \"\"" Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.945042 5116 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-5lcfw\" (UniqueName: \"kubernetes.io/projected/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-kube-api-access-5lcfw\") on node \"crc\" DevicePath \"\"" Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.945052 5116 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-xnxbn\" (UniqueName: \"kubernetes.io/projected/ce090a97-9ab6-4c40-a719-64ff2acd9778-kube-api-access-xnxbn\") on node \"crc\" DevicePath \"\"" Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.945062 5116 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-zsb9b\" (UniqueName: \"kubernetes.io/projected/09cfa50b-4138-4585-a53e-64dd3ab73335-kube-api-access-zsb9b\") on node \"crc\" DevicePath \"\"" Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.945072 5116 reconciler_common.go:299] "Volume detached for 
volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-machine-approver-tls\") on node \"crc\" DevicePath \"\"" Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.945081 5116 reconciler_common.go:299] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/f559dfa3-3917-43a2-97f6-61ddfda10e93-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.945115 5116 reconciler_common.go:299] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.945126 5116 reconciler_common.go:299] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/7df94c10-441d-4386-93a6-6730fb7bcde0-env-overrides\") on node \"crc\" DevicePath \"\"" Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.945135 5116 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff-utilities\") on node \"crc\" DevicePath \"\"" Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.945145 5116 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7599e0b6-bddf-4def-b7f2-0b32206e8651-config\") on node \"crc\" DevicePath \"\"" Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.945154 5116 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7599e0b6-bddf-4def-b7f2-0b32206e8651-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.945164 5116 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-tknt7\" (UniqueName: 
\"kubernetes.io/projected/584e1f4a-8205-47d7-8efb-3afc6017c4c9-kube-api-access-tknt7\") on node \"crc\" DevicePath \"\"" Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.945197 5116 reconciler_common.go:299] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.945209 5116 reconciler_common.go:299] "Volume detached for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/b4750666-1362-4001-abd0-6f89964cc621-mcc-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.945218 5116 reconciler_common.go:299] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-console-config\") on node \"crc\" DevicePath \"\"" Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.945227 5116 reconciler_common.go:299] "Volume detached for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/a7a88189-c967-4640-879e-27665747f20c-apiservice-cert\") on node \"crc\" DevicePath \"\"" Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.945264 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7afa918d-be67-40a6-803c-d3b0ae99d815-config" (OuterVolumeSpecName: "config") pod "7afa918d-be67-40a6-803c-d3b0ae99d815" (UID: "7afa918d-be67-40a6-803c-d3b0ae99d815"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.945270 5116 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-7jjkz\" (UniqueName: \"kubernetes.io/projected/301e1965-1754-483d-b6cc-bfae7038bbca-kube-api-access-7jjkz\") on node \"crc\" DevicePath \"\"" Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.945311 5116 reconciler_common.go:299] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/92dfbade-90b6-4169-8c07-72cff7f2c82b-config-volume\") on node \"crc\" DevicePath \"\"" Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.945324 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/84b46b92-c78c-44c8-a27b-4a20c47acd75-multus-daemon-config\") pod \"multus-8wqqf\" (UID: \"84b46b92-c78c-44c8-a27b-4a20c47acd75\") " pod="openshift-multus/multus-8wqqf" Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.945331 5116 reconciler_common.go:299] "Volume detached for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/6077b63e-53a2-4f96-9d56-1ce0324e4913-tmp-dir\") on node \"crc\" DevicePath \"\"" Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.945388 5116 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-d7cps\" (UniqueName: \"kubernetes.io/projected/af41de71-79cf-4590-bbe9-9e8b848862cb-kube-api-access-d7cps\") on node \"crc\" DevicePath \"\"" Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.945406 5116 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.945421 5116 reconciler_common.go:299] "Volume detached for volume \"tmp-dir\" (UniqueName: 
\"kubernetes.io/empty-dir/f65c0ac1-8bca-454d-a2e6-e35cb418beac-tmp-dir\") on node \"crc\" DevicePath \"\""
Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.945436 5116 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-z5rsr\" (UniqueName: \"kubernetes.io/projected/af33e427-6803-48c2-a76a-dd9deb7cbf9a-kube-api-access-z5rsr\") on node \"crc\" DevicePath \"\""
Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.945467 5116 reconciler_common.go:299] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/301e1965-1754-483d-b6cc-bfae7038bbca-srv-cert\") on node \"crc\" DevicePath \"\""
Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.945482 5116 reconciler_common.go:299] "Volume detached for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-audit\") on node \"crc\" DevicePath \"\""
Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.945496 5116 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/736c54fe-349c-4bb9-870a-d1c1d1c03831-config\") on node \"crc\" DevicePath \"\""
Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.945509 5116 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-session\") on node \"crc\" DevicePath \"\""
Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.945523 5116 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-ddlk9\" (UniqueName: \"kubernetes.io/projected/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-kube-api-access-ddlk9\") on node \"crc\" DevicePath \"\""
Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.945537 5116 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cc85e424-18b2-4924-920b-bd291a8c4b01-utilities\") on node \"crc\" DevicePath \"\""
Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.945552 5116 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b605f283-6f2e-42da-a838-54421690f7d0-catalog-content\") on node \"crc\" DevicePath \"\""
Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.945577 5116 reconciler_common.go:299] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/a7a88189-c967-4640-879e-27665747f20c-webhook-cert\") on node \"crc\" DevicePath \"\""
Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.945591 5116 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-zth6t\" (UniqueName: \"kubernetes.io/projected/6077b63e-53a2-4f96-9d56-1ce0324e4913-kube-api-access-zth6t\") on node \"crc\" DevicePath \"\""
Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.945605 5116 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d7e8f42f-dc0e-424b-bb56-5ec849834888-serving-cert\") on node \"crc\" DevicePath \"\""
Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.945618 5116 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-xfp5s\" (UniqueName: \"kubernetes.io/projected/cc85e424-18b2-4924-920b-bd291a8c4b01-kube-api-access-xfp5s\") on node \"crc\" DevicePath \"\""
Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.945631 5116 reconciler_common.go:299] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/d7e8f42f-dc0e-424b-bb56-5ec849834888-service-ca\") on node \"crc\" DevicePath \"\""
Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.945644 5116 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f65c0ac1-8bca-454d-a2e6-e35cb418beac-config\") on node \"crc\" DevicePath \"\""
Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.945662 5116 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\""
Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.945675 5116 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-ks6v2\" (UniqueName: \"kubernetes.io/projected/71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a-kube-api-access-ks6v2\") on node \"crc\" DevicePath \"\""
Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.945690 5116 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-w94wk\" (UniqueName: \"kubernetes.io/projected/01080b46-74f1-4191-8755-5152a57b3b25-kube-api-access-w94wk\") on node \"crc\" DevicePath \"\""
Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.945703 5116 reconciler_common.go:299] "Volume detached for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/6ee8fbd3-1f81-4666-96da-5afc70819f1a-samples-operator-tls\") on node \"crc\" DevicePath \"\""
Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.945717 5116 reconciler_common.go:299] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/18f80adb-c1c3-49ba-8ee4-932c851d3897-metrics-certs\") on node \"crc\" DevicePath \"\""
Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.945635 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/4f09ae7f-7717-4477-b860-d6bc280c6fd6-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-p56xf\" (UID: \"4f09ae7f-7717-4477-b860-d6bc280c6fd6\") " pod="openshift-multus/multus-additional-cni-plugins-p56xf"
Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.945745 5116 reconciler_common.go:299] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/9f71a554-e414-4bc3-96d2-674060397afe-bound-sa-token\") on node \"crc\" DevicePath \"\""
Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.945758 5116 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-wj4qr\" (UniqueName: \"kubernetes.io/projected/149b3c48-e17c-4a66-a835-d86dabf6ff13-kube-api-access-wj4qr\") on node \"crc\" DevicePath \"\""
Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.945771 5116 reconciler_common.go:299] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c5f2bfad-70f6-4185-a3d9-81ce12720767-kube-api-access\") on node \"crc\" DevicePath \"\""
Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.945782 5116 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-grwfz\" (UniqueName: \"kubernetes.io/projected/31fa8943-81cc-4750-a0b7-0fa9ab5af883-kube-api-access-grwfz\") on node \"crc\" DevicePath \"\""
Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.945796 5116 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-qqbfk\" (UniqueName: \"kubernetes.io/projected/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-kube-api-access-qqbfk\") on node \"crc\" DevicePath \"\""
Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.945808 5116 reconciler_common.go:299] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/d565531a-ff86-4608-9d19-767de01ac31b-images\") on node \"crc\" DevicePath \"\""
Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.945820 5116 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2325ffef-9d5b-447f-b00e-3efc429acefe-config\") on node \"crc\" DevicePath \"\""
Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.945834 5116 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/16bdd140-dce1-464c-ab47-dd5798d1d256-serving-cert\") on node \"crc\" DevicePath \"\""
Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.945829 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"whereabouts-flatfile-configmap\" (UniqueName: \"kubernetes.io/configmap/4f09ae7f-7717-4477-b860-d6bc280c6fd6-whereabouts-flatfile-configmap\") pod \"multus-additional-cni-plugins-p56xf\" (UID: \"4f09ae7f-7717-4477-b860-d6bc280c6fd6\") " pod="openshift-multus/multus-additional-cni-plugins-p56xf"
Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.945847 5116 reconciler_common.go:299] "Volume detached for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/42a11a02-47e1-488f-b270-2679d3298b0e-control-plane-machine-set-operator-tls\") on node \"crc\" DevicePath \"\""
Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.945863 5116 reconciler_common.go:299] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a555ff2e-0be6-46d5-897d-863bb92ae2b3-client-ca\") on node \"crc\" DevicePath \"\""
Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.945878 5116 reconciler_common.go:299] "Volume detached for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/5ebfebf6-3ecd-458e-943f-bb25b52e2718-serviceca\") on node \"crc\" DevicePath \"\""
Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.945891 5116 reconciler_common.go:299] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-service-ca\") on node \"crc\" DevicePath \"\""
Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.945905 5116 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-4g8ts\" (UniqueName: \"kubernetes.io/projected/92dfbade-90b6-4169-8c07-72cff7f2c82b-kube-api-access-4g8ts\") on node \"crc\" DevicePath \"\""
Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.945919 5116 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/736c54fe-349c-4bb9-870a-d1c1d1c03831-serving-cert\") on node \"crc\" DevicePath \"\""
Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.945933 5116 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-ftwb6\" (UniqueName: \"kubernetes.io/projected/9f71a554-e414-4bc3-96d2-674060397afe-kube-api-access-ftwb6\") on node \"crc\" DevicePath \"\""
Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.945963 5116 reconciler_common.go:299] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-srv-cert\") on node \"crc\" DevicePath \"\""
Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.945976 5116 reconciler_common.go:299] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/6077b63e-53a2-4f96-9d56-1ce0324e4913-metrics-tls\") on node \"crc\" DevicePath \"\""
Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.945988 5116 reconciler_common.go:299] "Volume detached for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-tmp\") on node \"crc\" DevicePath \"\""
Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.945893 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/84b46b92-c78c-44c8-a27b-4a20c47acd75-host-var-lib-cni-multus\") pod \"multus-8wqqf\" (UID: \"84b46b92-c78c-44c8-a27b-4a20c47acd75\") " pod="openshift-multus/multus-8wqqf"
Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.946002 5116 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-config\") on node \"crc\" DevicePath \"\""
Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.946270 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "6edfcf45-925b-4eff-b940-95b6fc0b85d4" (UID: "6edfcf45-925b-4eff-b940-95b6fc0b85d4"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.946457 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/31fa8943-81cc-4750-a0b7-0fa9ab5af883-utilities" (OuterVolumeSpecName: "utilities") pod "31fa8943-81cc-4750-a0b7-0fa9ab5af883" (UID: "31fa8943-81cc-4750-a0b7-0fa9ab5af883"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.946540 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-tmp" (OuterVolumeSpecName: "tmp") pod "b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a" (UID: "b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a"). InnerVolumeSpecName "tmp". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.946616 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "d45be74c-0d98-4d18-90e4-f7ef1b6daaf7" (UID: "d45be74c-0d98-4d18-90e4-f7ef1b6daaf7"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.946645 5116 reconciler_common.go:299] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-service-ca-bundle\") on node \"crc\" DevicePath \"\""
Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.946668 5116 reconciler_common.go:299] "Volume detached for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/c5f2bfad-70f6-4185-a3d9-81ce12720767-tmp-dir\") on node \"crc\" DevicePath \"\""
Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.946687 5116 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\""
Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.946704 5116 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-8nspp\" (UniqueName: \"kubernetes.io/projected/a7a88189-c967-4640-879e-27665747f20c-kube-api-access-8nspp\") on node \"crc\" DevicePath \"\""
Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.946720 5116 reconciler_common.go:299] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/f559dfa3-3917-43a2-97f6-61ddfda10e93-audit-policies\") on node \"crc\" DevicePath \"\""
Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.946734 5116 reconciler_common.go:299] "Volume detached for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/e1d2a42d-af1d-4054-9618-ab545e0ed8b7-mcd-auth-proxy-config\") on node \"crc\" DevicePath \"\""
Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.946748 5116 reconciler_common.go:299] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4-metrics-certs\") on node \"crc\" DevicePath \"\""
Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.946763 5116 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-sbc2l\" (UniqueName: \"kubernetes.io/projected/593a3561-7760-45c5-8f91-5aaef7475d0f-kube-api-access-sbc2l\") on node \"crc\" DevicePath \"\""
Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.946778 5116 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2325ffef-9d5b-447f-b00e-3efc429acefe-serving-cert\") on node \"crc\" DevicePath \"\""
Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.946793 5116 reconciler_common.go:299] "Volume detached for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/593a3561-7760-45c5-8f91-5aaef7475d0f-node-bootstrap-token\") on node \"crc\" DevicePath \"\""
Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.946807 5116 reconciler_common.go:299] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/301e1965-1754-483d-b6cc-bfae7038bbca-profile-collector-cert\") on node \"crc\" DevicePath \"\""
Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.946822 5116 reconciler_common.go:299] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-etcd-serving-ca\") on node \"crc\" DevicePath \"\""
Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.946878 5116 reconciler_common.go:299] "Volume detached for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/f7e2c886-118e-43bb-bef1-c78134de392b-tmp-dir\") on node \"crc\" DevicePath \"\""
Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.946897 5116 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff-catalog-content\") on node \"crc\" DevicePath \"\""
Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.946915 5116 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-tkdh6\" (UniqueName: \"kubernetes.io/projected/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-kube-api-access-tkdh6\") on node \"crc\" DevicePath \"\""
Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.946930 5116 reconciler_common.go:299] "Volume detached for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/567683bd-0efc-4f21-b076-e28559628404-etcd-service-ca\") on node \"crc\" DevicePath \"\""
Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.946943 5116 reconciler_common.go:299] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/f559dfa3-3917-43a2-97f6-61ddfda10e93-encryption-config\") on node \"crc\" DevicePath \"\""
Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.946957 5116 reconciler_common.go:299] "Volume detached for volume \"cert\" (UniqueName: \"kubernetes.io/secret/a52afe44-fb37-46ed-a1f8-bf39727a3cbe-cert\") on node \"crc\" DevicePath \"\""
Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.946972 5116 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-qgrkj\" (UniqueName: \"kubernetes.io/projected/42a11a02-47e1-488f-b270-2679d3298b0e-kube-api-access-qgrkj\") on node \"crc\" DevicePath \"\""
Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.946987 5116 reconciler_common.go:299] "Volume detached for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/18f80adb-c1c3-49ba-8ee4-932c851d3897-default-certificate\") on node \"crc\" DevicePath \"\""
Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.947001 5116 reconciler_common.go:299] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-bound-sa-token\") on node \"crc\" DevicePath \"\""
Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.947015 5116 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-8pskd\" (UniqueName: \"kubernetes.io/projected/a555ff2e-0be6-46d5-897d-863bb92ae2b3-kube-api-access-8pskd\") on node \"crc\" DevicePath \"\""
Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.947029 5116 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-mjwtd\" (UniqueName: \"kubernetes.io/projected/869851b9-7ffb-4af0-b166-1d8aa40a5f80-kube-api-access-mjwtd\") on node \"crc\" DevicePath \"\""
Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.947043 5116 reconciler_common.go:299] "Volume detached for volume \"ca-trust-extracted-pem\" (UniqueName: \"kubernetes.io/empty-dir/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-ca-trust-extracted-pem\") on node \"crc\" DevicePath \"\""
Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.947058 5116 reconciler_common.go:299] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/a555ff2e-0be6-46d5-897d-863bb92ae2b3-proxy-ca-bundles\") on node \"crc\" DevicePath \"\""
Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.947073 5116 reconciler_common.go:299] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/7df94c10-441d-4386-93a6-6730fb7bcde0-ovnkube-config\") on node \"crc\" DevicePath \"\""
Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.947089 5116 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-nmmzf\" (UniqueName: \"kubernetes.io/projected/7df94c10-441d-4386-93a6-6730fb7bcde0-kube-api-access-nmmzf\") on node \"crc\" DevicePath \"\""
Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.947109 5116 reconciler_common.go:299] "Volume detached for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/18f80adb-c1c3-49ba-8ee4-932c851d3897-stats-auth\") on node \"crc\" DevicePath \"\""
Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.947125 5116 reconciler_common.go:299] "Volume detached for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/7afa918d-be67-40a6-803c-d3b0ae99d815-tmp\") on node \"crc\" DevicePath \"\""
Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.947138 5116 reconciler_common.go:299] "Volume detached for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-tmpfs\") on node \"crc\" DevicePath \"\""
Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.947151 5116 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-26xrl\" (UniqueName: \"kubernetes.io/projected/a208c9c2-333b-4b4a-be0d-bc32ec38a821-kube-api-access-26xrl\") on node \"crc\" DevicePath \"\""
Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.947182 5116 reconciler_common.go:299] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-auth-proxy-config\") on node \"crc\" DevicePath \"\""
Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.947196 5116 reconciler_common.go:299] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/869851b9-7ffb-4af0-b166-1d8aa40a5f80-cni-binary-copy\") on node \"crc\" DevicePath \"\""
Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.947216 5116 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-l87hs\" (UniqueName: \"kubernetes.io/projected/5ebfebf6-3ecd-458e-943f-bb25b52e2718-kube-api-access-l87hs\") on node \"crc\" DevicePath \"\""
Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.947229 5116 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-4hb7m\" (UniqueName: \"kubernetes.io/projected/94a6e063-3d1a-4d44-875d-185291448c31-kube-api-access-4hb7m\") on node \"crc\" DevicePath \"\""
Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.947293 5116 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a555ff2e-0be6-46d5-897d-863bb92ae2b3-serving-cert\") on node \"crc\" DevicePath \"\""
Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.947310 5116 reconciler_common.go:299] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d7e8f42f-dc0e-424b-bb56-5ec849834888-kube-api-access\") on node \"crc\" DevicePath \"\""
Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.947323 5116 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-pddnv\" (UniqueName: \"kubernetes.io/projected/e093be35-bb62-4843-b2e8-094545761610-kube-api-access-pddnv\") on node \"crc\" DevicePath \"\""
Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.947334 5116 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7afa918d-be67-40a6-803c-d3b0ae99d815-serving-cert\") on node \"crc\" DevicePath \"\""
Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.947348 5116 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-config\") on node \"crc\" DevicePath \"\""
Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.947360 5116 reconciler_common.go:299] "Volume detached for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/a7a88189-c967-4640-879e-27665747f20c-tmpfs\") on node \"crc\" DevicePath \"\""
Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.947374 5116 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-ptkcf\" (UniqueName: \"kubernetes.io/projected/7599e0b6-bddf-4def-b7f2-0b32206e8651-kube-api-access-ptkcf\") on node \"crc\" DevicePath \"\""
Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.947387 5116 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-m26jq\" (UniqueName: \"kubernetes.io/projected/567683bd-0efc-4f21-b076-e28559628404-kube-api-access-m26jq\") on node \"crc\" DevicePath \"\""
Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.947399 5116 reconciler_common.go:299] "Volume detached for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/301e1965-1754-483d-b6cc-bfae7038bbca-tmpfs\") on node \"crc\" DevicePath \"\""
Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.947414 5116 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\""
Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.947431 5116 reconciler_common.go:299] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/567683bd-0efc-4f21-b076-e28559628404-etcd-client\") on node \"crc\" DevicePath \"\""
Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.947447 5116 reconciler_common.go:299] "Volume detached for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/7df94c10-441d-4386-93a6-6730fb7bcde0-ovn-control-plane-metrics-cert\") on node \"crc\" DevicePath \"\""
Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.947466 5116 reconciler_common.go:299] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-console-oauth-config\") on node \"crc\" DevicePath \"\""
Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.947479 5116 reconciler_common.go:299] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/81e39f7b-62e4-4fc9-992a-6535ce127a02-cni-binary-copy\") on node \"crc\" DevicePath \"\""
Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.947493 5116 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-m5lgh\" (UniqueName: \"kubernetes.io/projected/d19cb085-0c5b-4810-b654-ce7923221d90-kube-api-access-m5lgh\") on node \"crc\" DevicePath \"\""
Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.947506 5116 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\""
Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.947518 5116 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/584e1f4a-8205-47d7-8efb-3afc6017c4c9-catalog-content\") on node \"crc\" DevicePath \"\""
Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.947533 5116 reconciler_common.go:299] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9e9b5059-1b3e-4067-a63d-2952cbe863af-trusted-ca\") on node \"crc\" DevicePath \"\""
Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.947546 5116 reconciler_common.go:299] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/7afa918d-be67-40a6-803c-d3b0ae99d815-kube-api-access\") on node \"crc\" DevicePath \"\""
Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.947562 5116 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-hm9x7\" (UniqueName: \"kubernetes.io/projected/f559dfa3-3917-43a2-97f6-61ddfda10e93-kube-api-access-hm9x7\") on node \"crc\" DevicePath \"\""
Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.947608 5116 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/567683bd-0efc-4f21-b076-e28559628404-serving-cert\") on node \"crc\" DevicePath \"\""
Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.947620 5116 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b605f283-6f2e-42da-a838-54421690f7d0-utilities\") on node \"crc\" DevicePath \"\""
Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.947634 5116 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\""
Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.947649 5116 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-config\") on node \"crc\" DevicePath \"\""
Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.947662 5116 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09cfa50b-4138-4585-a53e-64dd3ab73335-config\") on node \"crc\" DevicePath \"\""
Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.947676 5116 reconciler_common.go:299] "Volume detached for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-image-registry-operator-tls\") on node \"crc\" DevicePath \"\""
Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.947689 5116 reconciler_common.go:299] "Volume detached for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-image-import-ca\") on node \"crc\" DevicePath \"\""
Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.947703 5116 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-twvbl\" (UniqueName: \"kubernetes.io/projected/b4750666-1362-4001-abd0-6f89964cc621-kube-api-access-twvbl\") on node \"crc\" DevicePath \"\""
Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.947714 5116 reconciler_common.go:299] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/f559dfa3-3917-43a2-97f6-61ddfda10e93-etcd-client\") on node \"crc\" DevicePath \"\""
Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.947727 5116 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-6g4lr\" (UniqueName: \"kubernetes.io/projected/f7e2c886-118e-43bb-bef1-c78134de392b-kube-api-access-6g4lr\") on node \"crc\" DevicePath \"\""
Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.947739 5116 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-zg8nc\" (UniqueName: \"kubernetes.io/projected/2325ffef-9d5b-447f-b00e-3efc429acefe-kube-api-access-zg8nc\") on node \"crc\" DevicePath \"\""
Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.947752 5116 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-rzt4w\" (UniqueName: \"kubernetes.io/projected/a52afe44-fb37-46ed-a1f8-bf39727a3cbe-kube-api-access-rzt4w\") on node \"crc\" DevicePath \"\""
Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.947767 5116 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d19cb085-0c5b-4810-b654-ce7923221d90-serving-cert\") on node \"crc\" DevicePath \"\""
Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.947779 5116 reconciler_common.go:299] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/d565531a-ff86-4608-9d19-767de01ac31b-auth-proxy-config\") on node \"crc\" DevicePath \"\""
Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.947791 5116 reconciler_common.go:299] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-audit-policies\") on node \"crc\" DevicePath \"\""
Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.947802 5116 reconciler_common.go:299] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/d19cb085-0c5b-4810-b654-ce7923221d90-encryption-config\") on node \"crc\" DevicePath \"\""
Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.947814 5116 reconciler_common.go:299] "Volume detached for volume \"whereabouts-flatfile-configmap\" (UniqueName: \"kubernetes.io/configmap/869851b9-7ffb-4af0-b166-1d8aa40a5f80-whereabouts-flatfile-configmap\") on node \"crc\" DevicePath \"\""
Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.947826 5116 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-pgx6b\" (UniqueName: \"kubernetes.io/projected/f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4-kube-api-access-pgx6b\") on node \"crc\" DevicePath \"\""
Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.947840 5116 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-dztfv\" (UniqueName: \"kubernetes.io/projected/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-kube-api-access-dztfv\") on node \"crc\" DevicePath \"\""
Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.947853 5116 reconciler_common.go:299] "Volume detached for volume \"certs\" (UniqueName: \"kubernetes.io/secret/593a3561-7760-45c5-8f91-5aaef7475d0f-certs\") on node \"crc\" DevicePath \"\""
Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.947864 5116 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\""
Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.947876 5116 reconciler_common.go:299] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-trusted-ca\") on node \"crc\" DevicePath \"\""
Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.947887 5116 reconciler_common.go:299] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-console-serving-cert\") on node \"crc\" DevicePath \"\""
Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.947901 5116 reconciler_common.go:299] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-oauth-serving-cert\") on node \"crc\" DevicePath \"\""
Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.947912 5116 reconciler_common.go:299] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/9e9b5059-1b3e-4067-a63d-2952cbe863af-bound-sa-token\") on node \"crc\" DevicePath \"\""
Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.947927 5116 reconciler_common.go:299] "Volume detached for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/567683bd-0efc-4f21-b076-e28559628404-etcd-ca\") on node \"crc\" DevicePath \"\""
Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.947940 5116 reconciler_common.go:299] "Volume detached for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/81e39f7b-62e4-4fc9-992a-6535ce127a02-multus-daemon-config\") on node \"crc\" DevicePath \"\""
Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.947954 5116 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-6dmhf\" (UniqueName: \"kubernetes.io/projected/736c54fe-349c-4bb9-870a-d1c1d1c03831-kube-api-access-6dmhf\") on node \"crc\" DevicePath \"\""
Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.947965 5116 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/149b3c48-e17c-4a66-a835-d86dabf6ff13-utilities\") on node \"crc\" DevicePath \"\""
Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.947977 5116 reconciler_common.go:299] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/92dfbade-90b6-4169-8c07-72cff7f2c82b-metrics-tls\") on node \"crc\" DevicePath \"\""
Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.947990 5116 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-config\") on node \"crc\" DevicePath \"\""
Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.948002 5116 reconciler_common.go:299] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/9e9b5059-1b3e-4067-a63d-2952cbe863af-installation-pull-secrets\") on node \"crc\" DevicePath \"\""
Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.948016 5116 reconciler_common.go:299] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/18f80adb-c1c3-49ba-8ee4-932c851d3897-service-ca-bundle\") on node \"crc\" DevicePath \"\""
Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.948029 5116 reconciler_common.go:299] "Volume detached for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/ce090a97-9ab6-4c40-a719-64ff2acd9778-signing-key\") on node \"crc\" DevicePath \"\""
Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.948041 5116 reconciler_common.go:299] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/af33e427-6803-48c2-a76a-dd9deb7cbf9a-ovnkube-config\") on node \"crc\" DevicePath \"\""
Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.948055 5116 reconciler_common.go:299] "Volume detached for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/567683bd-0efc-4f21-b076-e28559628404-tmp-dir\") on node \"crc\" DevicePath \"\""
Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.948067 5116 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\""
Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.948079 5116 reconciler_common.go:299] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/9e9b5059-1b3e-4067-a63d-2952cbe863af-registry-tls\") on node \"crc\" DevicePath \"\""
Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.948094 5116 reconciler_common.go:299] "Volume detached for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/736c54fe-349c-4bb9-870a-d1c1d1c03831-tmp\") on node \"crc\" DevicePath \"\""
Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.948106 5116 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/567683bd-0efc-4f21-b076-e28559628404-config\") on node \"crc\" DevicePath \"\""
Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.948118 5116 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cc85e424-18b2-4924-920b-bd291a8c4b01-catalog-content\") on node \"crc\" DevicePath \"\""
Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.948130 5116 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c5f2bfad-70f6-4185-a3d9-81ce12720767-serving-cert\") on node \"crc\" DevicePath \"\""
Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.948144 5116 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f559dfa3-3917-43a2-97f6-61ddfda10e93-serving-cert\") on node \"crc\" DevicePath \"\""
Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.948156 5116 reconciler_common.go:299] "Volume detached for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/92dfbade-90b6-4169-8c07-72cff7f2c82b-tmp-dir\") on node \"crc\" DevicePath \"\""
Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.948168 5116 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-wbmqg\" (UniqueName: \"kubernetes.io/projected/18f80adb-c1c3-49ba-8ee4-932c851d3897-kube-api-access-wbmqg\") on node \"crc\" DevicePath \"\""
Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.948180 5116 reconciler_common.go:299] "Volume detached for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/c491984c-7d4b-44aa-8c1e-d7974424fa47-machine-api-operator-tls\") on node \"crc\" DevicePath \"\""
Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.948210 5116 reconciler_common.go:299] "Volume detached for volume
\"kube-api-access-d4tqq\" (UniqueName: \"kubernetes.io/projected/6ee8fbd3-1f81-4666-96da-5afc70819f1a-kube-api-access-d4tqq\") on node \"crc\" DevicePath \"\"" Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.948223 5116 reconciler_common.go:299] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/af33e427-6803-48c2-a76a-dd9deb7cbf9a-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.948235 5116 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/149b3c48-e17c-4a66-a835-d86dabf6ff13-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.948267 5116 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09cfa50b-4138-4585-a53e-64dd3ab73335-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.948279 5116 reconciler_common.go:299] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.948291 5116 reconciler_common.go:299] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/d19cb085-0c5b-4810-b654-ce7923221d90-etcd-client\") on node \"crc\" DevicePath \"\"" Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.948304 5116 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-pllx6\" (UniqueName: \"kubernetes.io/projected/81e39f7b-62e4-4fc9-992a-6535ce127a02-kube-api-access-pllx6\") on node \"crc\" DevicePath \"\"" Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.948316 5116 reconciler_common.go:299] "Volume detached for volume \"proxy-tls\" (UniqueName: 
\"kubernetes.io/secret/b4750666-1362-4001-abd0-6f89964cc621-proxy-tls\") on node \"crc\" DevicePath \"\"" Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.948354 5116 reconciler_common.go:299] "Volume detached for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/a208c9c2-333b-4b4a-be0d-bc32ec38a821-package-server-manager-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.948368 5116 reconciler_common.go:299] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/af33e427-6803-48c2-a76a-dd9deb7cbf9a-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.948381 5116 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-9vsz9\" (UniqueName: \"kubernetes.io/projected/c491984c-7d4b-44aa-8c1e-d7974424fa47-kube-api-access-9vsz9\") on node \"crc\" DevicePath \"\"" Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.948394 5116 reconciler_common.go:299] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/d565531a-ff86-4608-9d19-767de01ac31b-proxy-tls\") on node \"crc\" DevicePath \"\"" Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.948408 5116 reconciler_common.go:299] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.948684 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/84b46b92-c78c-44c8-a27b-4a20c47acd75-cni-binary-copy\") pod \"multus-8wqqf\" (UID: \"84b46b92-c78c-44c8-a27b-4a20c47acd75\") " pod="openshift-multus/multus-8wqqf" Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.948720 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/secret/f65c0ac1-8bca-454d-a2e6-e35cb418beac-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "f65c0ac1-8bca-454d-a2e6-e35cb418beac" (UID: "f65c0ac1-8bca-454d-a2e6-e35cb418beac"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.948838 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0effdbcf-dd7d-404d-9d48-77536d665a5d-kube-api-access-mfzkj" (OuterVolumeSpecName: "kube-api-access-mfzkj") pod "0effdbcf-dd7d-404d-9d48-77536d665a5d" (UID: "0effdbcf-dd7d-404d-9d48-77536d665a5d"). InnerVolumeSpecName "kube-api-access-mfzkj". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.949677 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/af33e427-6803-48c2-a76a-dd9deb7cbf9a-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "af33e427-6803-48c2-a76a-dd9deb7cbf9a" (UID: "af33e427-6803-48c2-a76a-dd9deb7cbf9a"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.950699 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9f71a554-e414-4bc3-96d2-674060397afe-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "9f71a554-e414-4bc3-96d2-674060397afe" (UID: "9f71a554-e414-4bc3-96d2-674060397afe"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.950888 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d565531a-ff86-4608-9d19-767de01ac31b-kube-api-access-99zj9" (OuterVolumeSpecName: "kube-api-access-99zj9") pod "d565531a-ff86-4608-9d19-767de01ac31b" (UID: "d565531a-ff86-4608-9d19-767de01ac31b"). 
InnerVolumeSpecName "kube-api-access-99zj9". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.951275 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2325ffef-9d5b-447f-b00e-3efc429acefe-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "2325ffef-9d5b-447f-b00e-3efc429acefe" (UID: "2325ffef-9d5b-447f-b00e-3efc429acefe"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.951566 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e1d2a42d-af1d-4054-9618-ab545e0ed8b7-kube-api-access-9z4sw" (OuterVolumeSpecName: "kube-api-access-9z4sw") pod "e1d2a42d-af1d-4054-9618-ab545e0ed8b7" (UID: "e1d2a42d-af1d-4054-9618-ab545e0ed8b7"). InnerVolumeSpecName "kube-api-access-9z4sw". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.951649 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e1d2a42d-af1d-4054-9618-ab545e0ed8b7-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "e1d2a42d-af1d-4054-9618-ab545e0ed8b7" (UID: "e1d2a42d-af1d-4054-9618-ab545e0ed8b7"). InnerVolumeSpecName "proxy-tls". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.951757 5116 status_manager.go:919] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"890a83de-002d-49ac-9bcd-c3c2789f3d8c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:42:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:42:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:42:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:42:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:42:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://3e46e220f9815e7df4df57b514f2fb4af572450909f0660c53ba1e6ce4fe6184\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T17:42:13Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[655
34],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://c9c539925081b7d7490d696aa00ab3e03458779194511381300358de9c8f210e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c9c539925081b7d7490d696aa00ab3e03458779194511381300358de9c8f210e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-08T17:42:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-08T17:42:11Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T17:42:10Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.951878 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/584e1f4a-8205-47d7-8efb-3afc6017c4c9-utilities" (OuterVolumeSpecName: "utilities") pod "584e1f4a-8205-47d7-8efb-3afc6017c4c9" (UID: "584e1f4a-8205-47d7-8efb-3afc6017c4c9"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.952098 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b605f283-6f2e-42da-a838-54421690f7d0-kube-api-access-6rmnv" (OuterVolumeSpecName: "kube-api-access-6rmnv") pod "b605f283-6f2e-42da-a838-54421690f7d0" (UID: "b605f283-6f2e-42da-a838-54421690f7d0"). InnerVolumeSpecName "kube-api-access-6rmnv". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.954391 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c491984c-7d4b-44aa-8c1e-d7974424fa47-images" (OuterVolumeSpecName: "images") pod "c491984c-7d4b-44aa-8c1e-d7974424fa47" (UID: "c491984c-7d4b-44aa-8c1e-d7974424fa47"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.954508 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a555ff2e-0be6-46d5-897d-863bb92ae2b3-config" (OuterVolumeSpecName: "config") pod "a555ff2e-0be6-46d5-897d-863bb92ae2b3" (UID: "a555ff2e-0be6-46d5-897d-863bb92ae2b3"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.958992 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-btkvm\" (UniqueName: \"kubernetes.io/projected/84b46b92-c78c-44c8-a27b-4a20c47acd75-kube-api-access-btkvm\") pod \"multus-8wqqf\" (UID: \"84b46b92-c78c-44c8-a27b-4a20c47acd75\") " pod="openshift-multus/multus-8wqqf" Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.959667 5116 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.960469 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9" (UID: "dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.960585 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9e9b5059-1b3e-4067-a63d-2952cbe863af-kube-api-access-ws8zz" (OuterVolumeSpecName: "kube-api-access-ws8zz") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af"). InnerVolumeSpecName "kube-api-access-ws8zz". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.960585 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c5f2bfad-70f6-4185-a3d9-81ce12720767-config" (OuterVolumeSpecName: "config") pod "c5f2bfad-70f6-4185-a3d9-81ce12720767" (UID: "c5f2bfad-70f6-4185-a3d9-81ce12720767"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.963237 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0dd0fbac-8c0d-4228-8faa-abbeedabf7db-kube-api-access-q4smf" (OuterVolumeSpecName: "kube-api-access-q4smf") pod "0dd0fbac-8c0d-4228-8faa-abbeedabf7db" (UID: "0dd0fbac-8c0d-4228-8faa-abbeedabf7db"). InnerVolumeSpecName "kube-api-access-q4smf". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.963564 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0dd0fbac-8c0d-4228-8faa-abbeedabf7db-webhook-certs" (OuterVolumeSpecName: "webhook-certs") pod "0dd0fbac-8c0d-4228-8faa-abbeedabf7db" (UID: "0dd0fbac-8c0d-4228-8faa-abbeedabf7db"). InnerVolumeSpecName "webhook-certs". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.963980 5116 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-dgvkt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fc4541ce-7789-4670-bc75-5c2868e52ce0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-08T17:43:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T17:43:32Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook 
approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-dgvkt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.964472 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.964528 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.964546 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.964566 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.964581 5116 setters.go:618] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T17:43:32Z","lastTransitionTime":"2025-12-08T17:43:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.965266 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01080b46-74f1-4191-8755-5152a57b3b25-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "01080b46-74f1-4191-8755-5152a57b3b25" (UID: "01080b46-74f1-4191-8755-5152a57b3b25"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.965668 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/16bdd140-dce1-464c-ab47-dd5798d1d256-available-featuregates" (OuterVolumeSpecName: "available-featuregates") pod "16bdd140-dce1-464c-ab47-dd5798d1d256" (UID: "16bdd140-dce1-464c-ab47-dd5798d1d256"). InnerVolumeSpecName "available-featuregates". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.965766 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c491984c-7d4b-44aa-8c1e-d7974424fa47-config" (OuterVolumeSpecName: "config") pod "c491984c-7d4b-44aa-8c1e-d7974424fa47" (UID: "c491984c-7d4b-44aa-8c1e-d7974424fa47"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.966037 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9" (UID: "dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.967033 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a-utilities" (OuterVolumeSpecName: "utilities") pod "71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a" (UID: "71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.967348 5116 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-dgvkt" Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.967474 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/869851b9-7ffb-4af0-b166-1d8aa40a5f80-cni-sysctl-allowlist" (OuterVolumeSpecName: "cni-sysctl-allowlist") pod "869851b9-7ffb-4af0-b166-1d8aa40a5f80" (UID: "869851b9-7ffb-4af0-b166-1d8aa40a5f80"). InnerVolumeSpecName "cni-sysctl-allowlist". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.967792 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/01080b46-74f1-4191-8755-5152a57b3b25-config" (OuterVolumeSpecName: "config") pod "01080b46-74f1-4191-8755-5152a57b3b25" (UID: "01080b46-74f1-4191-8755-5152a57b3b25"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.969546 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-hv9rs\" (UniqueName: \"kubernetes.io/projected/4f09ae7f-7717-4477-b860-d6bc280c6fd6-kube-api-access-hv9rs\") pod \"multus-additional-cni-plugins-p56xf\" (UID: \"4f09ae7f-7717-4477-b860-d6bc280c6fd6\") " pod="openshift-multus/multus-additional-cni-plugins-p56xf" Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.977218 5116 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-5jnd7" Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.980384 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/94a6e063-3d1a-4d44-875d-185291448c31-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "94a6e063-3d1a-4d44-875d-185291448c31" (UID: "94a6e063-3d1a-4d44-875d-185291448c31"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.981275 5116 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-p56xf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4f09ae7f-7717-4477-b860-d6bc280c6fd6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:43:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:43:32Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:43:32Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:43:32Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hv9rs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hv9rs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\"
:false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hv9rs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6efa070ceb93cc5fc2e76eab6d9c96ac3c4f8812085d0b6eb6e3f513b5bac782\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hv9rs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3454e762466e22e2a893650b9781823558bc6fdfda2aa4188aff3cb819014c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\
\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hv9rs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hv9rs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/etc/whereabouts/config\\\",\\\"
name\\\":\\\"whereabouts-flatfile-configmap\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hv9rs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T17:43:32Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-p56xf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.991090 5116 status_manager.go:919] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-frh5r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f2e88345-fa91-4bb3-bd9d-a89a8293bffe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:43:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:43:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:43:32Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:43:32Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lbpbq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lbpbq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T17:43:32Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-frh5r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.993051 5116 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-p56xf" Dec 08 17:43:32 crc kubenswrapper[5116]: I1208 17:43:32.996921 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9e9b5059-1b3e-4067-a63d-2952cbe863af-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af"). InnerVolumeSpecName "ca-trust-extracted". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 08 17:43:33 crc kubenswrapper[5116]: I1208 17:43:33.003023 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a" (UID: "71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 08 17:43:33 crc kubenswrapper[5116]: I1208 17:43:33.007197 5116 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-8wqqf" Dec 08 17:43:33 crc kubenswrapper[5116]: I1208 17:43:33.012402 5116 status_manager.go:919] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a2ed783b-2e6e-4218-b6ec-4a36f2cebb4c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:42:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:42:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:42:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:42:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:42:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"300m\\\",\\\"memory\\\":\\\"600Mi\\\"},\\\"containerID\\\":\\\"cri-o://54d79ff7eec6f4ead3294642b73b205817ba6fecb95ebbfe18dc837032da4190\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"300m\\\",\\\"memory\\\":\\\"600Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T17:42:15Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"
mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"40m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://77855c3d2ec7900a079f960c0fb121f1cf87da7a90d1b3e563cf15d5db2ad29f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"40m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T17:42:15Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://519be5a3f70ddad33a701b7283712b064c2eeda9e71e6e98a33cd934edbdbefe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev
@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T17:42:15Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://58ae272ceccab459d261709944f9cce6bc15753c8afbe5a7d76acf41c0dc07ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T17:42:15Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://12553f525990c8f10d5357f2d06f3e8a9a83d324a2629d8772ec479cfe410c4f\\\",\\\"image\\\":\\\"qu
ay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T17:42:14Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://f6461331454a1612dae010fcda3b49f7c7ae256bc2b784c21063bc4f31e4bd5a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f6461331454a1612dae010fcda3b49f7c7ae256bc2b784c21063bc4f31e4bd5
a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-08T17:42:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-08T17:42:12Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd-auto-backup\\\",\\\"name\\\":\\\"etcd-auto-backup-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://490524dfacd789a249b0f07c13ad745790870af9c8d7579ce12ad7b9678c8d33\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://490524dfacd789a249b0f07c13ad745790870af9c8d7579ce12ad7b9678c8d33\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-08T17:42:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-08T17:42:13Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://46910f8133758763cbecc09f8b15ef4116e4c7931efbcddf33175d59e1d98007\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\
\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://46910f8133758763cbecc09f8b15ef4116e4c7931efbcddf33175d59e1d98007\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-08T17:42:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-08T17:42:13Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T17:42:10Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 17:43:33 crc kubenswrapper[5116]: W1208 17:43:33.025263 5116 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod84b46b92_c78c_44c8_a27b_4a20c47acd75.slice/crio-880aa3a662d7a6d21b65195e1c4561dfc21d71aee68b84357a78d1579cdafc7b WatchSource:0}: Error finding container 880aa3a662d7a6d21b65195e1c4561dfc21d71aee68b84357a78d1579cdafc7b: Status 404 returned error can't find the container with id 880aa3a662d7a6d21b65195e1c4561dfc21d71aee68b84357a78d1579cdafc7b Dec 08 17:43:33 crc kubenswrapper[5116]: I1208 17:43:33.048758 5116 reconciler_common.go:224] 
"operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/17cf2230-8798-4fb0-b89b-43901121fd07-etc-openvswitch\") pod \"ovnkube-node-zm56h\" (UID: \"17cf2230-8798-4fb0-b89b-43901121fd07\") " pod="openshift-ovn-kubernetes/ovnkube-node-zm56h" Dec 08 17:43:33 crc kubenswrapper[5116]: I1208 17:43:33.048866 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/17cf2230-8798-4fb0-b89b-43901121fd07-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-zm56h\" (UID: \"17cf2230-8798-4fb0-b89b-43901121fd07\") " pod="openshift-ovn-kubernetes/ovnkube-node-zm56h" Dec 08 17:43:33 crc kubenswrapper[5116]: I1208 17:43:33.048888 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/17cf2230-8798-4fb0-b89b-43901121fd07-etc-openvswitch\") pod \"ovnkube-node-zm56h\" (UID: \"17cf2230-8798-4fb0-b89b-43901121fd07\") " pod="openshift-ovn-kubernetes/ovnkube-node-zm56h" Dec 08 17:43:33 crc kubenswrapper[5116]: I1208 17:43:33.048921 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/17cf2230-8798-4fb0-b89b-43901121fd07-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-zm56h\" (UID: \"17cf2230-8798-4fb0-b89b-43901121fd07\") " pod="openshift-ovn-kubernetes/ovnkube-node-zm56h" Dec 08 17:43:33 crc kubenswrapper[5116]: I1208 17:43:33.048897 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/e9b8c7c0-e0b8-44ea-adc9-41342c754061-tmp-dir\") pod \"node-resolver-5phkw\" (UID: \"e9b8c7c0-e0b8-44ea-adc9-41342c754061\") " pod="openshift-dns/node-resolver-5phkw" Dec 08 17:43:33 crc kubenswrapper[5116]: I1208 17:43:33.048976 5116 reconciler_common.go:224] 
"operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/17cf2230-8798-4fb0-b89b-43901121fd07-host-cni-netd\") pod \"ovnkube-node-zm56h\" (UID: \"17cf2230-8798-4fb0-b89b-43901121fd07\") " pod="openshift-ovn-kubernetes/ovnkube-node-zm56h" Dec 08 17:43:33 crc kubenswrapper[5116]: I1208 17:43:33.048996 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/17cf2230-8798-4fb0-b89b-43901121fd07-ovnkube-config\") pod \"ovnkube-node-zm56h\" (UID: \"17cf2230-8798-4fb0-b89b-43901121fd07\") " pod="openshift-ovn-kubernetes/ovnkube-node-zm56h" Dec 08 17:43:33 crc kubenswrapper[5116]: I1208 17:43:33.049033 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/2e7c2199-9693-42b9-9431-2b12b5abe1d1-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-57b78d8988-47dgv\" (UID: \"2e7c2199-9693-42b9-9431-2b12b5abe1d1\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-47dgv" Dec 08 17:43:33 crc kubenswrapper[5116]: I1208 17:43:33.049042 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/17cf2230-8798-4fb0-b89b-43901121fd07-host-cni-netd\") pod \"ovnkube-node-zm56h\" (UID: \"17cf2230-8798-4fb0-b89b-43901121fd07\") " pod="openshift-ovn-kubernetes/ovnkube-node-zm56h" Dec 08 17:43:33 crc kubenswrapper[5116]: I1208 17:43:33.049064 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-kflvf\" (UniqueName: \"kubernetes.io/projected/51c251ea-ee75-4ef8-be21-e45ffbd0c2b3-kube-api-access-kflvf\") pod \"node-ca-ps59m\" (UID: \"51c251ea-ee75-4ef8-be21-e45ffbd0c2b3\") " pod="openshift-image-registry/node-ca-ps59m" Dec 08 17:43:33 crc kubenswrapper[5116]: I1208 17:43:33.049086 5116 reconciler_common.go:224] 
"operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/17cf2230-8798-4fb0-b89b-43901121fd07-node-log\") pod \"ovnkube-node-zm56h\" (UID: \"17cf2230-8798-4fb0-b89b-43901121fd07\") " pod="openshift-ovn-kubernetes/ovnkube-node-zm56h" Dec 08 17:43:33 crc kubenswrapper[5116]: I1208 17:43:33.049110 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/17cf2230-8798-4fb0-b89b-43901121fd07-ovnkube-script-lib\") pod \"ovnkube-node-zm56h\" (UID: \"17cf2230-8798-4fb0-b89b-43901121fd07\") " pod="openshift-ovn-kubernetes/ovnkube-node-zm56h" Dec 08 17:43:33 crc kubenswrapper[5116]: I1208 17:43:33.049137 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/2e7c2199-9693-42b9-9431-2b12b5abe1d1-ovnkube-config\") pod \"ovnkube-control-plane-57b78d8988-47dgv\" (UID: \"2e7c2199-9693-42b9-9431-2b12b5abe1d1\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-47dgv" Dec 08 17:43:33 crc kubenswrapper[5116]: I1208 17:43:33.049163 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-8qx55\" (UniqueName: \"kubernetes.io/projected/e9b8c7c0-e0b8-44ea-adc9-41342c754061-kube-api-access-8qx55\") pod \"node-resolver-5phkw\" (UID: \"e9b8c7c0-e0b8-44ea-adc9-41342c754061\") " pod="openshift-dns/node-resolver-5phkw" Dec 08 17:43:33 crc kubenswrapper[5116]: I1208 17:43:33.049178 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/e9b8c7c0-e0b8-44ea-adc9-41342c754061-tmp-dir\") pod \"node-resolver-5phkw\" (UID: \"e9b8c7c0-e0b8-44ea-adc9-41342c754061\") " pod="openshift-dns/node-resolver-5phkw" Dec 08 17:43:33 crc kubenswrapper[5116]: I1208 17:43:33.049185 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"systemd-units\" 
(UniqueName: \"kubernetes.io/host-path/17cf2230-8798-4fb0-b89b-43901121fd07-systemd-units\") pod \"ovnkube-node-zm56h\" (UID: \"17cf2230-8798-4fb0-b89b-43901121fd07\") " pod="openshift-ovn-kubernetes/ovnkube-node-zm56h" Dec 08 17:43:33 crc kubenswrapper[5116]: I1208 17:43:33.049213 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-8s9wf\" (UniqueName: \"kubernetes.io/projected/17cf2230-8798-4fb0-b89b-43901121fd07-kube-api-access-8s9wf\") pod \"ovnkube-node-zm56h\" (UID: \"17cf2230-8798-4fb0-b89b-43901121fd07\") " pod="openshift-ovn-kubernetes/ovnkube-node-zm56h" Dec 08 17:43:33 crc kubenswrapper[5116]: I1208 17:43:33.049298 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/e9b8c7c0-e0b8-44ea-adc9-41342c754061-hosts-file\") pod \"node-resolver-5phkw\" (UID: \"e9b8c7c0-e0b8-44ea-adc9-41342c754061\") " pod="openshift-dns/node-resolver-5phkw" Dec 08 17:43:33 crc kubenswrapper[5116]: I1208 17:43:33.049325 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/17cf2230-8798-4fb0-b89b-43901121fd07-host-kubelet\") pod \"ovnkube-node-zm56h\" (UID: \"17cf2230-8798-4fb0-b89b-43901121fd07\") " pod="openshift-ovn-kubernetes/ovnkube-node-zm56h" Dec 08 17:43:33 crc kubenswrapper[5116]: I1208 17:43:33.049350 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/2e7c2199-9693-42b9-9431-2b12b5abe1d1-env-overrides\") pod \"ovnkube-control-plane-57b78d8988-47dgv\" (UID: \"2e7c2199-9693-42b9-9431-2b12b5abe1d1\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-47dgv" Dec 08 17:43:33 crc kubenswrapper[5116]: I1208 17:43:33.049391 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-2fk87\" (UniqueName: 
\"kubernetes.io/projected/19151390-7d67-4ae9-8520-ae20b8eb46f8-kube-api-access-2fk87\") pod \"network-metrics-daemon-5ft89\" (UID: \"19151390-7d67-4ae9-8520-ae20b8eb46f8\") " pod="openshift-multus/network-metrics-daemon-5ft89" Dec 08 17:43:33 crc kubenswrapper[5116]: I1208 17:43:33.049449 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/17cf2230-8798-4fb0-b89b-43901121fd07-run-systemd\") pod \"ovnkube-node-zm56h\" (UID: \"17cf2230-8798-4fb0-b89b-43901121fd07\") " pod="openshift-ovn-kubernetes/ovnkube-node-zm56h" Dec 08 17:43:33 crc kubenswrapper[5116]: I1208 17:43:33.049473 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/51c251ea-ee75-4ef8-be21-e45ffbd0c2b3-host\") pod \"node-ca-ps59m\" (UID: \"51c251ea-ee75-4ef8-be21-e45ffbd0c2b3\") " pod="openshift-image-registry/node-ca-ps59m" Dec 08 17:43:33 crc kubenswrapper[5116]: I1208 17:43:33.049494 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/17cf2230-8798-4fb0-b89b-43901121fd07-env-overrides\") pod \"ovnkube-node-zm56h\" (UID: \"17cf2230-8798-4fb0-b89b-43901121fd07\") " pod="openshift-ovn-kubernetes/ovnkube-node-zm56h" Dec 08 17:43:33 crc kubenswrapper[5116]: I1208 17:43:33.049518 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/f2e88345-fa91-4bb3-bd9d-a89a8293bffe-mcd-auth-proxy-config\") pod \"machine-config-daemon-frh5r\" (UID: \"f2e88345-fa91-4bb3-bd9d-a89a8293bffe\") " pod="openshift-machine-config-operator/machine-config-daemon-frh5r" Dec 08 17:43:33 crc kubenswrapper[5116]: I1208 17:43:33.049548 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: 
\"kubernetes.io/host-path/17cf2230-8798-4fb0-b89b-43901121fd07-host-slash\") pod \"ovnkube-node-zm56h\" (UID: \"17cf2230-8798-4fb0-b89b-43901121fd07\") " pod="openshift-ovn-kubernetes/ovnkube-node-zm56h" Dec 08 17:43:33 crc kubenswrapper[5116]: I1208 17:43:33.049576 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/17cf2230-8798-4fb0-b89b-43901121fd07-log-socket\") pod \"ovnkube-node-zm56h\" (UID: \"17cf2230-8798-4fb0-b89b-43901121fd07\") " pod="openshift-ovn-kubernetes/ovnkube-node-zm56h" Dec 08 17:43:33 crc kubenswrapper[5116]: I1208 17:43:33.049598 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/17cf2230-8798-4fb0-b89b-43901121fd07-host-cni-bin\") pod \"ovnkube-node-zm56h\" (UID: \"17cf2230-8798-4fb0-b89b-43901121fd07\") " pod="openshift-ovn-kubernetes/ovnkube-node-zm56h" Dec 08 17:43:33 crc kubenswrapper[5116]: I1208 17:43:33.049621 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/f2e88345-fa91-4bb3-bd9d-a89a8293bffe-rootfs\") pod \"machine-config-daemon-frh5r\" (UID: \"f2e88345-fa91-4bb3-bd9d-a89a8293bffe\") " pod="openshift-machine-config-operator/machine-config-daemon-frh5r" Dec 08 17:43:33 crc kubenswrapper[5116]: I1208 17:43:33.049626 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/17cf2230-8798-4fb0-b89b-43901121fd07-node-log\") pod \"ovnkube-node-zm56h\" (UID: \"17cf2230-8798-4fb0-b89b-43901121fd07\") " pod="openshift-ovn-kubernetes/ovnkube-node-zm56h" Dec 08 17:43:33 crc kubenswrapper[5116]: I1208 17:43:33.049641 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-lbpbq\" (UniqueName: \"kubernetes.io/projected/f2e88345-fa91-4bb3-bd9d-a89a8293bffe-kube-api-access-lbpbq\") pod 
\"machine-config-daemon-frh5r\" (UID: \"f2e88345-fa91-4bb3-bd9d-a89a8293bffe\") " pod="openshift-machine-config-operator/machine-config-daemon-frh5r" Dec 08 17:43:33 crc kubenswrapper[5116]: I1208 17:43:33.049667 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/51c251ea-ee75-4ef8-be21-e45ffbd0c2b3-serviceca\") pod \"node-ca-ps59m\" (UID: \"51c251ea-ee75-4ef8-be21-e45ffbd0c2b3\") " pod="openshift-image-registry/node-ca-ps59m" Dec 08 17:43:33 crc kubenswrapper[5116]: I1208 17:43:33.049688 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/17cf2230-8798-4fb0-b89b-43901121fd07-var-lib-openvswitch\") pod \"ovnkube-node-zm56h\" (UID: \"17cf2230-8798-4fb0-b89b-43901121fd07\") " pod="openshift-ovn-kubernetes/ovnkube-node-zm56h" Dec 08 17:43:33 crc kubenswrapper[5116]: I1208 17:43:33.049712 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/17cf2230-8798-4fb0-b89b-43901121fd07-run-ovn\") pod \"ovnkube-node-zm56h\" (UID: \"17cf2230-8798-4fb0-b89b-43901121fd07\") " pod="openshift-ovn-kubernetes/ovnkube-node-zm56h" Dec 08 17:43:33 crc kubenswrapper[5116]: I1208 17:43:33.049734 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/f2e88345-fa91-4bb3-bd9d-a89a8293bffe-proxy-tls\") pod \"machine-config-daemon-frh5r\" (UID: \"f2e88345-fa91-4bb3-bd9d-a89a8293bffe\") " pod="openshift-machine-config-operator/machine-config-daemon-frh5r" Dec 08 17:43:33 crc kubenswrapper[5116]: I1208 17:43:33.049757 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-vmvhd\" (UniqueName: \"kubernetes.io/projected/2e7c2199-9693-42b9-9431-2b12b5abe1d1-kube-api-access-vmvhd\") pod \"ovnkube-control-plane-57b78d8988-47dgv\" (UID: 
\"2e7c2199-9693-42b9-9431-2b12b5abe1d1\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-47dgv" Dec 08 17:43:33 crc kubenswrapper[5116]: I1208 17:43:33.049796 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/19151390-7d67-4ae9-8520-ae20b8eb46f8-metrics-certs\") pod \"network-metrics-daemon-5ft89\" (UID: \"19151390-7d67-4ae9-8520-ae20b8eb46f8\") " pod="openshift-multus/network-metrics-daemon-5ft89" Dec 08 17:43:33 crc kubenswrapper[5116]: I1208 17:43:33.049821 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/17cf2230-8798-4fb0-b89b-43901121fd07-host-run-netns\") pod \"ovnkube-node-zm56h\" (UID: \"17cf2230-8798-4fb0-b89b-43901121fd07\") " pod="openshift-ovn-kubernetes/ovnkube-node-zm56h" Dec 08 17:43:33 crc kubenswrapper[5116]: I1208 17:43:33.049842 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/17cf2230-8798-4fb0-b89b-43901121fd07-run-openvswitch\") pod \"ovnkube-node-zm56h\" (UID: \"17cf2230-8798-4fb0-b89b-43901121fd07\") " pod="openshift-ovn-kubernetes/ovnkube-node-zm56h" Dec 08 17:43:33 crc kubenswrapper[5116]: I1208 17:43:33.049864 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/17cf2230-8798-4fb0-b89b-43901121fd07-ovnkube-script-lib\") pod \"ovnkube-node-zm56h\" (UID: \"17cf2230-8798-4fb0-b89b-43901121fd07\") " pod="openshift-ovn-kubernetes/ovnkube-node-zm56h" Dec 08 17:43:33 crc kubenswrapper[5116]: I1208 17:43:33.049876 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/17cf2230-8798-4fb0-b89b-43901121fd07-ovn-node-metrics-cert\") pod \"ovnkube-node-zm56h\" (UID: 
\"17cf2230-8798-4fb0-b89b-43901121fd07\") " pod="openshift-ovn-kubernetes/ovnkube-node-zm56h" Dec 08 17:43:33 crc kubenswrapper[5116]: I1208 17:43:33.049915 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/17cf2230-8798-4fb0-b89b-43901121fd07-host-run-ovn-kubernetes\") pod \"ovnkube-node-zm56h\" (UID: \"17cf2230-8798-4fb0-b89b-43901121fd07\") " pod="openshift-ovn-kubernetes/ovnkube-node-zm56h" Dec 08 17:43:33 crc kubenswrapper[5116]: I1208 17:43:33.049964 5116 reconciler_common.go:299] "Volume detached for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/0dd0fbac-8c0d-4228-8faa-abbeedabf7db-webhook-certs\") on node \"crc\" DevicePath \"\"" Dec 08 17:43:33 crc kubenswrapper[5116]: I1208 17:43:33.049978 5116 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/94a6e063-3d1a-4d44-875d-185291448c31-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 08 17:43:33 crc kubenswrapper[5116]: I1208 17:43:33.049992 5116 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c491984c-7d4b-44aa-8c1e-d7974424fa47-config\") on node \"crc\" DevicePath \"\"" Dec 08 17:43:33 crc kubenswrapper[5116]: I1208 17:43:33.050003 5116 reconciler_common.go:299] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/9e9b5059-1b3e-4067-a63d-2952cbe863af-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Dec 08 17:43:33 crc kubenswrapper[5116]: I1208 17:43:33.050013 5116 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a555ff2e-0be6-46d5-897d-863bb92ae2b3-config\") on node \"crc\" DevicePath \"\"" Dec 08 17:43:33 crc kubenswrapper[5116]: I1208 17:43:33.050022 5116 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/01080b46-74f1-4191-8755-5152a57b3b25-config\") on node \"crc\" DevicePath \"\"" Dec 08 17:43:33 crc kubenswrapper[5116]: I1208 17:43:33.050031 5116 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01080b46-74f1-4191-8755-5152a57b3b25-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 08 17:43:33 crc kubenswrapper[5116]: I1208 17:43:33.050041 5116 reconciler_common.go:299] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 08 17:43:33 crc kubenswrapper[5116]: I1208 17:43:33.050045 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/17cf2230-8798-4fb0-b89b-43901121fd07-run-systemd\") pod \"ovnkube-node-zm56h\" (UID: \"17cf2230-8798-4fb0-b89b-43901121fd07\") " pod="openshift-ovn-kubernetes/ovnkube-node-zm56h" Dec 08 17:43:33 crc kubenswrapper[5116]: I1208 17:43:33.050049 5116 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/31fa8943-81cc-4750-a0b7-0fa9ab5af883-utilities\") on node \"crc\" DevicePath \"\"" Dec 08 17:43:33 crc kubenswrapper[5116]: I1208 17:43:33.050072 5116 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 08 17:43:33 crc kubenswrapper[5116]: I1208 17:43:33.050075 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/17cf2230-8798-4fb0-b89b-43901121fd07-host-run-ovn-kubernetes\") pod \"ovnkube-node-zm56h\" (UID: \"17cf2230-8798-4fb0-b89b-43901121fd07\") " pod="openshift-ovn-kubernetes/ovnkube-node-zm56h" Dec 08 17:43:33 crc kubenswrapper[5116]: I1208 17:43:33.050083 5116 
reconciler_common.go:299] "Volume detached for volume \"kube-api-access-mfzkj\" (UniqueName: \"kubernetes.io/projected/0effdbcf-dd7d-404d-9d48-77536d665a5d-kube-api-access-mfzkj\") on node \"crc\" DevicePath \"\"" Dec 08 17:43:33 crc kubenswrapper[5116]: I1208 17:43:33.050258 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/2e7c2199-9693-42b9-9431-2b12b5abe1d1-ovnkube-config\") pod \"ovnkube-control-plane-57b78d8988-47dgv\" (UID: \"2e7c2199-9693-42b9-9431-2b12b5abe1d1\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-47dgv" Dec 08 17:43:33 crc kubenswrapper[5116]: I1208 17:43:33.050285 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/17cf2230-8798-4fb0-b89b-43901121fd07-var-lib-openvswitch\") pod \"ovnkube-node-zm56h\" (UID: \"17cf2230-8798-4fb0-b89b-43901121fd07\") " pod="openshift-ovn-kubernetes/ovnkube-node-zm56h" Dec 08 17:43:33 crc kubenswrapper[5116]: I1208 17:43:33.050295 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/17cf2230-8798-4fb0-b89b-43901121fd07-host-slash\") pod \"ovnkube-node-zm56h\" (UID: \"17cf2230-8798-4fb0-b89b-43901121fd07\") " pod="openshift-ovn-kubernetes/ovnkube-node-zm56h" Dec 08 17:43:33 crc kubenswrapper[5116]: I1208 17:43:33.050308 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/17cf2230-8798-4fb0-b89b-43901121fd07-log-socket\") pod \"ovnkube-node-zm56h\" (UID: \"17cf2230-8798-4fb0-b89b-43901121fd07\") " pod="openshift-ovn-kubernetes/ovnkube-node-zm56h" Dec 08 17:43:33 crc kubenswrapper[5116]: I1208 17:43:33.050345 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/51c251ea-ee75-4ef8-be21-e45ffbd0c2b3-host\") pod 
\"node-ca-ps59m\" (UID: \"51c251ea-ee75-4ef8-be21-e45ffbd0c2b3\") " pod="openshift-image-registry/node-ca-ps59m" Dec 08 17:43:33 crc kubenswrapper[5116]: I1208 17:43:33.050352 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/17cf2230-8798-4fb0-b89b-43901121fd07-host-cni-bin\") pod \"ovnkube-node-zm56h\" (UID: \"17cf2230-8798-4fb0-b89b-43901121fd07\") " pod="openshift-ovn-kubernetes/ovnkube-node-zm56h" Dec 08 17:43:33 crc kubenswrapper[5116]: I1208 17:43:33.050392 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/f2e88345-fa91-4bb3-bd9d-a89a8293bffe-rootfs\") pod \"machine-config-daemon-frh5r\" (UID: \"f2e88345-fa91-4bb3-bd9d-a89a8293bffe\") " pod="openshift-machine-config-operator/machine-config-daemon-frh5r" Dec 08 17:43:33 crc kubenswrapper[5116]: I1208 17:43:33.050476 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/17cf2230-8798-4fb0-b89b-43901121fd07-ovnkube-config\") pod \"ovnkube-node-zm56h\" (UID: \"17cf2230-8798-4fb0-b89b-43901121fd07\") " pod="openshift-ovn-kubernetes/ovnkube-node-zm56h" Dec 08 17:43:33 crc kubenswrapper[5116]: I1208 17:43:33.050667 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/17cf2230-8798-4fb0-b89b-43901121fd07-env-overrides\") pod \"ovnkube-node-zm56h\" (UID: \"17cf2230-8798-4fb0-b89b-43901121fd07\") " pod="openshift-ovn-kubernetes/ovnkube-node-zm56h" Dec 08 17:43:33 crc kubenswrapper[5116]: I1208 17:43:33.050781 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/e9b8c7c0-e0b8-44ea-adc9-41342c754061-hosts-file\") pod \"node-resolver-5phkw\" (UID: \"e9b8c7c0-e0b8-44ea-adc9-41342c754061\") " pod="openshift-dns/node-resolver-5phkw" Dec 08 17:43:33 
crc kubenswrapper[5116]: I1208 17:43:33.050801 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/17cf2230-8798-4fb0-b89b-43901121fd07-run-ovn\") pod \"ovnkube-node-zm56h\" (UID: \"17cf2230-8798-4fb0-b89b-43901121fd07\") " pod="openshift-ovn-kubernetes/ovnkube-node-zm56h" Dec 08 17:43:33 crc kubenswrapper[5116]: I1208 17:43:33.051014 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/17cf2230-8798-4fb0-b89b-43901121fd07-host-run-netns\") pod \"ovnkube-node-zm56h\" (UID: \"17cf2230-8798-4fb0-b89b-43901121fd07\") " pod="openshift-ovn-kubernetes/ovnkube-node-zm56h" Dec 08 17:43:33 crc kubenswrapper[5116]: I1208 17:43:33.051179 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/17cf2230-8798-4fb0-b89b-43901121fd07-host-kubelet\") pod \"ovnkube-node-zm56h\" (UID: \"17cf2230-8798-4fb0-b89b-43901121fd07\") " pod="openshift-ovn-kubernetes/ovnkube-node-zm56h" Dec 08 17:43:33 crc kubenswrapper[5116]: E1208 17:43:33.051257 5116 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Dec 08 17:43:33 crc kubenswrapper[5116]: E1208 17:43:33.051301 5116 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/19151390-7d67-4ae9-8520-ae20b8eb46f8-metrics-certs podName:19151390-7d67-4ae9-8520-ae20b8eb46f8 nodeName:}" failed. No retries permitted until 2025-12-08 17:43:33.551289955 +0000 UTC m=+83.348413189 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/19151390-7d67-4ae9-8520-ae20b8eb46f8-metrics-certs") pod "network-metrics-daemon-5ft89" (UID: "19151390-7d67-4ae9-8520-ae20b8eb46f8") : object "openshift-multus"/"metrics-daemon-secret" not registered Dec 08 17:43:33 crc kubenswrapper[5116]: I1208 17:43:33.051429 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/17cf2230-8798-4fb0-b89b-43901121fd07-run-openvswitch\") pod \"ovnkube-node-zm56h\" (UID: \"17cf2230-8798-4fb0-b89b-43901121fd07\") " pod="openshift-ovn-kubernetes/ovnkube-node-zm56h" Dec 08 17:43:33 crc kubenswrapper[5116]: I1208 17:43:33.051453 5116 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f65c0ac1-8bca-454d-a2e6-e35cb418beac-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 08 17:43:33 crc kubenswrapper[5116]: I1208 17:43:33.051457 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/2e7c2199-9693-42b9-9431-2b12b5abe1d1-env-overrides\") pod \"ovnkube-control-plane-57b78d8988-47dgv\" (UID: \"2e7c2199-9693-42b9-9431-2b12b5abe1d1\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-47dgv" Dec 08 17:43:33 crc kubenswrapper[5116]: I1208 17:43:33.051464 5116 reconciler_common.go:299] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/2325ffef-9d5b-447f-b00e-3efc429acefe-trusted-ca\") on node \"crc\" DevicePath \"\"" Dec 08 17:43:33 crc kubenswrapper[5116]: I1208 17:43:33.051520 5116 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-6rmnv\" (UniqueName: \"kubernetes.io/projected/b605f283-6f2e-42da-a838-54421690f7d0-kube-api-access-6rmnv\") on node \"crc\" DevicePath \"\"" Dec 08 17:43:33 crc kubenswrapper[5116]: I1208 17:43:33.051534 5116 reconciler_common.go:299] "Volume 
detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a-utilities\") on node \"crc\" DevicePath \"\"" Dec 08 17:43:33 crc kubenswrapper[5116]: I1208 17:43:33.051545 5116 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 08 17:43:33 crc kubenswrapper[5116]: I1208 17:43:33.051721 5116 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-ws8zz\" (UniqueName: \"kubernetes.io/projected/9e9b5059-1b3e-4067-a63d-2952cbe863af-kube-api-access-ws8zz\") on node \"crc\" DevicePath \"\"" Dec 08 17:43:33 crc kubenswrapper[5116]: I1208 17:43:33.051735 5116 reconciler_common.go:299] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/e1d2a42d-af1d-4054-9618-ab545e0ed8b7-proxy-tls\") on node \"crc\" DevicePath \"\"" Dec 08 17:43:33 crc kubenswrapper[5116]: I1208 17:43:33.052381 5116 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-q4smf\" (UniqueName: \"kubernetes.io/projected/0dd0fbac-8c0d-4228-8faa-abbeedabf7db-kube-api-access-q4smf\") on node \"crc\" DevicePath \"\"" Dec 08 17:43:33 crc kubenswrapper[5116]: I1208 17:43:33.052402 5116 reconciler_common.go:299] "Volume detached for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-tmp\") on node \"crc\" DevicePath \"\"" Dec 08 17:43:33 crc kubenswrapper[5116]: I1208 17:43:33.052414 5116 reconciler_common.go:299] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9f71a554-e414-4bc3-96d2-674060397afe-trusted-ca\") on node \"crc\" DevicePath \"\"" Dec 08 17:43:33 crc kubenswrapper[5116]: I1208 17:43:33.052474 5116 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-9z4sw\" (UniqueName: \"kubernetes.io/projected/e1d2a42d-af1d-4054-9618-ab545e0ed8b7-kube-api-access-9z4sw\") 
on node \"crc\" DevicePath \"\"" Dec 08 17:43:33 crc kubenswrapper[5116]: I1208 17:43:33.052495 5116 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 08 17:43:33 crc kubenswrapper[5116]: I1208 17:43:33.052546 5116 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Dec 08 17:43:33 crc kubenswrapper[5116]: I1208 17:43:33.052991 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/f2e88345-fa91-4bb3-bd9d-a89a8293bffe-mcd-auth-proxy-config\") pod \"machine-config-daemon-frh5r\" (UID: \"f2e88345-fa91-4bb3-bd9d-a89a8293bffe\") " pod="openshift-machine-config-operator/machine-config-daemon-frh5r" Dec 08 17:43:33 crc kubenswrapper[5116]: I1208 17:43:33.053187 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/17cf2230-8798-4fb0-b89b-43901121fd07-systemd-units\") pod \"ovnkube-node-zm56h\" (UID: \"17cf2230-8798-4fb0-b89b-43901121fd07\") " pod="openshift-ovn-kubernetes/ovnkube-node-zm56h" Dec 08 17:43:33 crc kubenswrapper[5116]: I1208 17:43:33.053312 5116 reconciler_common.go:299] "Volume detached for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/869851b9-7ffb-4af0-b166-1d8aa40a5f80-cni-sysctl-allowlist\") on node \"crc\" DevicePath \"\"" Dec 08 17:43:33 crc kubenswrapper[5116]: I1208 17:43:33.053340 5116 reconciler_common.go:299] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/af33e427-6803-48c2-a76a-dd9deb7cbf9a-env-overrides\") on node \"crc\" DevicePath \"\"" Dec 08 17:43:33 crc kubenswrapper[5116]: 
I1208 17:43:33.053357 5116 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-99zj9\" (UniqueName: \"kubernetes.io/projected/d565531a-ff86-4608-9d19-767de01ac31b-kube-api-access-99zj9\") on node \"crc\" DevicePath \"\"" Dec 08 17:43:33 crc kubenswrapper[5116]: I1208 17:43:33.053405 5116 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7afa918d-be67-40a6-803c-d3b0ae99d815-config\") on node \"crc\" DevicePath \"\"" Dec 08 17:43:33 crc kubenswrapper[5116]: I1208 17:43:33.053421 5116 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c5f2bfad-70f6-4185-a3d9-81ce12720767-config\") on node \"crc\" DevicePath \"\"" Dec 08 17:43:33 crc kubenswrapper[5116]: I1208 17:43:33.053438 5116 reconciler_common.go:299] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/c491984c-7d4b-44aa-8c1e-d7974424fa47-images\") on node \"crc\" DevicePath \"\"" Dec 08 17:43:33 crc kubenswrapper[5116]: I1208 17:43:33.053453 5116 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/584e1f4a-8205-47d7-8efb-3afc6017c4c9-utilities\") on node \"crc\" DevicePath \"\"" Dec 08 17:43:33 crc kubenswrapper[5116]: I1208 17:43:33.053499 5116 reconciler_common.go:299] "Volume detached for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/16bdd140-dce1-464c-ab47-dd5798d1d256-available-featuregates\") on node \"crc\" DevicePath \"\"" Dec 08 17:43:33 crc kubenswrapper[5116]: I1208 17:43:33.054961 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/51c251ea-ee75-4ef8-be21-e45ffbd0c2b3-serviceca\") pod \"node-ca-ps59m\" (UID: \"51c251ea-ee75-4ef8-be21-e45ffbd0c2b3\") " pod="openshift-image-registry/node-ca-ps59m" Dec 08 17:43:33 crc kubenswrapper[5116]: I1208 17:43:33.062265 5116 operation_generator.go:615] 
"MountVolume.SetUp succeeded for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/2e7c2199-9693-42b9-9431-2b12b5abe1d1-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-57b78d8988-47dgv\" (UID: \"2e7c2199-9693-42b9-9431-2b12b5abe1d1\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-47dgv" Dec 08 17:43:33 crc kubenswrapper[5116]: I1208 17:43:33.064604 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/f2e88345-fa91-4bb3-bd9d-a89a8293bffe-proxy-tls\") pod \"machine-config-daemon-frh5r\" (UID: \"f2e88345-fa91-4bb3-bd9d-a89a8293bffe\") " pod="openshift-machine-config-operator/machine-config-daemon-frh5r" Dec 08 17:43:33 crc kubenswrapper[5116]: I1208 17:43:33.065822 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/17cf2230-8798-4fb0-b89b-43901121fd07-ovn-node-metrics-cert\") pod \"ovnkube-node-zm56h\" (UID: \"17cf2230-8798-4fb0-b89b-43901121fd07\") " pod="openshift-ovn-kubernetes/ovnkube-node-zm56h" Dec 08 17:43:33 crc kubenswrapper[5116]: I1208 17:43:33.067925 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-kflvf\" (UniqueName: \"kubernetes.io/projected/51c251ea-ee75-4ef8-be21-e45ffbd0c2b3-kube-api-access-kflvf\") pod \"node-ca-ps59m\" (UID: \"51c251ea-ee75-4ef8-be21-e45ffbd0c2b3\") " pod="openshift-image-registry/node-ca-ps59m" Dec 08 17:43:33 crc kubenswrapper[5116]: I1208 17:43:33.067933 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-lbpbq\" (UniqueName: \"kubernetes.io/projected/f2e88345-fa91-4bb3-bd9d-a89a8293bffe-kube-api-access-lbpbq\") pod \"machine-config-daemon-frh5r\" (UID: \"f2e88345-fa91-4bb3-bd9d-a89a8293bffe\") " pod="openshift-machine-config-operator/machine-config-daemon-frh5r" Dec 08 17:43:33 crc kubenswrapper[5116]: I1208 17:43:33.068306 5116 
operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-8qx55\" (UniqueName: \"kubernetes.io/projected/e9b8c7c0-e0b8-44ea-adc9-41342c754061-kube-api-access-8qx55\") pod \"node-resolver-5phkw\" (UID: \"e9b8c7c0-e0b8-44ea-adc9-41342c754061\") " pod="openshift-dns/node-resolver-5phkw" Dec 08 17:43:33 crc kubenswrapper[5116]: I1208 17:43:33.068813 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-2fk87\" (UniqueName: \"kubernetes.io/projected/19151390-7d67-4ae9-8520-ae20b8eb46f8-kube-api-access-2fk87\") pod \"network-metrics-daemon-5ft89\" (UID: \"19151390-7d67-4ae9-8520-ae20b8eb46f8\") " pod="openshift-multus/network-metrics-daemon-5ft89" Dec 08 17:43:33 crc kubenswrapper[5116]: I1208 17:43:33.071329 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-vmvhd\" (UniqueName: \"kubernetes.io/projected/2e7c2199-9693-42b9-9431-2b12b5abe1d1-kube-api-access-vmvhd\") pod \"ovnkube-control-plane-57b78d8988-47dgv\" (UID: \"2e7c2199-9693-42b9-9431-2b12b5abe1d1\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-47dgv" Dec 08 17:43:33 crc kubenswrapper[5116]: I1208 17:43:33.071795 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-8s9wf\" (UniqueName: \"kubernetes.io/projected/17cf2230-8798-4fb0-b89b-43901121fd07-kube-api-access-8s9wf\") pod \"ovnkube-node-zm56h\" (UID: \"17cf2230-8798-4fb0-b89b-43901121fd07\") " pod="openshift-ovn-kubernetes/ovnkube-node-zm56h" Dec 08 17:43:33 crc kubenswrapper[5116]: I1208 17:43:33.075182 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 17:43:33 crc kubenswrapper[5116]: I1208 17:43:33.075224 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 17:43:33 crc kubenswrapper[5116]: I1208 17:43:33.075237 5116 kubelet_node_status.go:736] "Recording event 
message for node" node="crc" event="NodeHasSufficientPID" Dec 08 17:43:33 crc kubenswrapper[5116]: I1208 17:43:33.075276 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 17:43:33 crc kubenswrapper[5116]: I1208 17:43:33.075290 5116 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T17:43:33Z","lastTransitionTime":"2025-12-08T17:43:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 17:43:33 crc kubenswrapper[5116]: I1208 17:43:33.076098 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-8wqqf" event={"ID":"84b46b92-c78c-44c8-a27b-4a20c47acd75","Type":"ContainerStarted","Data":"880aa3a662d7a6d21b65195e1c4561dfc21d71aee68b84357a78d1579cdafc7b"} Dec 08 17:43:33 crc kubenswrapper[5116]: I1208 17:43:33.083010 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-p56xf" event={"ID":"4f09ae7f-7717-4477-b860-d6bc280c6fd6","Type":"ContainerStarted","Data":"84a9dfb86d850474eb69c39d40eaebfbd870990627520dc00add86e0ac8a92d4"} Dec 08 17:43:33 crc kubenswrapper[5116]: I1208 17:43:33.086758 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-5jnd7" event={"ID":"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc","Type":"ContainerStarted","Data":"7733e7b4f69f8544b7093ba48c90eaeac93baf45776bdcd46e454a67528607f3"} Dec 08 17:43:33 crc kubenswrapper[5116]: I1208 17:43:33.089999 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" 
event={"ID":"34177974-8d82-49d2-a763-391d0df3bbd8","Type":"ContainerStarted","Data":"21dc7309d9165686de1f96a59df777e93da20e7fd3adeb2b98ab6ed27f68dac9"} Dec 08 17:43:33 crc kubenswrapper[5116]: I1208 17:43:33.093522 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-dgvkt" event={"ID":"fc4541ce-7789-4670-bc75-5c2868e52ce0","Type":"ContainerStarted","Data":"042bcc47796d538d98608490495bbce339029f3c60eef2202d30d2e546288a18"} Dec 08 17:43:33 crc kubenswrapper[5116]: I1208 17:43:33.183011 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 17:43:33 crc kubenswrapper[5116]: I1208 17:43:33.183074 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 17:43:33 crc kubenswrapper[5116]: I1208 17:43:33.183090 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 17:43:33 crc kubenswrapper[5116]: I1208 17:43:33.183114 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 17:43:33 crc kubenswrapper[5116]: I1208 17:43:33.183126 5116 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T17:43:33Z","lastTransitionTime":"2025-12-08T17:43:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 17:43:33 crc kubenswrapper[5116]: I1208 17:43:33.286015 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 17:43:33 crc kubenswrapper[5116]: I1208 17:43:33.286085 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 17:43:33 crc kubenswrapper[5116]: I1208 17:43:33.286099 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 17:43:33 crc kubenswrapper[5116]: I1208 17:43:33.286121 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 17:43:33 crc kubenswrapper[5116]: I1208 17:43:33.286134 5116 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T17:43:33Z","lastTransitionTime":"2025-12-08T17:43:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 17:43:33 crc kubenswrapper[5116]: I1208 17:43:33.313744 5116 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/node-resolver-5phkw" Dec 08 17:43:33 crc kubenswrapper[5116]: I1208 17:43:33.320159 5116 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-zm56h" Dec 08 17:43:33 crc kubenswrapper[5116]: I1208 17:43:33.327173 5116 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/node-ca-ps59m" Dec 08 17:43:33 crc kubenswrapper[5116]: I1208 17:43:33.334108 5116 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-frh5r" Dec 08 17:43:33 crc kubenswrapper[5116]: I1208 17:43:33.339365 5116 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-47dgv" Dec 08 17:43:33 crc kubenswrapper[5116]: W1208 17:43:33.359704 5116 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod17cf2230_8798_4fb0_b89b_43901121fd07.slice/crio-395cd986e343d46252d6527e53b3a1cd2edbe59586cfa99d5c32d10497c03295 WatchSource:0}: Error finding container 395cd986e343d46252d6527e53b3a1cd2edbe59586cfa99d5c32d10497c03295: Status 404 returned error can't find the container with id 395cd986e343d46252d6527e53b3a1cd2edbe59586cfa99d5c32d10497c03295 Dec 08 17:43:33 crc kubenswrapper[5116]: I1208 17:43:33.366566 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-l7w75\" (UniqueName: \"kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75\") pod \"network-check-source-5bb8f5cd97-xdvz5\" (UID: \"f863fff9-286a-45fa-b8f0-8a86994b8440\") " pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Dec 08 17:43:33 crc kubenswrapper[5116]: I1208 17:43:33.366640 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Dec 08 17:43:33 crc kubenswrapper[5116]: I1208 17:43:33.366718 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert\") pod 
\"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Dec 08 17:43:33 crc kubenswrapper[5116]: I1208 17:43:33.366758 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-gwt8b\" (UniqueName: \"kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b\") pod \"network-check-target-fhkjl\" (UID: \"17b87002-b798-480a-8e17-83053d698239\") " pod="openshift-network-diagnostics/network-check-target-fhkjl" Dec 08 17:43:33 crc kubenswrapper[5116]: E1208 17:43:33.368338 5116 secret.go:189] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 08 17:43:33 crc kubenswrapper[5116]: E1208 17:43:33.368465 5116 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 08 17:43:33 crc kubenswrapper[5116]: E1208 17:43:33.368506 5116 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 08 17:43:33 crc kubenswrapper[5116]: E1208 17:43:33.368520 5116 projected.go:194] Error preparing data for projected volume kube-api-access-l7w75 for pod openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 08 17:43:33 crc kubenswrapper[5116]: E1208 17:43:33.368563 5116 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Dec 08 17:43:33 crc kubenswrapper[5116]: E1208 17:43:33.368477 
5116 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert podName:6a9ae5f6-97bd-46ac-bafa-ca1b4452a141 nodeName:}" failed. No retries permitted until 2025-12-08 17:43:34.36844832 +0000 UTC m=+84.165571554 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert") pod "networking-console-plugin-5ff7774fd9-nljh6" (UID: "6a9ae5f6-97bd-46ac-bafa-ca1b4452a141") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 08 17:43:33 crc kubenswrapper[5116]: E1208 17:43:33.368617 5116 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75 podName:f863fff9-286a-45fa-b8f0-8a86994b8440 nodeName:}" failed. No retries permitted until 2025-12-08 17:43:34.368596724 +0000 UTC m=+84.165719958 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-l7w75" (UniqueName: "kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75") pod "network-check-source-5bb8f5cd97-xdvz5" (UID: "f863fff9-286a-45fa-b8f0-8a86994b8440") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 08 17:43:33 crc kubenswrapper[5116]: E1208 17:43:33.368683 5116 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 08 17:43:33 crc kubenswrapper[5116]: E1208 17:43:33.368724 5116 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 08 17:43:33 crc kubenswrapper[5116]: E1208 17:43:33.368780 5116 projected.go:194] Error preparing data for projected volume kube-api-access-gwt8b for pod openshift-network-diagnostics/network-check-target-fhkjl: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 08 17:43:33 crc kubenswrapper[5116]: E1208 17:43:33.368938 5116 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b podName:17b87002-b798-480a-8e17-83053d698239 nodeName:}" failed. No retries permitted until 2025-12-08 17:43:34.368874871 +0000 UTC m=+84.165998115 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-gwt8b" (UniqueName: "kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b") pod "network-check-target-fhkjl" (UID: "17b87002-b798-480a-8e17-83053d698239") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 08 17:43:33 crc kubenswrapper[5116]: E1208 17:43:33.372512 5116 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf podName:6a9ae5f6-97bd-46ac-bafa-ca1b4452a141 nodeName:}" failed. No retries permitted until 2025-12-08 17:43:34.372481945 +0000 UTC m=+84.169605179 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf") pod "networking-console-plugin-5ff7774fd9-nljh6" (UID: "6a9ae5f6-97bd-46ac-bafa-ca1b4452a141") : object "openshift-network-console"/"networking-console-plugin" not registered Dec 08 17:43:33 crc kubenswrapper[5116]: W1208 17:43:33.415865 5116 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod51c251ea_ee75_4ef8_be21_e45ffbd0c2b3.slice/crio-1aa00ce41f51ff9f356ae817493b7488fe48e0e3e416735ebe4b5b774565a78a WatchSource:0}: Error finding container 1aa00ce41f51ff9f356ae817493b7488fe48e0e3e416735ebe4b5b774565a78a: Status 404 returned error can't find the container with id 1aa00ce41f51ff9f356ae817493b7488fe48e0e3e416735ebe4b5b774565a78a Dec 08 17:43:33 crc kubenswrapper[5116]: I1208 17:43:33.416978 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 17:43:33 crc kubenswrapper[5116]: I1208 17:43:33.417033 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 17:43:33 
crc kubenswrapper[5116]: I1208 17:43:33.417047 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 17:43:33 crc kubenswrapper[5116]: I1208 17:43:33.417070 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 17:43:33 crc kubenswrapper[5116]: I1208 17:43:33.417086 5116 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T17:43:33Z","lastTransitionTime":"2025-12-08T17:43:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 17:43:33 crc kubenswrapper[5116]: W1208 17:43:33.461903 5116 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2e7c2199_9693_42b9_9431_2b12b5abe1d1.slice/crio-229ee2edfaf5abf4c8ad8eac873cdef03d0b78f04eea7094967a5d97365974ae WatchSource:0}: Error finding container 229ee2edfaf5abf4c8ad8eac873cdef03d0b78f04eea7094967a5d97365974ae: Status 404 returned error can't find the container with id 229ee2edfaf5abf4c8ad8eac873cdef03d0b78f04eea7094967a5d97365974ae Dec 08 17:43:33 crc kubenswrapper[5116]: W1208 17:43:33.462943 5116 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf2e88345_fa91_4bb3_bd9d_a89a8293bffe.slice/crio-0731fab192e825df5742a1c6407ab3655715d1b1e8f5e86d4e91890caeebc020 WatchSource:0}: Error finding container 0731fab192e825df5742a1c6407ab3655715d1b1e8f5e86d4e91890caeebc020: Status 404 returned error can't find the container with id 0731fab192e825df5742a1c6407ab3655715d1b1e8f5e86d4e91890caeebc020 Dec 08 17:43:33 crc kubenswrapper[5116]: I1208 17:43:33.570802 5116 reconciler_common.go:162] 
"operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 08 17:43:33 crc kubenswrapper[5116]: I1208 17:43:33.571295 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/19151390-7d67-4ae9-8520-ae20b8eb46f8-metrics-certs\") pod \"network-metrics-daemon-5ft89\" (UID: \"19151390-7d67-4ae9-8520-ae20b8eb46f8\") " pod="openshift-multus/network-metrics-daemon-5ft89" Dec 08 17:43:33 crc kubenswrapper[5116]: E1208 17:43:33.571457 5116 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Dec 08 17:43:33 crc kubenswrapper[5116]: E1208 17:43:33.571522 5116 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/19151390-7d67-4ae9-8520-ae20b8eb46f8-metrics-certs podName:19151390-7d67-4ae9-8520-ae20b8eb46f8 nodeName:}" failed. No retries permitted until 2025-12-08 17:43:34.571506472 +0000 UTC m=+84.368629696 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/19151390-7d67-4ae9-8520-ae20b8eb46f8-metrics-certs") pod "network-metrics-daemon-5ft89" (UID: "19151390-7d67-4ae9-8520-ae20b8eb46f8") : object "openshift-multus"/"metrics-daemon-secret" not registered Dec 08 17:43:33 crc kubenswrapper[5116]: E1208 17:43:33.571823 5116 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 17:43:34.57181489 +0000 UTC m=+84.368938124 (durationBeforeRetry 1s). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 17:43:33 crc kubenswrapper[5116]: I1208 17:43:33.573827 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 17:43:33 crc kubenswrapper[5116]: I1208 17:43:33.573859 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 17:43:33 crc kubenswrapper[5116]: I1208 17:43:33.573879 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 17:43:33 crc kubenswrapper[5116]: I1208 17:43:33.573894 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 17:43:33 crc kubenswrapper[5116]: I1208 17:43:33.573903 5116 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T17:43:33Z","lastTransitionTime":"2025-12-08T17:43:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 17:43:33 crc kubenswrapper[5116]: I1208 17:43:33.737587 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 17:43:33 crc kubenswrapper[5116]: I1208 17:43:33.737656 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 17:43:33 crc kubenswrapper[5116]: I1208 17:43:33.737677 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 17:43:33 crc kubenswrapper[5116]: I1208 17:43:33.737694 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 17:43:33 crc kubenswrapper[5116]: I1208 17:43:33.737708 5116 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T17:43:33Z","lastTransitionTime":"2025-12-08T17:43:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 17:43:33 crc kubenswrapper[5116]: I1208 17:43:33.847609 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 17:43:33 crc kubenswrapper[5116]: I1208 17:43:33.847707 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 17:43:33 crc kubenswrapper[5116]: I1208 17:43:33.847759 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 17:43:33 crc kubenswrapper[5116]: I1208 17:43:33.847780 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 17:43:33 crc kubenswrapper[5116]: I1208 17:43:33.847791 5116 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T17:43:33Z","lastTransitionTime":"2025-12-08T17:43:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 17:43:33 crc kubenswrapper[5116]: I1208 17:43:33.949640 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 17:43:33 crc kubenswrapper[5116]: I1208 17:43:33.949678 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 17:43:33 crc kubenswrapper[5116]: I1208 17:43:33.949690 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 17:43:33 crc kubenswrapper[5116]: I1208 17:43:33.949706 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 17:43:33 crc kubenswrapper[5116]: I1208 17:43:33.949717 5116 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T17:43:33Z","lastTransitionTime":"2025-12-08T17:43:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 17:43:34 crc kubenswrapper[5116]: I1208 17:43:34.052616 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 17:43:34 crc kubenswrapper[5116]: I1208 17:43:34.052671 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 17:43:34 crc kubenswrapper[5116]: I1208 17:43:34.052685 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 17:43:34 crc kubenswrapper[5116]: I1208 17:43:34.052701 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 17:43:34 crc kubenswrapper[5116]: I1208 17:43:34.052713 5116 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T17:43:34Z","lastTransitionTime":"2025-12-08T17:43:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 17:43:34 crc kubenswrapper[5116]: I1208 17:43:34.098035 5116 generic.go:358] "Generic (PLEG): container finished" podID="17cf2230-8798-4fb0-b89b-43901121fd07" containerID="1fbed6896daac43c71c23a2cd13d426172358ba9b9b9199189fb01846868e0ba" exitCode=0 Dec 08 17:43:34 crc kubenswrapper[5116]: I1208 17:43:34.098148 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zm56h" event={"ID":"17cf2230-8798-4fb0-b89b-43901121fd07","Type":"ContainerDied","Data":"1fbed6896daac43c71c23a2cd13d426172358ba9b9b9199189fb01846868e0ba"} Dec 08 17:43:34 crc kubenswrapper[5116]: I1208 17:43:34.098200 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zm56h" event={"ID":"17cf2230-8798-4fb0-b89b-43901121fd07","Type":"ContainerStarted","Data":"395cd986e343d46252d6527e53b3a1cd2edbe59586cfa99d5c32d10497c03295"} Dec 08 17:43:34 crc kubenswrapper[5116]: I1208 17:43:34.099846 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-8wqqf" event={"ID":"84b46b92-c78c-44c8-a27b-4a20c47acd75","Type":"ContainerStarted","Data":"44ea695962c16bd4fd8ec8a0d9643b6428845ee38438b9ab3c2ae7068995d383"} Dec 08 17:43:34 crc kubenswrapper[5116]: I1208 17:43:34.102700 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-frh5r" event={"ID":"f2e88345-fa91-4bb3-bd9d-a89a8293bffe","Type":"ContainerStarted","Data":"0731fab192e825df5742a1c6407ab3655715d1b1e8f5e86d4e91890caeebc020"} Dec 08 17:43:34 crc kubenswrapper[5116]: I1208 17:43:34.104150 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-ps59m" event={"ID":"51c251ea-ee75-4ef8-be21-e45ffbd0c2b3","Type":"ContainerStarted","Data":"1aa00ce41f51ff9f356ae817493b7488fe48e0e3e416735ebe4b5b774565a78a"} Dec 08 17:43:34 crc kubenswrapper[5116]: I1208 17:43:34.114342 5116 generic.go:358] "Generic (PLEG): container 
finished" podID="4f09ae7f-7717-4477-b860-d6bc280c6fd6" containerID="b0c49fdf94f323db472f8a8a7bc51e61cccaef62804edf8a8306424dd30b98fe" exitCode=0 Dec 08 17:43:34 crc kubenswrapper[5116]: I1208 17:43:34.114450 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-p56xf" event={"ID":"4f09ae7f-7717-4477-b860-d6bc280c6fd6","Type":"ContainerDied","Data":"b0c49fdf94f323db472f8a8a7bc51e61cccaef62804edf8a8306424dd30b98fe"} Dec 08 17:43:34 crc kubenswrapper[5116]: I1208 17:43:34.116583 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" event={"ID":"34177974-8d82-49d2-a763-391d0df3bbd8","Type":"ContainerStarted","Data":"78eda52a8198c5351181bb6709f616041f6a671d54720d28d41a85920f87e4a9"} Dec 08 17:43:34 crc kubenswrapper[5116]: I1208 17:43:34.117550 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-5phkw" event={"ID":"e9b8c7c0-e0b8-44ea-adc9-41342c754061","Type":"ContainerStarted","Data":"ed73adca7430597ec0f5d5b4ac147b1ec79208d7f89ec76606ffd8da887739ce"} Dec 08 17:43:34 crc kubenswrapper[5116]: I1208 17:43:34.119481 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-dgvkt" event={"ID":"fc4541ce-7789-4670-bc75-5c2868e52ce0","Type":"ContainerStarted","Data":"517acd6662ec3da349f05a5a4b282dc7dbde4070b0fe312560aa15563f92e590"} Dec 08 17:43:34 crc kubenswrapper[5116]: I1208 17:43:34.119511 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-dgvkt" event={"ID":"fc4541ce-7789-4670-bc75-5c2868e52ce0","Type":"ContainerStarted","Data":"824cef4f83f6707f68b8f462bdf51798f01558ee1147a8acd1d77bb0a0c2e4b2"} Dec 08 17:43:34 crc kubenswrapper[5116]: I1208 17:43:34.120850 5116 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" err="failed to 
patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34177974-8d82-49d2-a763-391d0df3bbd8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-08T17:43:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T17:43:32Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7xz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-7bdcf4f5bd-7fjxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 17:43:34 crc kubenswrapper[5116]: I1208 17:43:34.121662 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-47dgv" event={"ID":"2e7c2199-9693-42b9-9431-2b12b5abe1d1","Type":"ContainerStarted","Data":"229ee2edfaf5abf4c8ad8eac873cdef03d0b78f04eea7094967a5d97365974ae"} Dec 08 17:43:34 crc kubenswrapper[5116]: I1208 17:43:34.131453 5116 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f863fff9-286a-45fa-b8f0-8a86994b8440\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-08T17:43:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T17:43:32Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l7w75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-5bb8f5cd97-xdvz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 17:43:34 crc kubenswrapper[5116]: I1208 17:43:34.155060 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 17:43:34 crc kubenswrapper[5116]: I1208 17:43:34.155111 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 17:43:34 crc kubenswrapper[5116]: I1208 17:43:34.155124 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 17:43:34 crc kubenswrapper[5116]: I1208 17:43:34.155141 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 17:43:34 crc kubenswrapper[5116]: I1208 17:43:34.155153 5116 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T17:43:34Z","lastTransitionTime":"2025-12-08T17:43:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 17:43:34 crc kubenswrapper[5116]: I1208 17:43:34.259627 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 17:43:34 crc kubenswrapper[5116]: I1208 17:43:34.259708 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 17:43:34 crc kubenswrapper[5116]: I1208 17:43:34.259726 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 17:43:34 crc kubenswrapper[5116]: I1208 17:43:34.259769 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 17:43:34 crc kubenswrapper[5116]: I1208 17:43:34.259783 5116 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T17:43:34Z","lastTransitionTime":"2025-12-08T17:43:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 17:43:34 crc kubenswrapper[5116]: I1208 17:43:34.315770 5116 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podStartSLOduration=2.315746636 podStartE2EDuration="2.315746636s" podCreationTimestamp="2025-12-08 17:43:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-08 17:43:34.315465599 +0000 UTC m=+84.112588833" watchObservedRunningTime="2025-12-08 17:43:34.315746636 +0000 UTC m=+84.112869870" Dec 08 17:43:34 crc kubenswrapper[5116]: I1208 17:43:34.354683 5116 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" podStartSLOduration=3.354668631 podStartE2EDuration="3.354668631s" podCreationTimestamp="2025-12-08 17:43:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-08 17:43:34.354398734 +0000 UTC m=+84.151521978" watchObservedRunningTime="2025-12-08 17:43:34.354668631 +0000 UTC m=+84.151791855" Dec 08 17:43:34 crc kubenswrapper[5116]: I1208 17:43:34.363591 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 17:43:34 crc kubenswrapper[5116]: I1208 17:43:34.363645 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 17:43:34 crc kubenswrapper[5116]: I1208 17:43:34.363660 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 17:43:34 crc kubenswrapper[5116]: I1208 17:43:34.363679 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 17:43:34 crc kubenswrapper[5116]: I1208 17:43:34.363691 5116 setters.go:618] "Node became not ready" 
node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T17:43:34Z","lastTransitionTime":"2025-12-08T17:43:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 17:43:34 crc kubenswrapper[5116]: I1208 17:43:34.451921 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Dec 08 17:43:34 crc kubenswrapper[5116]: I1208 17:43:34.451969 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-gwt8b\" (UniqueName: \"kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b\") pod \"network-check-target-fhkjl\" (UID: \"17b87002-b798-480a-8e17-83053d698239\") " pod="openshift-network-diagnostics/network-check-target-fhkjl" Dec 08 17:43:34 crc kubenswrapper[5116]: I1208 17:43:34.452036 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-l7w75\" (UniqueName: \"kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75\") pod \"network-check-source-5bb8f5cd97-xdvz5\" (UID: \"f863fff9-286a-45fa-b8f0-8a86994b8440\") " pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Dec 08 17:43:34 crc kubenswrapper[5116]: I1208 17:43:34.452071 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf\") pod 
\"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6"
Dec 08 17:43:34 crc kubenswrapper[5116]: E1208 17:43:34.452080 5116 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Dec 08 17:43:34 crc kubenswrapper[5116]: E1208 17:43:34.452093 5116 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Dec 08 17:43:34 crc kubenswrapper[5116]: E1208 17:43:34.452104 5116 projected.go:194] Error preparing data for projected volume kube-api-access-gwt8b for pod openshift-network-diagnostics/network-check-target-fhkjl: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Dec 08 17:43:34 crc kubenswrapper[5116]: E1208 17:43:34.452125 5116 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered
Dec 08 17:43:34 crc kubenswrapper[5116]: E1208 17:43:34.452163 5116 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b podName:17b87002-b798-480a-8e17-83053d698239 nodeName:}" failed. No retries permitted until 2025-12-08 17:43:36.452148921 +0000 UTC m=+86.249272155 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-gwt8b" (UniqueName: "kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b") pod "network-check-target-fhkjl" (UID: "17b87002-b798-480a-8e17-83053d698239") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Dec 08 17:43:34 crc kubenswrapper[5116]: E1208 17:43:34.452195 5116 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Dec 08 17:43:34 crc kubenswrapper[5116]: E1208 17:43:34.452205 5116 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Dec 08 17:43:34 crc kubenswrapper[5116]: E1208 17:43:34.452213 5116 projected.go:194] Error preparing data for projected volume kube-api-access-l7w75 for pod openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Dec 08 17:43:34 crc kubenswrapper[5116]: E1208 17:43:34.452269 5116 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf podName:6a9ae5f6-97bd-46ac-bafa-ca1b4452a141 nodeName:}" failed. No retries permitted until 2025-12-08 17:43:36.452170022 +0000 UTC m=+86.249293256 (durationBeforeRetry 2s).
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf") pod "networking-console-plugin-5ff7774fd9-nljh6" (UID: "6a9ae5f6-97bd-46ac-bafa-ca1b4452a141") : object "openshift-network-console"/"networking-console-plugin" not registered
Dec 08 17:43:34 crc kubenswrapper[5116]: E1208 17:43:34.452271 5116 secret.go:189] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered
Dec 08 17:43:34 crc kubenswrapper[5116]: E1208 17:43:34.452282 5116 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75 podName:f863fff9-286a-45fa-b8f0-8a86994b8440 nodeName:}" failed. No retries permitted until 2025-12-08 17:43:36.452276495 +0000 UTC m=+86.249399729 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-l7w75" (UniqueName: "kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75") pod "network-check-source-5bb8f5cd97-xdvz5" (UID: "f863fff9-286a-45fa-b8f0-8a86994b8440") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Dec 08 17:43:34 crc kubenswrapper[5116]: E1208 17:43:34.452300 5116 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert podName:6a9ae5f6-97bd-46ac-bafa-ca1b4452a141 nodeName:}" failed. No retries permitted until 2025-12-08 17:43:36.452290315 +0000 UTC m=+86.249413549 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert") pod "networking-console-plugin-5ff7774fd9-nljh6" (UID: "6a9ae5f6-97bd-46ac-bafa-ca1b4452a141") : object "openshift-network-console"/"networking-console-plugin-cert" not registered
Dec 08 17:43:34 crc kubenswrapper[5116]: I1208 17:43:34.458137 5116 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd/etcd-crc" podStartSLOduration=2.458125287 podStartE2EDuration="2.458125287s" podCreationTimestamp="2025-12-08 17:43:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-08 17:43:34.442670724 +0000 UTC m=+84.239793978" watchObservedRunningTime="2025-12-08 17:43:34.458125287 +0000 UTC m=+84.255248521"
Dec 08 17:43:34 crc kubenswrapper[5116]: I1208 17:43:34.471604 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 08 17:43:34 crc kubenswrapper[5116]: I1208 17:43:34.471646 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 08 17:43:34 crc kubenswrapper[5116]: I1208 17:43:34.471658 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 08 17:43:34 crc kubenswrapper[5116]: I1208 17:43:34.471676 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 08 17:43:34 crc kubenswrapper[5116]: I1208 17:43:34.471688 5116 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T17:43:34Z","lastTransitionTime":"2025-12-08T17:43:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns
error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 08 17:43:34 crc kubenswrapper[5116]: I1208 17:43:34.472843 5116 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" podStartSLOduration=3.47283285 podStartE2EDuration="3.47283285s" podCreationTimestamp="2025-12-08 17:43:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-08 17:43:34.45864729 +0000 UTC m=+84.255770524" watchObservedRunningTime="2025-12-08 17:43:34.47283285 +0000 UTC m=+84.269956084"
Dec 08 17:43:34 crc kubenswrapper[5116]: I1208 17:43:34.574348 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 08 17:43:34 crc kubenswrapper[5116]: I1208 17:43:34.574395 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 08 17:43:34 crc kubenswrapper[5116]: I1208 17:43:34.574407 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 08 17:43:34 crc kubenswrapper[5116]: I1208 17:43:34.574426 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 08 17:43:34 crc kubenswrapper[5116]: I1208 17:43:34.574439 5116 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T17:43:34Z","lastTransitionTime":"2025-12-08T17:43:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 08 17:43:34 crc kubenswrapper[5116]: I1208 17:43:34.594455 5116 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-8wqqf" podStartSLOduration=63.594438239 podStartE2EDuration="1m3.594438239s" podCreationTimestamp="2025-12-08 17:42:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-08 17:43:34.593971247 +0000 UTC m=+84.391094501" watchObservedRunningTime="2025-12-08 17:43:34.594438239 +0000 UTC m=+84.391561473"
Dec 08 17:43:34 crc kubenswrapper[5116]: I1208 17:43:34.654630 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Dec 08 17:43:34 crc kubenswrapper[5116]: E1208 17:43:34.654863 5116 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 17:43:36.654824692 +0000 UTC m=+86.451947926 (durationBeforeRetry 2s).
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 08 17:43:34 crc kubenswrapper[5116]: I1208 17:43:34.655003 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/19151390-7d67-4ae9-8520-ae20b8eb46f8-metrics-certs\") pod \"network-metrics-daemon-5ft89\" (UID: \"19151390-7d67-4ae9-8520-ae20b8eb46f8\") " pod="openshift-multus/network-metrics-daemon-5ft89"
Dec 08 17:43:34 crc kubenswrapper[5116]: E1208 17:43:34.655148 5116 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered
Dec 08 17:43:34 crc kubenswrapper[5116]: E1208 17:43:34.655206 5116 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/19151390-7d67-4ae9-8520-ae20b8eb46f8-metrics-certs podName:19151390-7d67-4ae9-8520-ae20b8eb46f8 nodeName:}" failed. No retries permitted until 2025-12-08 17:43:36.655198213 +0000 UTC m=+86.452321447 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/19151390-7d67-4ae9-8520-ae20b8eb46f8-metrics-certs") pod "network-metrics-daemon-5ft89" (UID: "19151390-7d67-4ae9-8520-ae20b8eb46f8") : object "openshift-multus"/"metrics-daemon-secret" not registered
Dec 08 17:43:34 crc kubenswrapper[5116]: I1208 17:43:34.677489 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 08 17:43:34 crc kubenswrapper[5116]: I1208 17:43:34.677530 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 08 17:43:34 crc kubenswrapper[5116]: I1208 17:43:34.677541 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 08 17:43:34 crc kubenswrapper[5116]: I1208 17:43:34.677556 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 08 17:43:34 crc kubenswrapper[5116]: I1208 17:43:34.677566 5116 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T17:43:34Z","lastTransitionTime":"2025-12-08T17:43:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 08 17:43:34 crc kubenswrapper[5116]: I1208 17:43:34.679170 5116 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl"
Dec 08 17:43:34 crc kubenswrapper[5116]: I1208 17:43:34.679346 5116 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5ft89"
Dec 08 17:43:34 crc kubenswrapper[5116]: E1208 17:43:34.679496 5116 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-5ft89" podUID="19151390-7d67-4ae9-8520-ae20b8eb46f8"
Dec 08 17:43:34 crc kubenswrapper[5116]: E1208 17:43:34.679298 5116 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239"
Dec 08 17:43:34 crc kubenswrapper[5116]: I1208 17:43:34.679511 5116 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5"
Dec 08 17:43:34 crc kubenswrapper[5116]: E1208 17:43:34.679898 5116 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440"
Dec 08 17:43:34 crc kubenswrapper[5116]: I1208 17:43:34.679983 5116 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6"
Dec 08 17:43:34 crc kubenswrapper[5116]: E1208 17:43:34.680074 5116 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141"
Dec 08 17:43:34 crc kubenswrapper[5116]: I1208 17:43:34.684528 5116 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="01080b46-74f1-4191-8755-5152a57b3b25" path="/var/lib/kubelet/pods/01080b46-74f1-4191-8755-5152a57b3b25/volumes"
Dec 08 17:43:34 crc kubenswrapper[5116]: I1208 17:43:34.685968 5116 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09cfa50b-4138-4585-a53e-64dd3ab73335" path="/var/lib/kubelet/pods/09cfa50b-4138-4585-a53e-64dd3ab73335/volumes"
Dec 08 17:43:34 crc kubenswrapper[5116]: I1208 17:43:34.687995 5116 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0dd0fbac-8c0d-4228-8faa-abbeedabf7db" path="/var/lib/kubelet/pods/0dd0fbac-8c0d-4228-8faa-abbeedabf7db/volumes"
Dec 08 17:43:34 crc kubenswrapper[5116]: I1208 17:43:34.689730 5116 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0effdbcf-dd7d-404d-9d48-77536d665a5d" path="/var/lib/kubelet/pods/0effdbcf-dd7d-404d-9d48-77536d665a5d/volumes"
Dec 08 17:43:34 crc kubenswrapper[5116]: I1208 17:43:34.692521 5116 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="149b3c48-e17c-4a66-a835-d86dabf6ff13" path="/var/lib/kubelet/pods/149b3c48-e17c-4a66-a835-d86dabf6ff13/volumes"
Dec 08 17:43:34 crc kubenswrapper[5116]: I1208 17:43:34.694908 5116 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="16bdd140-dce1-464c-ab47-dd5798d1d256" path="/var/lib/kubelet/pods/16bdd140-dce1-464c-ab47-dd5798d1d256/volumes"
Dec 08 17:43:34 crc kubenswrapper[5116]: I1208 17:43:34.700785 5116 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="18f80adb-c1c3-49ba-8ee4-932c851d3897" path="/var/lib/kubelet/pods/18f80adb-c1c3-49ba-8ee4-932c851d3897/volumes"
Dec 08 17:43:34 crc kubenswrapper[5116]: I1208 17:43:34.703886 5116 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="20ce4d18-fe25-4696-ad7c-1bd2d6200a3e" path="/var/lib/kubelet/pods/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e/volumes"
Dec 08 17:43:34 crc kubenswrapper[5116]: I1208 17:43:34.704808 5116 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2325ffef-9d5b-447f-b00e-3efc429acefe" path="/var/lib/kubelet/pods/2325ffef-9d5b-447f-b00e-3efc429acefe/volumes"
Dec 08 17:43:34 crc kubenswrapper[5116]: I1208 17:43:34.708099 5116 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="301e1965-1754-483d-b6cc-bfae7038bbca" path="/var/lib/kubelet/pods/301e1965-1754-483d-b6cc-bfae7038bbca/volumes"
Dec 08 17:43:34 crc kubenswrapper[5116]: I1208 17:43:34.709819 5116 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="31fa8943-81cc-4750-a0b7-0fa9ab5af883" path="/var/lib/kubelet/pods/31fa8943-81cc-4750-a0b7-0fa9ab5af883/volumes"
Dec 08 17:43:34 crc kubenswrapper[5116]: I1208 17:43:34.712490 5116 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="42a11a02-47e1-488f-b270-2679d3298b0e" path="/var/lib/kubelet/pods/42a11a02-47e1-488f-b270-2679d3298b0e/volumes"
Dec 08 17:43:34 crc kubenswrapper[5116]: I1208 17:43:34.713591 5116 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="567683bd-0efc-4f21-b076-e28559628404" path="/var/lib/kubelet/pods/567683bd-0efc-4f21-b076-e28559628404/volumes"
Dec 08 17:43:34 crc kubenswrapper[5116]: I1208 17:43:34.715609 5116 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="584e1f4a-8205-47d7-8efb-3afc6017c4c9" path="/var/lib/kubelet/pods/584e1f4a-8205-47d7-8efb-3afc6017c4c9/volumes"
Dec 08 17:43:34 crc kubenswrapper[5116]: I1208 17:43:34.716230 5116 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="593a3561-7760-45c5-8f91-5aaef7475d0f" path="/var/lib/kubelet/pods/593a3561-7760-45c5-8f91-5aaef7475d0f/volumes"
Dec 08 17:43:34 crc kubenswrapper[5116]: I1208 17:43:34.716997 5116 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5ebfebf6-3ecd-458e-943f-bb25b52e2718" path="/var/lib/kubelet/pods/5ebfebf6-3ecd-458e-943f-bb25b52e2718/volumes"
Dec 08 17:43:34 crc kubenswrapper[5116]: I1208 17:43:34.719324 5116 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6077b63e-53a2-4f96-9d56-1ce0324e4913" path="/var/lib/kubelet/pods/6077b63e-53a2-4f96-9d56-1ce0324e4913/volumes"
Dec 08 17:43:34 crc kubenswrapper[5116]: I1208 17:43:34.720694 5116 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca" path="/var/lib/kubelet/pods/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca/volumes"
Dec 08 17:43:34 crc kubenswrapper[5116]: I1208 17:43:34.722537 5116 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6edfcf45-925b-4eff-b940-95b6fc0b85d4" path="/var/lib/kubelet/pods/6edfcf45-925b-4eff-b940-95b6fc0b85d4/volumes"
Dec 08 17:43:34 crc kubenswrapper[5116]: I1208 17:43:34.723786 5116 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6ee8fbd3-1f81-4666-96da-5afc70819f1a" path="/var/lib/kubelet/pods/6ee8fbd3-1f81-4666-96da-5afc70819f1a/volumes"
Dec 08 17:43:34 crc kubenswrapper[5116]: I1208 17:43:34.725228 5116 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a" path="/var/lib/kubelet/pods/71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a/volumes"
Dec 08 17:43:34 crc kubenswrapper[5116]: I1208 17:43:34.727346 5116 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="736c54fe-349c-4bb9-870a-d1c1d1c03831" path="/var/lib/kubelet/pods/736c54fe-349c-4bb9-870a-d1c1d1c03831/volumes"
Dec 08 17:43:34 crc kubenswrapper[5116]: I1208 17:43:34.728647 5116 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7599e0b6-bddf-4def-b7f2-0b32206e8651" path="/var/lib/kubelet/pods/7599e0b6-bddf-4def-b7f2-0b32206e8651/volumes"
Dec 08 17:43:34 crc kubenswrapper[5116]: I1208 17:43:34.729653 5116 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7afa918d-be67-40a6-803c-d3b0ae99d815" path="/var/lib/kubelet/pods/7afa918d-be67-40a6-803c-d3b0ae99d815/volumes"
Dec 08 17:43:34 crc kubenswrapper[5116]: I1208 17:43:34.731049 5116 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7df94c10-441d-4386-93a6-6730fb7bcde0" path="/var/lib/kubelet/pods/7df94c10-441d-4386-93a6-6730fb7bcde0/volumes"
Dec 08 17:43:34 crc kubenswrapper[5116]: I1208 17:43:34.732026 5116 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7fcc6409-8a0f-44c3-89e7-5aecd7610f8a" path="/var/lib/kubelet/pods/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a/volumes"
Dec 08 17:43:34 crc kubenswrapper[5116]: I1208 17:43:34.733292 5116 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="81e39f7b-62e4-4fc9-992a-6535ce127a02" path="/var/lib/kubelet/pods/81e39f7b-62e4-4fc9-992a-6535ce127a02/volumes"
Dec 08 17:43:34 crc kubenswrapper[5116]: I1208 17:43:34.734313 5116 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="869851b9-7ffb-4af0-b166-1d8aa40a5f80" path="/var/lib/kubelet/pods/869851b9-7ffb-4af0-b166-1d8aa40a5f80/volumes"
Dec 08 17:43:34 crc kubenswrapper[5116]: I1208 17:43:34.736674 5116 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff" path="/var/lib/kubelet/pods/9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff/volumes"
Dec 08 17:43:34 crc kubenswrapper[5116]: I1208 17:43:34.737764 5116 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir"
podUID="92dfbade-90b6-4169-8c07-72cff7f2c82b" path="/var/lib/kubelet/pods/92dfbade-90b6-4169-8c07-72cff7f2c82b/volumes"
Dec 08 17:43:34 crc kubenswrapper[5116]: I1208 17:43:34.739172 5116 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="94a6e063-3d1a-4d44-875d-185291448c31" path="/var/lib/kubelet/pods/94a6e063-3d1a-4d44-875d-185291448c31/volumes"
Dec 08 17:43:34 crc kubenswrapper[5116]: I1208 17:43:34.740808 5116 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9f71a554-e414-4bc3-96d2-674060397afe" path="/var/lib/kubelet/pods/9f71a554-e414-4bc3-96d2-674060397afe/volumes"
Dec 08 17:43:34 crc kubenswrapper[5116]: I1208 17:43:34.742709 5116 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a208c9c2-333b-4b4a-be0d-bc32ec38a821" path="/var/lib/kubelet/pods/a208c9c2-333b-4b4a-be0d-bc32ec38a821/volumes"
Dec 08 17:43:34 crc kubenswrapper[5116]: I1208 17:43:34.744034 5116 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a52afe44-fb37-46ed-a1f8-bf39727a3cbe" path="/var/lib/kubelet/pods/a52afe44-fb37-46ed-a1f8-bf39727a3cbe/volumes"
Dec 08 17:43:34 crc kubenswrapper[5116]: I1208 17:43:34.745265 5116 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a555ff2e-0be6-46d5-897d-863bb92ae2b3" path="/var/lib/kubelet/pods/a555ff2e-0be6-46d5-897d-863bb92ae2b3/volumes"
Dec 08 17:43:34 crc kubenswrapper[5116]: I1208 17:43:34.745933 5116 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a7a88189-c967-4640-879e-27665747f20c" path="/var/lib/kubelet/pods/a7a88189-c967-4640-879e-27665747f20c/volumes"
Dec 08 17:43:34 crc kubenswrapper[5116]: I1208 17:43:34.747112 5116 kubelet_volumes.go:152] "Cleaned up orphaned volume subpath from pod" podUID="af33e427-6803-48c2-a76a-dd9deb7cbf9a" path="/var/lib/kubelet/pods/af33e427-6803-48c2-a76a-dd9deb7cbf9a/volume-subpaths/run-systemd/ovnkube-controller/6"
Dec 08 17:43:34 crc kubenswrapper[5116]: I1208 17:43:34.747410 5116 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="af33e427-6803-48c2-a76a-dd9deb7cbf9a" path="/var/lib/kubelet/pods/af33e427-6803-48c2-a76a-dd9deb7cbf9a/volumes"
Dec 08 17:43:34 crc kubenswrapper[5116]: I1208 17:43:34.751071 5116 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="af41de71-79cf-4590-bbe9-9e8b848862cb" path="/var/lib/kubelet/pods/af41de71-79cf-4590-bbe9-9e8b848862cb/volumes"
Dec 08 17:43:34 crc kubenswrapper[5116]: I1208 17:43:34.752214 5116 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a" path="/var/lib/kubelet/pods/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a/volumes"
Dec 08 17:43:34 crc kubenswrapper[5116]: I1208 17:43:34.753487 5116 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b4750666-1362-4001-abd0-6f89964cc621" path="/var/lib/kubelet/pods/b4750666-1362-4001-abd0-6f89964cc621/volumes"
Dec 08 17:43:34 crc kubenswrapper[5116]: I1208 17:43:34.755467 5116 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b605f283-6f2e-42da-a838-54421690f7d0" path="/var/lib/kubelet/pods/b605f283-6f2e-42da-a838-54421690f7d0/volumes"
Dec 08 17:43:34 crc kubenswrapper[5116]: I1208 17:43:34.756579 5116 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c491984c-7d4b-44aa-8c1e-d7974424fa47" path="/var/lib/kubelet/pods/c491984c-7d4b-44aa-8c1e-d7974424fa47/volumes"
Dec 08 17:43:34 crc kubenswrapper[5116]: I1208 17:43:34.757755 5116 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c5f2bfad-70f6-4185-a3d9-81ce12720767" path="/var/lib/kubelet/pods/c5f2bfad-70f6-4185-a3d9-81ce12720767/volumes"
Dec 08 17:43:34 crc kubenswrapper[5116]: I1208 17:43:34.759366 5116 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cc85e424-18b2-4924-920b-bd291a8c4b01" path="/var/lib/kubelet/pods/cc85e424-18b2-4924-920b-bd291a8c4b01/volumes"
Dec 08 17:43:34 crc kubenswrapper[5116]: I1208 17:43:34.759990 5116 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ce090a97-9ab6-4c40-a719-64ff2acd9778" path="/var/lib/kubelet/pods/ce090a97-9ab6-4c40-a719-64ff2acd9778/volumes"
Dec 08 17:43:34 crc kubenswrapper[5116]: I1208 17:43:34.760913 5116 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d19cb085-0c5b-4810-b654-ce7923221d90" path="/var/lib/kubelet/pods/d19cb085-0c5b-4810-b654-ce7923221d90/volumes"
Dec 08 17:43:34 crc kubenswrapper[5116]: I1208 17:43:34.762775 5116 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d45be74c-0d98-4d18-90e4-f7ef1b6daaf7" path="/var/lib/kubelet/pods/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7/volumes"
Dec 08 17:43:34 crc kubenswrapper[5116]: I1208 17:43:34.764136 5116 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d565531a-ff86-4608-9d19-767de01ac31b" path="/var/lib/kubelet/pods/d565531a-ff86-4608-9d19-767de01ac31b/volumes"
Dec 08 17:43:34 crc kubenswrapper[5116]: I1208 17:43:34.765798 5116 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d7e8f42f-dc0e-424b-bb56-5ec849834888" path="/var/lib/kubelet/pods/d7e8f42f-dc0e-424b-bb56-5ec849834888/volumes"
Dec 08 17:43:34 crc kubenswrapper[5116]: I1208 17:43:34.766940 5116 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9" path="/var/lib/kubelet/pods/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9/volumes"
Dec 08 17:43:34 crc kubenswrapper[5116]: I1208 17:43:34.768426 5116 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e093be35-bb62-4843-b2e8-094545761610" path="/var/lib/kubelet/pods/e093be35-bb62-4843-b2e8-094545761610/volumes"
Dec 08 17:43:34 crc kubenswrapper[5116]: I1208 17:43:34.769688 5116 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e1d2a42d-af1d-4054-9618-ab545e0ed8b7" path="/var/lib/kubelet/pods/e1d2a42d-af1d-4054-9618-ab545e0ed8b7/volumes"
Dec 08 17:43:34 crc kubenswrapper[5116]: I1208 17:43:34.771170 5116 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f559dfa3-3917-43a2-97f6-61ddfda10e93" path="/var/lib/kubelet/pods/f559dfa3-3917-43a2-97f6-61ddfda10e93/volumes"
Dec 08 17:43:34 crc kubenswrapper[5116]: I1208 17:43:34.781996 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 08 17:43:34 crc kubenswrapper[5116]: I1208 17:43:34.782044 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 08 17:43:34 crc kubenswrapper[5116]: I1208 17:43:34.782058 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 08 17:43:34 crc kubenswrapper[5116]: I1208 17:43:34.782077 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 08 17:43:34 crc kubenswrapper[5116]: I1208 17:43:34.782089 5116 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T17:43:34Z","lastTransitionTime":"2025-12-08T17:43:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/.
Has your network provider started?"}
Dec 08 17:43:34 crc kubenswrapper[5116]: I1208 17:43:34.789943 5116 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f65c0ac1-8bca-454d-a2e6-e35cb418beac" path="/var/lib/kubelet/pods/f65c0ac1-8bca-454d-a2e6-e35cb418beac/volumes"
Dec 08 17:43:34 crc kubenswrapper[5116]: I1208 17:43:34.791755 5116 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4" path="/var/lib/kubelet/pods/f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4/volumes"
Dec 08 17:43:34 crc kubenswrapper[5116]: I1208 17:43:34.793135 5116 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f7e2c886-118e-43bb-bef1-c78134de392b" path="/var/lib/kubelet/pods/f7e2c886-118e-43bb-bef1-c78134de392b/volumes"
Dec 08 17:43:34 crc kubenswrapper[5116]: I1208 17:43:34.794067 5116 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fc8db2c7-859d-47b3-a900-2bd0c0b2973b" path="/var/lib/kubelet/pods/fc8db2c7-859d-47b3-a900-2bd0c0b2973b/volumes"
Dec 08 17:43:34 crc kubenswrapper[5116]: I1208 17:43:34.885909 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 08 17:43:34 crc kubenswrapper[5116]: I1208 17:43:34.885949 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 08 17:43:34 crc kubenswrapper[5116]: I1208 17:43:34.885959 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 08 17:43:34 crc kubenswrapper[5116]: I1208 17:43:34.885974 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 08 17:43:34 crc kubenswrapper[5116]: I1208 17:43:34.885985 5116 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T17:43:34Z","lastTransitionTime":"2025-12-08T17:43:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 08 17:43:35 crc kubenswrapper[5116]: I1208 17:43:35.026493 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 08 17:43:35 crc kubenswrapper[5116]: I1208 17:43:35.026545 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 08 17:43:35 crc kubenswrapper[5116]: I1208 17:43:35.026659 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 08 17:43:35 crc kubenswrapper[5116]: I1208 17:43:35.026691 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 08 17:43:35 crc kubenswrapper[5116]: I1208 17:43:35.026704 5116 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T17:43:35Z","lastTransitionTime":"2025-12-08T17:43:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 08 17:43:35 crc kubenswrapper[5116]: I1208 17:43:35.134141 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 08 17:43:35 crc kubenswrapper[5116]: I1208 17:43:35.134316 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 08 17:43:35 crc kubenswrapper[5116]: I1208 17:43:35.134342 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 08 17:43:35 crc kubenswrapper[5116]: I1208 17:43:35.134371 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 08 17:43:35 crc kubenswrapper[5116]: I1208 17:43:35.134390 5116 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T17:43:35Z","lastTransitionTime":"2025-12-08T17:43:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/.
Has your network provider started?"} Dec 08 17:43:35 crc kubenswrapper[5116]: I1208 17:43:35.141961 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-p56xf" event={"ID":"4f09ae7f-7717-4477-b860-d6bc280c6fd6","Type":"ContainerStarted","Data":"f35c347f0a9fc5d8c92e84504f38ba6c1a0b3b3dead625c219ebe736127e5cef"} Dec 08 17:43:35 crc kubenswrapper[5116]: I1208 17:43:35.143318 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-5phkw" event={"ID":"e9b8c7c0-e0b8-44ea-adc9-41342c754061","Type":"ContainerStarted","Data":"9119419fbf16c3b768553bad9e71bfdc32eece14eae48d0879ed531ec3ad0000"} Dec 08 17:43:35 crc kubenswrapper[5116]: I1208 17:43:35.149433 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-47dgv" event={"ID":"2e7c2199-9693-42b9-9431-2b12b5abe1d1","Type":"ContainerStarted","Data":"0556a893d5b65176e1189928e57b34494255b34e87874a61c1b9302c0239d0f9"} Dec 08 17:43:35 crc kubenswrapper[5116]: I1208 17:43:35.149554 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-47dgv" event={"ID":"2e7c2199-9693-42b9-9431-2b12b5abe1d1","Type":"ContainerStarted","Data":"82b63e8142ef2288109cc5c0882142beb2ca87da33214118d53db07a6595d8b6"} Dec 08 17:43:35 crc kubenswrapper[5116]: I1208 17:43:35.154455 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zm56h" event={"ID":"17cf2230-8798-4fb0-b89b-43901121fd07","Type":"ContainerStarted","Data":"bd3f2516ba42578f60aeff92565eb4eed9411fc7b0a498f5342dd7e9e4c0475c"} Dec 08 17:43:35 crc kubenswrapper[5116]: I1208 17:43:35.154518 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zm56h" event={"ID":"17cf2230-8798-4fb0-b89b-43901121fd07","Type":"ContainerStarted","Data":"396bbb6d70fc2a226fa82c18e9fef2e42c88aab08db97f7b8253ac1fedf99524"} Dec 
08 17:43:35 crc kubenswrapper[5116]: I1208 17:43:35.154537 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zm56h" event={"ID":"17cf2230-8798-4fb0-b89b-43901121fd07","Type":"ContainerStarted","Data":"fb5c408faae317c65e7ecee5588f0724734d49d1b4a3ae27e669fed7d9f1d56f"} Dec 08 17:43:35 crc kubenswrapper[5116]: I1208 17:43:35.165161 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-frh5r" event={"ID":"f2e88345-fa91-4bb3-bd9d-a89a8293bffe","Type":"ContainerStarted","Data":"ab8c1dbe131eefd9e8c0aeb2c48a2f7eb05a03be21cd359a3b385522d4e1874b"} Dec 08 17:43:35 crc kubenswrapper[5116]: I1208 17:43:35.165220 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-frh5r" event={"ID":"f2e88345-fa91-4bb3-bd9d-a89a8293bffe","Type":"ContainerStarted","Data":"013afc9b2137a670c234a5ed56a7fe32904cb1f1413dc085edcb58fd24608faa"} Dec 08 17:43:35 crc kubenswrapper[5116]: I1208 17:43:35.168865 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-ps59m" event={"ID":"51c251ea-ee75-4ef8-be21-e45ffbd0c2b3","Type":"ContainerStarted","Data":"4bd4b9bf7dfbdbbc2c97fff5640eea7810490026c1852be1bb2e960b08da1060"} Dec 08 17:43:35 crc kubenswrapper[5116]: I1208 17:43:35.209913 5116 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/node-resolver-5phkw" podStartSLOduration=64.209895718 podStartE2EDuration="1m4.209895718s" podCreationTimestamp="2025-12-08 17:42:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-08 17:43:35.189141457 +0000 UTC m=+84.986264711" watchObservedRunningTime="2025-12-08 17:43:35.209895718 +0000 UTC m=+85.007018952" Dec 08 17:43:35 crc kubenswrapper[5116]: I1208 17:43:35.226989 5116 pod_startup_latency_tracker.go:104] "Observed pod startup 
duration" pod="openshift-image-registry/node-ca-ps59m" podStartSLOduration=64.226918982 podStartE2EDuration="1m4.226918982s" podCreationTimestamp="2025-12-08 17:42:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-08 17:43:35.20882498 +0000 UTC m=+85.005948224" watchObservedRunningTime="2025-12-08 17:43:35.226918982 +0000 UTC m=+85.024042216" Dec 08 17:43:35 crc kubenswrapper[5116]: I1208 17:43:35.238125 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 17:43:35 crc kubenswrapper[5116]: I1208 17:43:35.238191 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 17:43:35 crc kubenswrapper[5116]: I1208 17:43:35.238203 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 17:43:35 crc kubenswrapper[5116]: I1208 17:43:35.238226 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 17:43:35 crc kubenswrapper[5116]: I1208 17:43:35.238256 5116 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T17:43:35Z","lastTransitionTime":"2025-12-08T17:43:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 17:43:35 crc kubenswrapper[5116]: I1208 17:43:35.244987 5116 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-47dgv" podStartSLOduration=63.244973302 podStartE2EDuration="1m3.244973302s" podCreationTimestamp="2025-12-08 17:42:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-08 17:43:35.226770398 +0000 UTC m=+85.023893632" watchObservedRunningTime="2025-12-08 17:43:35.244973302 +0000 UTC m=+85.042096536" Dec 08 17:43:35 crc kubenswrapper[5116]: I1208 17:43:35.245264 5116 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-daemon-frh5r" podStartSLOduration=64.245260739 podStartE2EDuration="1m4.245260739s" podCreationTimestamp="2025-12-08 17:42:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-08 17:43:35.244197032 +0000 UTC m=+85.041320266" watchObservedRunningTime="2025-12-08 17:43:35.245260739 +0000 UTC m=+85.042383973" Dec 08 17:43:35 crc kubenswrapper[5116]: I1208 17:43:35.340613 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 17:43:35 crc kubenswrapper[5116]: I1208 17:43:35.340979 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 17:43:35 crc kubenswrapper[5116]: I1208 17:43:35.340991 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 17:43:35 crc kubenswrapper[5116]: I1208 17:43:35.341005 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 17:43:35 crc kubenswrapper[5116]: I1208 17:43:35.341014 5116 setters.go:618] "Node became 
not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T17:43:35Z","lastTransitionTime":"2025-12-08T17:43:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 17:43:35 crc kubenswrapper[5116]: I1208 17:43:35.443617 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 17:43:35 crc kubenswrapper[5116]: I1208 17:43:35.443659 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 17:43:35 crc kubenswrapper[5116]: I1208 17:43:35.443671 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 17:43:35 crc kubenswrapper[5116]: I1208 17:43:35.443688 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 17:43:35 crc kubenswrapper[5116]: I1208 17:43:35.443700 5116 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T17:43:35Z","lastTransitionTime":"2025-12-08T17:43:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 17:43:35 crc kubenswrapper[5116]: I1208 17:43:35.547221 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 17:43:35 crc kubenswrapper[5116]: I1208 17:43:35.547627 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 17:43:35 crc kubenswrapper[5116]: I1208 17:43:35.547641 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 17:43:35 crc kubenswrapper[5116]: I1208 17:43:35.547659 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 17:43:35 crc kubenswrapper[5116]: I1208 17:43:35.547670 5116 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T17:43:35Z","lastTransitionTime":"2025-12-08T17:43:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 17:43:35 crc kubenswrapper[5116]: I1208 17:43:35.756184 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 17:43:35 crc kubenswrapper[5116]: I1208 17:43:35.756229 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 17:43:35 crc kubenswrapper[5116]: I1208 17:43:35.756272 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 17:43:35 crc kubenswrapper[5116]: I1208 17:43:35.756299 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 17:43:35 crc kubenswrapper[5116]: I1208 17:43:35.756310 5116 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T17:43:35Z","lastTransitionTime":"2025-12-08T17:43:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 17:43:35 crc kubenswrapper[5116]: I1208 17:43:35.879846 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 17:43:35 crc kubenswrapper[5116]: I1208 17:43:35.879887 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 17:43:35 crc kubenswrapper[5116]: I1208 17:43:35.879899 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 17:43:35 crc kubenswrapper[5116]: I1208 17:43:35.879917 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 17:43:35 crc kubenswrapper[5116]: I1208 17:43:35.879929 5116 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T17:43:35Z","lastTransitionTime":"2025-12-08T17:43:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 17:43:35 crc kubenswrapper[5116]: I1208 17:43:35.981563 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 17:43:35 crc kubenswrapper[5116]: I1208 17:43:35.981632 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 17:43:35 crc kubenswrapper[5116]: I1208 17:43:35.981647 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 17:43:35 crc kubenswrapper[5116]: I1208 17:43:35.981668 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 17:43:35 crc kubenswrapper[5116]: I1208 17:43:35.981688 5116 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T17:43:35Z","lastTransitionTime":"2025-12-08T17:43:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 17:43:36 crc kubenswrapper[5116]: I1208 17:43:36.084048 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 17:43:36 crc kubenswrapper[5116]: I1208 17:43:36.084087 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 17:43:36 crc kubenswrapper[5116]: I1208 17:43:36.084095 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 17:43:36 crc kubenswrapper[5116]: I1208 17:43:36.084110 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 17:43:36 crc kubenswrapper[5116]: I1208 17:43:36.084119 5116 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T17:43:36Z","lastTransitionTime":"2025-12-08T17:43:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 17:43:36 crc kubenswrapper[5116]: I1208 17:43:36.180403 5116 generic.go:358] "Generic (PLEG): container finished" podID="4f09ae7f-7717-4477-b860-d6bc280c6fd6" containerID="f35c347f0a9fc5d8c92e84504f38ba6c1a0b3b3dead625c219ebe736127e5cef" exitCode=0 Dec 08 17:43:36 crc kubenswrapper[5116]: I1208 17:43:36.180506 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-p56xf" event={"ID":"4f09ae7f-7717-4477-b860-d6bc280c6fd6","Type":"ContainerDied","Data":"f35c347f0a9fc5d8c92e84504f38ba6c1a0b3b3dead625c219ebe736127e5cef"} Dec 08 17:43:36 crc kubenswrapper[5116]: I1208 17:43:36.185803 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 17:43:36 crc kubenswrapper[5116]: I1208 17:43:36.186126 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 17:43:36 crc kubenswrapper[5116]: I1208 17:43:36.186146 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 17:43:36 crc kubenswrapper[5116]: I1208 17:43:36.186167 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 17:43:36 crc kubenswrapper[5116]: I1208 17:43:36.186184 5116 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T17:43:36Z","lastTransitionTime":"2025-12-08T17:43:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 17:43:36 crc kubenswrapper[5116]: I1208 17:43:36.187788 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zm56h" event={"ID":"17cf2230-8798-4fb0-b89b-43901121fd07","Type":"ContainerStarted","Data":"c2d65cd5cbd25ba2aa8ee1ee5d3ee19de672253be1241f5dd6272ffbbcf572b9"} Dec 08 17:43:36 crc kubenswrapper[5116]: I1208 17:43:36.187828 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zm56h" event={"ID":"17cf2230-8798-4fb0-b89b-43901121fd07","Type":"ContainerStarted","Data":"df74991b9351b83a6afafbbed676c14a19d840f12be07cefd14b14577801ad8e"} Dec 08 17:43:36 crc kubenswrapper[5116]: I1208 17:43:36.296020 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 17:43:36 crc kubenswrapper[5116]: I1208 17:43:36.296081 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 17:43:36 crc kubenswrapper[5116]: I1208 17:43:36.296096 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 17:43:36 crc kubenswrapper[5116]: I1208 17:43:36.296116 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 17:43:36 crc kubenswrapper[5116]: I1208 17:43:36.296130 5116 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T17:43:36Z","lastTransitionTime":"2025-12-08T17:43:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 17:43:36 crc kubenswrapper[5116]: I1208 17:43:36.399012 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 17:43:36 crc kubenswrapper[5116]: I1208 17:43:36.399053 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 17:43:36 crc kubenswrapper[5116]: I1208 17:43:36.399062 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 17:43:36 crc kubenswrapper[5116]: I1208 17:43:36.399076 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 17:43:36 crc kubenswrapper[5116]: I1208 17:43:36.399088 5116 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T17:43:36Z","lastTransitionTime":"2025-12-08T17:43:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 17:43:36 crc kubenswrapper[5116]: I1208 17:43:36.490195 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Dec 08 17:43:36 crc kubenswrapper[5116]: E1208 17:43:36.490583 5116 secret.go:189] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 08 17:43:36 crc kubenswrapper[5116]: I1208 17:43:36.490705 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-gwt8b\" (UniqueName: \"kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b\") pod \"network-check-target-fhkjl\" (UID: \"17b87002-b798-480a-8e17-83053d698239\") " pod="openshift-network-diagnostics/network-check-target-fhkjl" Dec 08 17:43:36 crc kubenswrapper[5116]: E1208 17:43:36.490779 5116 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert podName:6a9ae5f6-97bd-46ac-bafa-ca1b4452a141 nodeName:}" failed. No retries permitted until 2025-12-08 17:43:40.490743947 +0000 UTC m=+90.287867381 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert") pod "networking-console-plugin-5ff7774fd9-nljh6" (UID: "6a9ae5f6-97bd-46ac-bafa-ca1b4452a141") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 08 17:43:36 crc kubenswrapper[5116]: E1208 17:43:36.490869 5116 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 08 17:43:36 crc kubenswrapper[5116]: E1208 17:43:36.490894 5116 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 08 17:43:36 crc kubenswrapper[5116]: E1208 17:43:36.490908 5116 projected.go:194] Error preparing data for projected volume kube-api-access-gwt8b for pod openshift-network-diagnostics/network-check-target-fhkjl: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 08 17:43:36 crc kubenswrapper[5116]: I1208 17:43:36.490925 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-l7w75\" (UniqueName: \"kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75\") pod \"network-check-source-5bb8f5cd97-xdvz5\" (UID: \"f863fff9-286a-45fa-b8f0-8a86994b8440\") " pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Dec 08 17:43:36 crc kubenswrapper[5116]: E1208 17:43:36.490992 5116 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b podName:17b87002-b798-480a-8e17-83053d698239 nodeName:}" failed. 
No retries permitted until 2025-12-08 17:43:40.490956722 +0000 UTC m=+90.288080156 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-gwt8b" (UniqueName: "kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b") pod "network-check-target-fhkjl" (UID: "17b87002-b798-480a-8e17-83053d698239") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 08 17:43:36 crc kubenswrapper[5116]: I1208 17:43:36.491045 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Dec 08 17:43:36 crc kubenswrapper[5116]: E1208 17:43:36.491059 5116 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 08 17:43:36 crc kubenswrapper[5116]: E1208 17:43:36.491087 5116 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 08 17:43:36 crc kubenswrapper[5116]: E1208 17:43:36.491110 5116 projected.go:194] Error preparing data for projected volume kube-api-access-l7w75 for pod openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 08 17:43:36 crc kubenswrapper[5116]: E1208 17:43:36.491154 5116 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75 podName:f863fff9-286a-45fa-b8f0-8a86994b8440 nodeName:}" failed. No retries permitted until 2025-12-08 17:43:40.491143817 +0000 UTC m=+90.288267271 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-l7w75" (UniqueName: "kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75") pod "network-check-source-5bb8f5cd97-xdvz5" (UID: "f863fff9-286a-45fa-b8f0-8a86994b8440") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 08 17:43:36 crc kubenswrapper[5116]: E1208 17:43:36.491195 5116 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Dec 08 17:43:36 crc kubenswrapper[5116]: E1208 17:43:36.491267 5116 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf podName:6a9ae5f6-97bd-46ac-bafa-ca1b4452a141 nodeName:}" failed. No retries permitted until 2025-12-08 17:43:40.491232009 +0000 UTC m=+90.288355463 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf") pod "networking-console-plugin-5ff7774fd9-nljh6" (UID: "6a9ae5f6-97bd-46ac-bafa-ca1b4452a141") : object "openshift-network-console"/"networking-console-plugin" not registered Dec 08 17:43:36 crc kubenswrapper[5116]: I1208 17:43:36.501835 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 17:43:36 crc kubenswrapper[5116]: I1208 17:43:36.501890 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 17:43:36 crc kubenswrapper[5116]: I1208 17:43:36.501901 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 17:43:36 crc kubenswrapper[5116]: I1208 17:43:36.501936 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 17:43:36 crc kubenswrapper[5116]: I1208 17:43:36.501952 5116 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T17:43:36Z","lastTransitionTime":"2025-12-08T17:43:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 17:43:36 crc kubenswrapper[5116]: I1208 17:43:36.604425 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 17:43:36 crc kubenswrapper[5116]: I1208 17:43:36.604808 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 17:43:36 crc kubenswrapper[5116]: I1208 17:43:36.604822 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 17:43:36 crc kubenswrapper[5116]: I1208 17:43:36.604848 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 17:43:36 crc kubenswrapper[5116]: I1208 17:43:36.604861 5116 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T17:43:36Z","lastTransitionTime":"2025-12-08T17:43:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 17:43:36 crc kubenswrapper[5116]: I1208 17:43:36.679658 5116 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5ft89" Dec 08 17:43:36 crc kubenswrapper[5116]: I1208 17:43:36.679685 5116 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl" Dec 08 17:43:36 crc kubenswrapper[5116]: E1208 17:43:36.679864 5116 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-5ft89" podUID="19151390-7d67-4ae9-8520-ae20b8eb46f8" Dec 08 17:43:36 crc kubenswrapper[5116]: I1208 17:43:36.679697 5116 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Dec 08 17:43:36 crc kubenswrapper[5116]: E1208 17:43:36.679907 5116 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239" Dec 08 17:43:36 crc kubenswrapper[5116]: E1208 17:43:36.679988 5116 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440" Dec 08 17:43:36 crc kubenswrapper[5116]: I1208 17:43:36.680375 5116 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Dec 08 17:43:36 crc kubenswrapper[5116]: E1208 17:43:36.680484 5116 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141" Dec 08 17:43:36 crc kubenswrapper[5116]: I1208 17:43:36.693746 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 08 17:43:36 crc kubenswrapper[5116]: I1208 17:43:36.693857 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/19151390-7d67-4ae9-8520-ae20b8eb46f8-metrics-certs\") pod \"network-metrics-daemon-5ft89\" (UID: \"19151390-7d67-4ae9-8520-ae20b8eb46f8\") " pod="openshift-multus/network-metrics-daemon-5ft89" Dec 08 17:43:36 crc kubenswrapper[5116]: E1208 17:43:36.694041 5116 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Dec 08 17:43:36 crc kubenswrapper[5116]: E1208 17:43:36.694154 5116 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 17:43:40.694119866 +0000 UTC m=+90.491243100 (durationBeforeRetry 4s). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 17:43:36 crc kubenswrapper[5116]: E1208 17:43:36.694260 5116 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/19151390-7d67-4ae9-8520-ae20b8eb46f8-metrics-certs podName:19151390-7d67-4ae9-8520-ae20b8eb46f8 nodeName:}" failed. No retries permitted until 2025-12-08 17:43:40.694222929 +0000 UTC m=+90.491346163 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/19151390-7d67-4ae9-8520-ae20b8eb46f8-metrics-certs") pod "network-metrics-daemon-5ft89" (UID: "19151390-7d67-4ae9-8520-ae20b8eb46f8") : object "openshift-multus"/"metrics-daemon-secret" not registered Dec 08 17:43:36 crc kubenswrapper[5116]: I1208 17:43:36.707406 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 17:43:36 crc kubenswrapper[5116]: I1208 17:43:36.707450 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 17:43:36 crc kubenswrapper[5116]: I1208 17:43:36.707460 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 17:43:36 crc kubenswrapper[5116]: I1208 17:43:36.707475 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 17:43:36 crc kubenswrapper[5116]: I1208 17:43:36.707485 5116 setters.go:618] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T17:43:36Z","lastTransitionTime":"2025-12-08T17:43:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 17:43:36 crc kubenswrapper[5116]: I1208 17:43:36.815675 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 17:43:36 crc kubenswrapper[5116]: I1208 17:43:36.815721 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 17:43:36 crc kubenswrapper[5116]: I1208 17:43:36.815731 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 17:43:36 crc kubenswrapper[5116]: I1208 17:43:36.815746 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 17:43:36 crc kubenswrapper[5116]: I1208 17:43:36.815756 5116 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T17:43:36Z","lastTransitionTime":"2025-12-08T17:43:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 17:43:36 crc kubenswrapper[5116]: I1208 17:43:36.918117 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 17:43:36 crc kubenswrapper[5116]: I1208 17:43:36.918175 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 17:43:36 crc kubenswrapper[5116]: I1208 17:43:36.918185 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 17:43:36 crc kubenswrapper[5116]: I1208 17:43:36.918208 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 17:43:36 crc kubenswrapper[5116]: I1208 17:43:36.918220 5116 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T17:43:36Z","lastTransitionTime":"2025-12-08T17:43:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 17:43:37 crc kubenswrapper[5116]: I1208 17:43:37.020749 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 17:43:37 crc kubenswrapper[5116]: I1208 17:43:37.020830 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 17:43:37 crc kubenswrapper[5116]: I1208 17:43:37.020844 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 17:43:37 crc kubenswrapper[5116]: I1208 17:43:37.020866 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 17:43:37 crc kubenswrapper[5116]: I1208 17:43:37.020888 5116 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T17:43:37Z","lastTransitionTime":"2025-12-08T17:43:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 17:43:37 crc kubenswrapper[5116]: I1208 17:43:37.123847 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 17:43:37 crc kubenswrapper[5116]: I1208 17:43:37.123916 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 17:43:37 crc kubenswrapper[5116]: I1208 17:43:37.123930 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 17:43:37 crc kubenswrapper[5116]: I1208 17:43:37.123958 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 17:43:37 crc kubenswrapper[5116]: I1208 17:43:37.123972 5116 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T17:43:37Z","lastTransitionTime":"2025-12-08T17:43:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 17:43:37 crc kubenswrapper[5116]: I1208 17:43:37.195854 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zm56h" event={"ID":"17cf2230-8798-4fb0-b89b-43901121fd07","Type":"ContainerStarted","Data":"8f9ddf9b40be2523a293c7a25dcd093d1064c0ea5ac00cfcab147d4e52c1b577"} Dec 08 17:43:37 crc kubenswrapper[5116]: I1208 17:43:37.198846 5116 generic.go:358] "Generic (PLEG): container finished" podID="4f09ae7f-7717-4477-b860-d6bc280c6fd6" containerID="24dfaf4cafe76169a969534c46092d57ecbc2ac836070b8f849b9a8131f59131" exitCode=0 Dec 08 17:43:37 crc kubenswrapper[5116]: I1208 17:43:37.198893 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-p56xf" event={"ID":"4f09ae7f-7717-4477-b860-d6bc280c6fd6","Type":"ContainerDied","Data":"24dfaf4cafe76169a969534c46092d57ecbc2ac836070b8f849b9a8131f59131"} Dec 08 17:43:37 crc kubenswrapper[5116]: I1208 17:43:37.227856 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 17:43:37 crc kubenswrapper[5116]: I1208 17:43:37.227919 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 17:43:37 crc kubenswrapper[5116]: I1208 17:43:37.227936 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 17:43:37 crc kubenswrapper[5116]: I1208 17:43:37.227959 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 17:43:37 crc kubenswrapper[5116]: I1208 17:43:37.227974 5116 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T17:43:37Z","lastTransitionTime":"2025-12-08T17:43:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 17:43:37 crc kubenswrapper[5116]: I1208 17:43:37.330343 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 17:43:37 crc kubenswrapper[5116]: I1208 17:43:37.330398 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 17:43:37 crc kubenswrapper[5116]: I1208 17:43:37.330408 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 17:43:37 crc kubenswrapper[5116]: I1208 17:43:37.330423 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 17:43:37 crc kubenswrapper[5116]: I1208 17:43:37.330432 5116 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T17:43:37Z","lastTransitionTime":"2025-12-08T17:43:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 17:43:37 crc kubenswrapper[5116]: I1208 17:43:37.432413 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 17:43:37 crc kubenswrapper[5116]: I1208 17:43:37.432479 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 17:43:37 crc kubenswrapper[5116]: I1208 17:43:37.432498 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 17:43:37 crc kubenswrapper[5116]: I1208 17:43:37.432523 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 17:43:37 crc kubenswrapper[5116]: I1208 17:43:37.432541 5116 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T17:43:37Z","lastTransitionTime":"2025-12-08T17:43:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 17:43:37 crc kubenswrapper[5116]: I1208 17:43:37.535376 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 17:43:37 crc kubenswrapper[5116]: I1208 17:43:37.535431 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 17:43:37 crc kubenswrapper[5116]: I1208 17:43:37.535442 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 17:43:37 crc kubenswrapper[5116]: I1208 17:43:37.535457 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 17:43:37 crc kubenswrapper[5116]: I1208 17:43:37.535467 5116 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T17:43:37Z","lastTransitionTime":"2025-12-08T17:43:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 17:43:37 crc kubenswrapper[5116]: I1208 17:43:37.690392 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 17:43:37 crc kubenswrapper[5116]: I1208 17:43:37.690437 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 17:43:37 crc kubenswrapper[5116]: I1208 17:43:37.690449 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 17:43:37 crc kubenswrapper[5116]: I1208 17:43:37.690466 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 17:43:37 crc kubenswrapper[5116]: I1208 17:43:37.690477 5116 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T17:43:37Z","lastTransitionTime":"2025-12-08T17:43:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 17:43:37 crc kubenswrapper[5116]: I1208 17:43:37.792880 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 17:43:37 crc kubenswrapper[5116]: I1208 17:43:37.792934 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 17:43:37 crc kubenswrapper[5116]: I1208 17:43:37.792949 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 17:43:37 crc kubenswrapper[5116]: I1208 17:43:37.792967 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 17:43:37 crc kubenswrapper[5116]: I1208 17:43:37.792978 5116 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T17:43:37Z","lastTransitionTime":"2025-12-08T17:43:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 17:43:37 crc kubenswrapper[5116]: I1208 17:43:37.895980 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 17:43:37 crc kubenswrapper[5116]: I1208 17:43:37.896046 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 17:43:37 crc kubenswrapper[5116]: I1208 17:43:37.896063 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 17:43:37 crc kubenswrapper[5116]: I1208 17:43:37.896085 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 17:43:37 crc kubenswrapper[5116]: I1208 17:43:37.896099 5116 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T17:43:37Z","lastTransitionTime":"2025-12-08T17:43:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 17:43:37 crc kubenswrapper[5116]: I1208 17:43:37.998301 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 17:43:37 crc kubenswrapper[5116]: I1208 17:43:37.998363 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 17:43:37 crc kubenswrapper[5116]: I1208 17:43:37.998382 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 17:43:37 crc kubenswrapper[5116]: I1208 17:43:37.998401 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 17:43:37 crc kubenswrapper[5116]: I1208 17:43:37.998413 5116 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T17:43:37Z","lastTransitionTime":"2025-12-08T17:43:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 17:43:38 crc kubenswrapper[5116]: I1208 17:43:38.100293 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 17:43:38 crc kubenswrapper[5116]: I1208 17:43:38.100337 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 17:43:38 crc kubenswrapper[5116]: I1208 17:43:38.100348 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 17:43:38 crc kubenswrapper[5116]: I1208 17:43:38.100366 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 17:43:38 crc kubenswrapper[5116]: I1208 17:43:38.100378 5116 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T17:43:38Z","lastTransitionTime":"2025-12-08T17:43:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 17:43:38 crc kubenswrapper[5116]: I1208 17:43:38.201961 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 17:43:38 crc kubenswrapper[5116]: I1208 17:43:38.202065 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 17:43:38 crc kubenswrapper[5116]: I1208 17:43:38.202094 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 17:43:38 crc kubenswrapper[5116]: I1208 17:43:38.202132 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 17:43:38 crc kubenswrapper[5116]: I1208 17:43:38.202161 5116 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T17:43:38Z","lastTransitionTime":"2025-12-08T17:43:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 17:43:38 crc kubenswrapper[5116]: I1208 17:43:38.209391 5116 generic.go:358] "Generic (PLEG): container finished" podID="4f09ae7f-7717-4477-b860-d6bc280c6fd6" containerID="961468c5a099e38991e4ea11db6bdfaaa4be7f69fb20d6f1d405bfe1a236b583" exitCode=0 Dec 08 17:43:38 crc kubenswrapper[5116]: I1208 17:43:38.209492 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-p56xf" event={"ID":"4f09ae7f-7717-4477-b860-d6bc280c6fd6","Type":"ContainerDied","Data":"961468c5a099e38991e4ea11db6bdfaaa4be7f69fb20d6f1d405bfe1a236b583"} Dec 08 17:43:38 crc kubenswrapper[5116]: I1208 17:43:38.212568 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-5jnd7" event={"ID":"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc","Type":"ContainerStarted","Data":"6b26a0f40acccdf55bad53581fc5caad200b6bcd8117c8d1f0e4896a50462e46"} Dec 08 17:43:38 crc kubenswrapper[5116]: I1208 17:43:38.308670 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 17:43:38 crc kubenswrapper[5116]: I1208 17:43:38.309619 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 17:43:38 crc kubenswrapper[5116]: I1208 17:43:38.309640 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 17:43:38 crc kubenswrapper[5116]: I1208 17:43:38.309657 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 17:43:38 crc kubenswrapper[5116]: I1208 17:43:38.309668 5116 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T17:43:38Z","lastTransitionTime":"2025-12-08T17:43:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 17:43:38 crc kubenswrapper[5116]: I1208 17:43:38.411749 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 17:43:38 crc kubenswrapper[5116]: I1208 17:43:38.411800 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 17:43:38 crc kubenswrapper[5116]: I1208 17:43:38.411811 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 17:43:38 crc kubenswrapper[5116]: I1208 17:43:38.411830 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 17:43:38 crc kubenswrapper[5116]: I1208 17:43:38.411843 5116 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T17:43:38Z","lastTransitionTime":"2025-12-08T17:43:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 17:43:38 crc kubenswrapper[5116]: I1208 17:43:38.514200 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 17:43:38 crc kubenswrapper[5116]: I1208 17:43:38.514301 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 17:43:38 crc kubenswrapper[5116]: I1208 17:43:38.514322 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 17:43:38 crc kubenswrapper[5116]: I1208 17:43:38.514340 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 17:43:38 crc kubenswrapper[5116]: I1208 17:43:38.514351 5116 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T17:43:38Z","lastTransitionTime":"2025-12-08T17:43:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 17:43:38 crc kubenswrapper[5116]: I1208 17:43:38.617034 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 17:43:38 crc kubenswrapper[5116]: I1208 17:43:38.617102 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 17:43:38 crc kubenswrapper[5116]: I1208 17:43:38.617114 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 17:43:38 crc kubenswrapper[5116]: I1208 17:43:38.617134 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 17:43:38 crc kubenswrapper[5116]: I1208 17:43:38.617148 5116 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T17:43:38Z","lastTransitionTime":"2025-12-08T17:43:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 17:43:38 crc kubenswrapper[5116]: I1208 17:43:38.680386 5116 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Dec 08 17:43:38 crc kubenswrapper[5116]: I1208 17:43:38.680407 5116 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5ft89" Dec 08 17:43:38 crc kubenswrapper[5116]: E1208 17:43:38.680609 5116 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-5ft89" podUID="19151390-7d67-4ae9-8520-ae20b8eb46f8" Dec 08 17:43:38 crc kubenswrapper[5116]: E1208 17:43:38.680508 5116 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141" Dec 08 17:43:38 crc kubenswrapper[5116]: I1208 17:43:38.680657 5116 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl" Dec 08 17:43:38 crc kubenswrapper[5116]: E1208 17:43:38.680718 5116 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239" Dec 08 17:43:38 crc kubenswrapper[5116]: I1208 17:43:38.680747 5116 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Dec 08 17:43:38 crc kubenswrapper[5116]: E1208 17:43:38.680803 5116 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440" Dec 08 17:43:38 crc kubenswrapper[5116]: I1208 17:43:38.718868 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 17:43:38 crc kubenswrapper[5116]: I1208 17:43:38.718902 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 17:43:38 crc kubenswrapper[5116]: I1208 17:43:38.718912 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 17:43:38 crc kubenswrapper[5116]: I1208 17:43:38.718940 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 17:43:38 crc kubenswrapper[5116]: I1208 17:43:38.718959 5116 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T17:43:38Z","lastTransitionTime":"2025-12-08T17:43:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 17:43:38 crc kubenswrapper[5116]: I1208 17:43:38.770278 5116 reflector.go:430] "Caches populated" type="*v1.RuntimeClass" reflector="k8s.io/client-go/informers/factory.go:160" Dec 08 17:43:38 crc kubenswrapper[5116]: I1208 17:43:38.820538 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 17:43:38 crc kubenswrapper[5116]: I1208 17:43:38.820587 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 17:43:38 crc kubenswrapper[5116]: I1208 17:43:38.820598 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 17:43:38 crc kubenswrapper[5116]: I1208 17:43:38.820611 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 17:43:38 crc kubenswrapper[5116]: I1208 17:43:38.820620 5116 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T17:43:38Z","lastTransitionTime":"2025-12-08T17:43:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 17:43:38 crc kubenswrapper[5116]: I1208 17:43:38.940442 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 17:43:38 crc kubenswrapper[5116]: I1208 17:43:38.940530 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 17:43:38 crc kubenswrapper[5116]: I1208 17:43:38.940549 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 17:43:38 crc kubenswrapper[5116]: I1208 17:43:38.940600 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 17:43:38 crc kubenswrapper[5116]: I1208 17:43:38.940616 5116 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T17:43:38Z","lastTransitionTime":"2025-12-08T17:43:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 17:43:39 crc kubenswrapper[5116]: I1208 17:43:39.042972 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 17:43:39 crc kubenswrapper[5116]: I1208 17:43:39.043039 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 17:43:39 crc kubenswrapper[5116]: I1208 17:43:39.043052 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 17:43:39 crc kubenswrapper[5116]: I1208 17:43:39.043071 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 17:43:39 crc kubenswrapper[5116]: I1208 17:43:39.043085 5116 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T17:43:39Z","lastTransitionTime":"2025-12-08T17:43:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 17:43:39 crc kubenswrapper[5116]: I1208 17:43:39.094439 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 17:43:39 crc kubenswrapper[5116]: I1208 17:43:39.094484 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 17:43:39 crc kubenswrapper[5116]: I1208 17:43:39.094496 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 17:43:39 crc kubenswrapper[5116]: I1208 17:43:39.094510 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 17:43:39 crc kubenswrapper[5116]: I1208 17:43:39.094519 5116 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T17:43:39Z","lastTransitionTime":"2025-12-08T17:43:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 17:43:39 crc kubenswrapper[5116]: I1208 17:43:39.148673 5116 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-cluster-version/cluster-version-operator-7c9b9cfd6-5jt8l"] Dec 08 17:43:39 crc kubenswrapper[5116]: I1208 17:43:39.330351 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zm56h" event={"ID":"17cf2230-8798-4fb0-b89b-43901121fd07","Type":"ContainerStarted","Data":"d43248e58f8ef79a4ca47051d7abc1ebda6dfe4b3a3894c0a42cf2eadd863a40"} Dec 08 17:43:39 crc kubenswrapper[5116]: I1208 17:43:39.330419 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-p56xf" event={"ID":"4f09ae7f-7717-4477-b860-d6bc280c6fd6","Type":"ContainerStarted","Data":"25bd65d70845f6ed91ccf77e250ab95dc67cac6344dc221f4374af4ac2552aed"} Dec 08 17:43:39 crc kubenswrapper[5116]: I1208 17:43:39.331267 5116 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-5jt8l" Dec 08 17:43:39 crc kubenswrapper[5116]: I1208 17:43:39.334190 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-version\"/\"openshift-service-ca.crt\"" Dec 08 17:43:39 crc kubenswrapper[5116]: I1208 17:43:39.334322 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-version\"/\"kube-root-ca.crt\"" Dec 08 17:43:39 crc kubenswrapper[5116]: I1208 17:43:39.334334 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-cluster-version\"/\"cluster-version-operator-serving-cert\"" Dec 08 17:43:39 crc kubenswrapper[5116]: I1208 17:43:39.334438 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-cluster-version\"/\"default-dockercfg-hqpm5\"" Dec 08 17:43:39 crc kubenswrapper[5116]: I1208 17:43:39.489742 5116 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/dee0bf47-a8b4-4362-b361-7c23e2199eff-service-ca\") pod \"cluster-version-operator-7c9b9cfd6-5jt8l\" (UID: \"dee0bf47-a8b4-4362-b361-7c23e2199eff\") " pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-5jt8l" Dec 08 17:43:39 crc kubenswrapper[5116]: I1208 17:43:39.489823 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/dee0bf47-a8b4-4362-b361-7c23e2199eff-kube-api-access\") pod \"cluster-version-operator-7c9b9cfd6-5jt8l\" (UID: \"dee0bf47-a8b4-4362-b361-7c23e2199eff\") " pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-5jt8l" Dec 08 17:43:39 crc kubenswrapper[5116]: I1208 17:43:39.490143 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/dee0bf47-a8b4-4362-b361-7c23e2199eff-etc-ssl-certs\") pod \"cluster-version-operator-7c9b9cfd6-5jt8l\" (UID: \"dee0bf47-a8b4-4362-b361-7c23e2199eff\") " pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-5jt8l" Dec 08 17:43:39 crc kubenswrapper[5116]: I1208 17:43:39.490201 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/dee0bf47-a8b4-4362-b361-7c23e2199eff-etc-cvo-updatepayloads\") pod \"cluster-version-operator-7c9b9cfd6-5jt8l\" (UID: \"dee0bf47-a8b4-4362-b361-7c23e2199eff\") " pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-5jt8l" Dec 08 17:43:39 crc kubenswrapper[5116]: I1208 17:43:39.490556 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/dee0bf47-a8b4-4362-b361-7c23e2199eff-serving-cert\") pod 
\"cluster-version-operator-7c9b9cfd6-5jt8l\" (UID: \"dee0bf47-a8b4-4362-b361-7c23e2199eff\") " pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-5jt8l" Dec 08 17:43:39 crc kubenswrapper[5116]: I1208 17:43:39.591628 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/dee0bf47-a8b4-4362-b361-7c23e2199eff-etc-ssl-certs\") pod \"cluster-version-operator-7c9b9cfd6-5jt8l\" (UID: \"dee0bf47-a8b4-4362-b361-7c23e2199eff\") " pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-5jt8l" Dec 08 17:43:39 crc kubenswrapper[5116]: I1208 17:43:39.591701 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/dee0bf47-a8b4-4362-b361-7c23e2199eff-etc-cvo-updatepayloads\") pod \"cluster-version-operator-7c9b9cfd6-5jt8l\" (UID: \"dee0bf47-a8b4-4362-b361-7c23e2199eff\") " pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-5jt8l" Dec 08 17:43:39 crc kubenswrapper[5116]: I1208 17:43:39.591784 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/dee0bf47-a8b4-4362-b361-7c23e2199eff-serving-cert\") pod \"cluster-version-operator-7c9b9cfd6-5jt8l\" (UID: \"dee0bf47-a8b4-4362-b361-7c23e2199eff\") " pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-5jt8l" Dec 08 17:43:39 crc kubenswrapper[5116]: I1208 17:43:39.591807 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/dee0bf47-a8b4-4362-b361-7c23e2199eff-service-ca\") pod \"cluster-version-operator-7c9b9cfd6-5jt8l\" (UID: \"dee0bf47-a8b4-4362-b361-7c23e2199eff\") " pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-5jt8l" Dec 08 17:43:39 crc kubenswrapper[5116]: I1208 17:43:39.591796 5116 operation_generator.go:615] "MountVolume.SetUp 
succeeded for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/dee0bf47-a8b4-4362-b361-7c23e2199eff-etc-ssl-certs\") pod \"cluster-version-operator-7c9b9cfd6-5jt8l\" (UID: \"dee0bf47-a8b4-4362-b361-7c23e2199eff\") " pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-5jt8l" Dec 08 17:43:39 crc kubenswrapper[5116]: I1208 17:43:39.591942 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/dee0bf47-a8b4-4362-b361-7c23e2199eff-etc-cvo-updatepayloads\") pod \"cluster-version-operator-7c9b9cfd6-5jt8l\" (UID: \"dee0bf47-a8b4-4362-b361-7c23e2199eff\") " pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-5jt8l" Dec 08 17:43:39 crc kubenswrapper[5116]: I1208 17:43:39.592040 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/dee0bf47-a8b4-4362-b361-7c23e2199eff-kube-api-access\") pod \"cluster-version-operator-7c9b9cfd6-5jt8l\" (UID: \"dee0bf47-a8b4-4362-b361-7c23e2199eff\") " pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-5jt8l" Dec 08 17:43:39 crc kubenswrapper[5116]: I1208 17:43:39.595137 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/dee0bf47-a8b4-4362-b361-7c23e2199eff-service-ca\") pod \"cluster-version-operator-7c9b9cfd6-5jt8l\" (UID: \"dee0bf47-a8b4-4362-b361-7c23e2199eff\") " pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-5jt8l" Dec 08 17:43:39 crc kubenswrapper[5116]: I1208 17:43:39.638692 5116 certificate_manager.go:566] "Rotating certificates" logger="kubernetes.io/kubelet-serving" Dec 08 17:43:39 crc kubenswrapper[5116]: I1208 17:43:39.647609 5116 reflector.go:430] "Caches populated" logger="kubernetes.io/kubelet-serving" type="*v1.CertificateSigningRequest" reflector="k8s.io/client-go/tools/watch/informerwatcher.go:162" Dec 08 
17:43:39 crc kubenswrapper[5116]: I1208 17:43:39.650347 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/dee0bf47-a8b4-4362-b361-7c23e2199eff-serving-cert\") pod \"cluster-version-operator-7c9b9cfd6-5jt8l\" (UID: \"dee0bf47-a8b4-4362-b361-7c23e2199eff\") " pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-5jt8l" Dec 08 17:43:39 crc kubenswrapper[5116]: I1208 17:43:39.652650 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/dee0bf47-a8b4-4362-b361-7c23e2199eff-kube-api-access\") pod \"cluster-version-operator-7c9b9cfd6-5jt8l\" (UID: \"dee0bf47-a8b4-4362-b361-7c23e2199eff\") " pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-5jt8l" Dec 08 17:43:39 crc kubenswrapper[5116]: I1208 17:43:39.978009 5116 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-5jt8l" Dec 08 17:43:40 crc kubenswrapper[5116]: W1208 17:43:40.057928 5116 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poddee0bf47_a8b4_4362_b361_7c23e2199eff.slice/crio-9ae97690c5c3069ba827b6648952a0b0de1416dbfa2b6a60ec625949eb4b709c WatchSource:0}: Error finding container 9ae97690c5c3069ba827b6648952a0b0de1416dbfa2b6a60ec625949eb4b709c: Status 404 returned error can't find the container with id 9ae97690c5c3069ba827b6648952a0b0de1416dbfa2b6a60ec625949eb4b709c Dec 08 17:43:40 crc kubenswrapper[5116]: I1208 17:43:40.354280 5116 generic.go:358] "Generic (PLEG): container finished" podID="4f09ae7f-7717-4477-b860-d6bc280c6fd6" containerID="25bd65d70845f6ed91ccf77e250ab95dc67cac6344dc221f4374af4ac2552aed" exitCode=0 Dec 08 17:43:40 crc kubenswrapper[5116]: I1208 17:43:40.354441 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" 
pod="openshift-multus/multus-additional-cni-plugins-p56xf" event={"ID":"4f09ae7f-7717-4477-b860-d6bc280c6fd6","Type":"ContainerDied","Data":"25bd65d70845f6ed91ccf77e250ab95dc67cac6344dc221f4374af4ac2552aed"} Dec 08 17:43:40 crc kubenswrapper[5116]: I1208 17:43:40.356567 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-5jt8l" event={"ID":"dee0bf47-a8b4-4362-b361-7c23e2199eff","Type":"ContainerStarted","Data":"cc9ffd4e368e7372d99d8e77ebcb4146a3e232ea5e10696f52ae84ea85ab02e6"} Dec 08 17:43:40 crc kubenswrapper[5116]: I1208 17:43:40.356627 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-5jt8l" event={"ID":"dee0bf47-a8b4-4362-b361-7c23e2199eff","Type":"ContainerStarted","Data":"9ae97690c5c3069ba827b6648952a0b0de1416dbfa2b6a60ec625949eb4b709c"} Dec 08 17:43:40 crc kubenswrapper[5116]: I1208 17:43:40.392411 5116 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-5jt8l" podStartSLOduration=69.392387522 podStartE2EDuration="1m9.392387522s" podCreationTimestamp="2025-12-08 17:42:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-08 17:43:40.392229798 +0000 UTC m=+90.189353042" watchObservedRunningTime="2025-12-08 17:43:40.392387522 +0000 UTC m=+90.189510756" Dec 08 17:43:40 crc kubenswrapper[5116]: I1208 17:43:40.550200 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Dec 08 17:43:40 crc kubenswrapper[5116]: I1208 
17:43:40.550308 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-gwt8b\" (UniqueName: \"kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b\") pod \"network-check-target-fhkjl\" (UID: \"17b87002-b798-480a-8e17-83053d698239\") " pod="openshift-network-diagnostics/network-check-target-fhkjl" Dec 08 17:43:40 crc kubenswrapper[5116]: I1208 17:43:40.550349 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-l7w75\" (UniqueName: \"kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75\") pod \"network-check-source-5bb8f5cd97-xdvz5\" (UID: \"f863fff9-286a-45fa-b8f0-8a86994b8440\") " pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Dec 08 17:43:40 crc kubenswrapper[5116]: I1208 17:43:40.550378 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Dec 08 17:43:40 crc kubenswrapper[5116]: E1208 17:43:40.550543 5116 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Dec 08 17:43:40 crc kubenswrapper[5116]: E1208 17:43:40.550553 5116 secret.go:189] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 08 17:43:40 crc kubenswrapper[5116]: E1208 17:43:40.550603 5116 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 08 17:43:40 crc kubenswrapper[5116]: E1208 
17:43:40.550664 5116 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 08 17:43:40 crc kubenswrapper[5116]: E1208 17:43:40.550681 5116 projected.go:194] Error preparing data for projected volume kube-api-access-gwt8b for pod openshift-network-diagnostics/network-check-target-fhkjl: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 08 17:43:40 crc kubenswrapper[5116]: E1208 17:43:40.550603 5116 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 08 17:43:40 crc kubenswrapper[5116]: E1208 17:43:40.550801 5116 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 08 17:43:40 crc kubenswrapper[5116]: E1208 17:43:40.550627 5116 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf podName:6a9ae5f6-97bd-46ac-bafa-ca1b4452a141 nodeName:}" failed. No retries permitted until 2025-12-08 17:43:48.550610405 +0000 UTC m=+98.347733639 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf") pod "networking-console-plugin-5ff7774fd9-nljh6" (UID: "6a9ae5f6-97bd-46ac-bafa-ca1b4452a141") : object "openshift-network-console"/"networking-console-plugin" not registered Dec 08 17:43:40 crc kubenswrapper[5116]: E1208 17:43:40.550809 5116 projected.go:194] Error preparing data for projected volume kube-api-access-l7w75 for pod openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 08 17:43:40 crc kubenswrapper[5116]: E1208 17:43:40.550853 5116 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert podName:6a9ae5f6-97bd-46ac-bafa-ca1b4452a141 nodeName:}" failed. No retries permitted until 2025-12-08 17:43:48.55082319 +0000 UTC m=+98.347946424 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert") pod "networking-console-plugin-5ff7774fd9-nljh6" (UID: "6a9ae5f6-97bd-46ac-bafa-ca1b4452a141") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 08 17:43:40 crc kubenswrapper[5116]: E1208 17:43:40.550876 5116 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b podName:17b87002-b798-480a-8e17-83053d698239 nodeName:}" failed. No retries permitted until 2025-12-08 17:43:48.550867752 +0000 UTC m=+98.347990986 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-gwt8b" (UniqueName: "kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b") pod "network-check-target-fhkjl" (UID: "17b87002-b798-480a-8e17-83053d698239") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 08 17:43:40 crc kubenswrapper[5116]: E1208 17:43:40.550893 5116 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75 podName:f863fff9-286a-45fa-b8f0-8a86994b8440 nodeName:}" failed. No retries permitted until 2025-12-08 17:43:48.550885782 +0000 UTC m=+98.348009016 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-l7w75" (UniqueName: "kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75") pod "network-check-source-5bb8f5cd97-xdvz5" (UID: "f863fff9-286a-45fa-b8f0-8a86994b8440") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 08 17:43:40 crc kubenswrapper[5116]: I1208 17:43:40.683217 5116 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Dec 08 17:43:40 crc kubenswrapper[5116]: E1208 17:43:40.683401 5116 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440" Dec 08 17:43:40 crc kubenswrapper[5116]: I1208 17:43:40.683755 5116 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Dec 08 17:43:40 crc kubenswrapper[5116]: E1208 17:43:40.683815 5116 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141" Dec 08 17:43:40 crc kubenswrapper[5116]: I1208 17:43:40.683851 5116 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5ft89" Dec 08 17:43:40 crc kubenswrapper[5116]: E1208 17:43:40.683895 5116 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-5ft89" podUID="19151390-7d67-4ae9-8520-ae20b8eb46f8" Dec 08 17:43:40 crc kubenswrapper[5116]: I1208 17:43:40.683917 5116 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl" Dec 08 17:43:40 crc kubenswrapper[5116]: E1208 17:43:40.683987 5116 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239" Dec 08 17:43:40 crc kubenswrapper[5116]: I1208 17:43:40.753372 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 08 17:43:40 crc kubenswrapper[5116]: E1208 17:43:40.753590 5116 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 17:43:48.753561964 +0000 UTC m=+98.550685198 (durationBeforeRetry 8s). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 17:43:40 crc kubenswrapper[5116]: I1208 17:43:40.753962 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/19151390-7d67-4ae9-8520-ae20b8eb46f8-metrics-certs\") pod \"network-metrics-daemon-5ft89\" (UID: \"19151390-7d67-4ae9-8520-ae20b8eb46f8\") " pod="openshift-multus/network-metrics-daemon-5ft89" Dec 08 17:43:40 crc kubenswrapper[5116]: E1208 17:43:40.754094 5116 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Dec 08 17:43:40 
crc kubenswrapper[5116]: E1208 17:43:40.754151 5116 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/19151390-7d67-4ae9-8520-ae20b8eb46f8-metrics-certs podName:19151390-7d67-4ae9-8520-ae20b8eb46f8 nodeName:}" failed. No retries permitted until 2025-12-08 17:43:48.75413642 +0000 UTC m=+98.551259654 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/19151390-7d67-4ae9-8520-ae20b8eb46f8-metrics-certs") pod "network-metrics-daemon-5ft89" (UID: "19151390-7d67-4ae9-8520-ae20b8eb46f8") : object "openshift-multus"/"metrics-daemon-secret" not registered Dec 08 17:43:41 crc kubenswrapper[5116]: I1208 17:43:41.364656 5116 generic.go:358] "Generic (PLEG): container finished" podID="4f09ae7f-7717-4477-b860-d6bc280c6fd6" containerID="7a3e4b18910463177db7ef32a7a8705933b67dd242a70644e683ddaf0b133acc" exitCode=0 Dec 08 17:43:41 crc kubenswrapper[5116]: I1208 17:43:41.364875 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-p56xf" event={"ID":"4f09ae7f-7717-4477-b860-d6bc280c6fd6","Type":"ContainerDied","Data":"7a3e4b18910463177db7ef32a7a8705933b67dd242a70644e683ddaf0b133acc"} Dec 08 17:43:41 crc kubenswrapper[5116]: I1208 17:43:41.379278 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zm56h" event={"ID":"17cf2230-8798-4fb0-b89b-43901121fd07","Type":"ContainerStarted","Data":"0165060b7c7c730bc40c1f8e6a0e75452412dc4249378fb9fd54d4cfd49b82d6"} Dec 08 17:43:41 crc kubenswrapper[5116]: I1208 17:43:41.380287 5116 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-ovn-kubernetes/ovnkube-node-zm56h" Dec 08 17:43:41 crc kubenswrapper[5116]: I1208 17:43:41.380371 5116 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-ovn-kubernetes/ovnkube-node-zm56h" Dec 08 17:43:41 crc kubenswrapper[5116]: I1208 17:43:41.380389 
5116 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-ovn-kubernetes/ovnkube-node-zm56h"
Dec 08 17:43:41 crc kubenswrapper[5116]: I1208 17:43:41.429998 5116 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-zm56h" podStartSLOduration=70.429978141 podStartE2EDuration="1m10.429978141s" podCreationTimestamp="2025-12-08 17:42:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-08 17:43:41.429092049 +0000 UTC m=+91.226215283" watchObservedRunningTime="2025-12-08 17:43:41.429978141 +0000 UTC m=+91.227101375"
Dec 08 17:43:41 crc kubenswrapper[5116]: I1208 17:43:41.435350 5116 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-zm56h"
Dec 08 17:43:41 crc kubenswrapper[5116]: I1208 17:43:41.447389 5116 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-zm56h"
Dec 08 17:43:42 crc kubenswrapper[5116]: I1208 17:43:42.387110 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-p56xf" event={"ID":"4f09ae7f-7717-4477-b860-d6bc280c6fd6","Type":"ContainerStarted","Data":"c51c4fb91401534f80526d6bd6164d31bd33b3c74c1e3869904345bf5b6a173f"}
Dec 08 17:43:42 crc kubenswrapper[5116]: I1208 17:43:42.407253 5116 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-additional-cni-plugins-p56xf" podStartSLOduration=71.407220398 podStartE2EDuration="1m11.407220398s" podCreationTimestamp="2025-12-08 17:42:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-08 17:43:42.406888659 +0000 UTC m=+92.204011893" watchObservedRunningTime="2025-12-08 17:43:42.407220398 +0000 UTC m=+92.204343632"
Dec 08 17:43:42 crc kubenswrapper[5116]: I1208 17:43:42.694505 5116 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5"
Dec 08 17:43:42 crc kubenswrapper[5116]: I1208 17:43:42.694649 5116 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5ft89"
Dec 08 17:43:42 crc kubenswrapper[5116]: E1208 17:43:42.694661 5116 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440"
Dec 08 17:43:42 crc kubenswrapper[5116]: I1208 17:43:42.694766 5116 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl"
Dec 08 17:43:42 crc kubenswrapper[5116]: I1208 17:43:42.694801 5116 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6"
Dec 08 17:43:42 crc kubenswrapper[5116]: E1208 17:43:42.694958 5116 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141"
Dec 08 17:43:42 crc kubenswrapper[5116]: E1208 17:43:42.695038 5116 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-5ft89" podUID="19151390-7d67-4ae9-8520-ae20b8eb46f8"
Dec 08 17:43:42 crc kubenswrapper[5116]: E1208 17:43:42.695105 5116 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239"
Dec 08 17:43:43 crc kubenswrapper[5116]: I1208 17:43:43.920365 5116 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-5ft89"]
Dec 08 17:43:43 crc kubenswrapper[5116]: I1208 17:43:43.920630 5116 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5ft89"
Dec 08 17:43:43 crc kubenswrapper[5116]: E1208 17:43:43.920751 5116 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-5ft89" podUID="19151390-7d67-4ae9-8520-ae20b8eb46f8"
Dec 08 17:43:44 crc kubenswrapper[5116]: I1208 17:43:44.681542 5116 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl"
Dec 08 17:43:44 crc kubenswrapper[5116]: E1208 17:43:44.682079 5116 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239"
Dec 08 17:43:44 crc kubenswrapper[5116]: I1208 17:43:44.682563 5116 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6"
Dec 08 17:43:44 crc kubenswrapper[5116]: E1208 17:43:44.682621 5116 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141"
Dec 08 17:43:44 crc kubenswrapper[5116]: I1208 17:43:44.682695 5116 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5"
Dec 08 17:43:44 crc kubenswrapper[5116]: E1208 17:43:44.682743 5116 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440"
Dec 08 17:43:45 crc kubenswrapper[5116]: I1208 17:43:45.679703 5116 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5ft89"
Dec 08 17:43:45 crc kubenswrapper[5116]: E1208 17:43:45.679891 5116 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-5ft89" podUID="19151390-7d67-4ae9-8520-ae20b8eb46f8"
Dec 08 17:43:45 crc kubenswrapper[5116]: I1208 17:43:45.680805 5116 scope.go:117] "RemoveContainer" containerID="0f4e3801c41a7985f11a820bd7e71ba696a24f77bb1468bbcd579c5b5d8a1ba3"
Dec 08 17:43:45 crc kubenswrapper[5116]: E1208 17:43:45.681100 5116 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944"
Dec 08 17:43:46 crc kubenswrapper[5116]: I1208 17:43:46.679709 5116 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl"
Dec 08 17:43:46 crc kubenswrapper[5116]: E1208 17:43:46.679891 5116 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239"
Dec 08 17:43:46 crc kubenswrapper[5116]: I1208 17:43:46.680407 5116 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6"
Dec 08 17:43:46 crc kubenswrapper[5116]: E1208 17:43:46.680466 5116 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141"
Dec 08 17:43:46 crc kubenswrapper[5116]: I1208 17:43:46.680548 5116 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5"
Dec 08 17:43:46 crc kubenswrapper[5116]: E1208 17:43:46.680629 5116 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440"
Dec 08 17:43:47 crc kubenswrapper[5116]: I1208 17:43:47.680094 5116 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5ft89"
Dec 08 17:43:47 crc kubenswrapper[5116]: E1208 17:43:47.680422 5116 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-5ft89" podUID="19151390-7d67-4ae9-8520-ae20b8eb46f8"
Dec 08 17:43:48 crc kubenswrapper[5116]: I1208 17:43:48.041942 5116 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeReady"
Dec 08 17:43:48 crc kubenswrapper[5116]: I1208 17:43:48.042144 5116 kubelet_node_status.go:550] "Fast updating node status as it just became ready"
Dec 08 17:43:48 crc kubenswrapper[5116]: I1208 17:43:48.082759 5116 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-65b6cccf98-99grc"]
Dec 08 17:43:48 crc kubenswrapper[5116]: I1208 17:43:48.094100 5116 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-apiserver/apiserver-9ddfb9f55-b2n2w"]
Dec 08 17:43:48 crc kubenswrapper[5116]: I1208 17:43:48.101041 5116 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-65b6cccf98-99grc"
Dec 08 17:43:48 crc kubenswrapper[5116]: I1208 17:43:48.104132 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager\"/\"config\""
Dec 08 17:43:48 crc kubenswrapper[5116]: I1208 17:43:48.104294 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager\"/\"kube-root-ca.crt\""
Dec 08 17:43:48 crc kubenswrapper[5116]: I1208 17:43:48.105512 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-controller-manager\"/\"openshift-controller-manager-sa-dockercfg-djmfg\""
Dec 08 17:43:48 crc kubenswrapper[5116]: I1208 17:43:48.105552 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-controller-manager\"/\"serving-cert\""
Dec 08 17:43:48 crc kubenswrapper[5116]: I1208 17:43:48.106338 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager\"/\"client-ca\""
Dec 08 17:43:48 crc kubenswrapper[5116]: I1208 17:43:48.107637 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager\"/\"openshift-service-ca.crt\""
Dec 08 17:43:48 crc kubenswrapper[5116]: I1208 17:43:48.120443 5116 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-oauth-apiserver/apiserver-8596bd845d-nlzmf"]
Dec 08 17:43:48 crc kubenswrapper[5116]: I1208 17:43:48.121106 5116 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-9ddfb9f55-b2n2w"
Dec 08 17:43:48 crc kubenswrapper[5116]: I1208 17:43:48.123844 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver\"/\"openshift-service-ca.crt\""
Dec 08 17:43:48 crc kubenswrapper[5116]: I1208 17:43:48.124840 5116 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-authentication-operator/authentication-operator-7f5c659b84-x825q"]
Dec 08 17:43:48 crc kubenswrapper[5116]: I1208 17:43:48.125463 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-apiserver\"/\"etcd-client\""
Dec 08 17:43:48 crc kubenswrapper[5116]: I1208 17:43:48.127271 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver\"/\"config\""
Dec 08 17:43:48 crc kubenswrapper[5116]: I1208 17:43:48.127641 5116 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-cluster-machine-approver/machine-approver-54c688565-8h884"]
Dec 08 17:43:48 crc kubenswrapper[5116]: I1208 17:43:48.127691 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver\"/\"kube-root-ca.crt\""
Dec 08 17:43:48 crc kubenswrapper[5116]: I1208 17:43:48.127985 5116 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-8596bd845d-nlzmf"
Dec 08 17:43:48 crc kubenswrapper[5116]: I1208 17:43:48.128355 5116 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7f5c659b84-x825q"
Dec 08 17:43:48 crc kubenswrapper[5116]: I1208 17:43:48.128825 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager\"/\"openshift-global-ca\""
Dec 08 17:43:48 crc kubenswrapper[5116]: I1208 17:43:48.129950 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver\"/\"image-import-ca\""
Dec 08 17:43:48 crc kubenswrapper[5116]: I1208 17:43:48.130148 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-apiserver\"/\"serving-cert\""
Dec 08 17:43:48 crc kubenswrapper[5116]: I1208 17:43:48.130318 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver\"/\"etcd-serving-ca\""
Dec 08 17:43:48 crc kubenswrapper[5116]: I1208 17:43:48.130465 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-apiserver\"/\"encryption-config-1\""
Dec 08 17:43:48 crc kubenswrapper[5116]: I1208 17:43:48.130604 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver\"/\"audit-1\""
Dec 08 17:43:48 crc kubenswrapper[5116]: I1208 17:43:48.130763 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-apiserver\"/\"openshift-apiserver-sa-dockercfg-4zqgh\""
Dec 08 17:43:48 crc kubenswrapper[5116]: I1208 17:43:48.134844 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-oauth-apiserver\"/\"encryption-config-1\""
Dec 08 17:43:48 crc kubenswrapper[5116]: I1208 17:43:48.135007 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-oauth-apiserver\"/\"audit-1\""
Dec 08 17:43:48 crc kubenswrapper[5116]: I1208 17:43:48.135468 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-oauth-apiserver\"/\"kube-root-ca.crt\""
Dec 08 17:43:48 crc kubenswrapper[5116]: I1208 17:43:48.139118 5116 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-config-operator/openshift-config-operator-5777786469-86sn8"]
Dec 08 17:43:48 crc kubenswrapper[5116]: I1208 17:43:48.139701 5116 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-54c688565-8h884"
Dec 08 17:43:48 crc kubenswrapper[5116]: I1208 17:43:48.142827 5116 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-66458b6674-qnwj9"]
Dec 08 17:43:48 crc kubenswrapper[5116]: I1208 17:43:48.148326 5116 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-machine-api/machine-api-operator-755bb95488-dx5gf"]
Dec 08 17:43:48 crc kubenswrapper[5116]: I1208 17:43:48.148602 5116 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-5777786469-86sn8"
Dec 08 17:43:48 crc kubenswrapper[5116]: I1208 17:43:48.151750 5116 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-6rbxq"]
Dec 08 17:43:48 crc kubenswrapper[5116]: I1208 17:43:48.152128 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-oauth-apiserver\"/\"serving-cert\""
Dec 08 17:43:48 crc kubenswrapper[5116]: I1208 17:43:48.152534 5116 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-755bb95488-dx5gf"
Dec 08 17:43:48 crc kubenswrapper[5116]: I1208 17:43:48.152648 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-oauth-apiserver\"/\"etcd-serving-ca\""
Dec 08 17:43:48 crc kubenswrapper[5116]: I1208 17:43:48.153017 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-oauth-apiserver\"/\"openshift-service-ca.crt\""
Dec 08 17:43:48 crc kubenswrapper[5116]: I1208 17:43:48.154709 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication-operator\"/\"authentication-operator-config\""
Dec 08 17:43:48 crc kubenswrapper[5116]: I1208 17:43:48.155468 5116 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-776cdc94d6-mv9qd"]
Dec 08 17:43:48 crc kubenswrapper[5116]: I1208 17:43:48.156123 5116 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-66458b6674-qnwj9"
Dec 08 17:43:48 crc kubenswrapper[5116]: I1208 17:43:48.161668 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication-operator\"/\"service-ca-bundle\""
Dec 08 17:43:48 crc kubenswrapper[5116]: I1208 17:43:48.162165 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication-operator\"/\"authentication-operator-dockercfg-6tbpn\""
Dec 08 17:43:48 crc kubenswrapper[5116]: I1208 17:43:48.162448 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-oauth-apiserver\"/\"trusted-ca-bundle\""
Dec 08 17:43:48 crc kubenswrapper[5116]: I1208 17:43:48.162557 5116 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-console/console-64d44f6ddf-l4b2c"]
Dec 08 17:43:48 crc kubenswrapper[5116]: I1208 17:43:48.162649 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-machine-approver\"/\"machine-approver-config\""
Dec 08 17:43:48 crc kubenswrapper[5116]: I1208 17:43:48.162452 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication-operator\"/\"serving-cert\""
Dec 08 17:43:48 crc kubenswrapper[5116]: I1208 17:43:48.162988 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-machine-approver\"/\"openshift-service-ca.crt\""
Dec 08 17:43:48 crc kubenswrapper[5116]: I1208 17:43:48.163022 5116 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-6rbxq"
Dec 08 17:43:48 crc kubenswrapper[5116]: I1208 17:43:48.163136 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-oauth-apiserver\"/\"oauth-apiserver-sa-dockercfg-qqw4z\""
Dec 08 17:43:48 crc kubenswrapper[5116]: I1208 17:43:48.162606 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication-operator\"/\"openshift-service-ca.crt\""
Dec 08 17:43:48 crc kubenswrapper[5116]: I1208 17:43:48.163684 5116 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-mv9qd"
Dec 08 17:43:48 crc kubenswrapper[5116]: I1208 17:43:48.162588 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-config-operator\"/\"openshift-config-operator-dockercfg-sjn6s\""
Dec 08 17:43:48 crc kubenswrapper[5116]: I1208 17:43:48.166166 5116 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-console-operator/console-operator-67c89758df-wbzsx"]
Dec 08 17:43:48 crc kubenswrapper[5116]: I1208 17:43:48.168914 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-machine-approver\"/\"kube-rbac-proxy\""
Dec 08 17:43:48 crc kubenswrapper[5116]: I1208 17:43:48.169173 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-cluster-machine-approver\"/\"machine-approver-sa-dockercfg-wzhvk\""
Dec 08 17:43:48 crc kubenswrapper[5116]: I1208 17:43:48.169379 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-api\"/\"kube-rbac-proxy\""
Dec 08 17:43:48 crc kubenswrapper[5116]: I1208 17:43:48.169536 5116 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-hjdgl"]
Dec 08 17:43:48 crc kubenswrapper[5116]: I1208 17:43:48.170830 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-oauth-apiserver\"/\"etcd-client\""
Dec 08 17:43:48 crc kubenswrapper[5116]: I1208 17:43:48.171282 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-api\"/\"machine-api-operator-tls\""
Dec 08 17:43:48 crc kubenswrapper[5116]: I1208 17:43:48.169579 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-config-operator\"/\"kube-root-ca.crt\""
Dec 08 17:43:48 crc kubenswrapper[5116]: I1208 17:43:48.169539 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-api\"/\"machine-api-operator-images\""
Dec 08 17:43:48 crc kubenswrapper[5116]: I1208 17:43:48.227799 5116 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-64d44f6ddf-l4b2c"
Dec 08 17:43:48 crc kubenswrapper[5116]: I1208 17:43:48.169584 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-cluster-machine-approver\"/\"machine-approver-tls\""
Dec 08 17:43:48 crc kubenswrapper[5116]: I1208 17:43:48.169630 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-config-operator\"/\"config-operator-serving-cert\""
Dec 08 17:43:48 crc kubenswrapper[5116]: I1208 17:43:48.170573 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-config-operator\"/\"openshift-service-ca.crt\""
Dec 08 17:43:48 crc kubenswrapper[5116]: I1208 17:43:48.169810 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-system-router-certs\""
Dec 08 17:43:48 crc kubenswrapper[5116]: I1208 17:43:48.169693 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-api\"/\"openshift-service-ca.crt\""
Dec 08 17:43:48 crc kubenswrapper[5116]: I1208 17:43:48.232180 5116 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-67c89758df-wbzsx"
Dec 08 17:43:48 crc kubenswrapper[5116]: I1208 17:43:48.232813 5116 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-hjdgl"
Dec 08 17:43:48 crc kubenswrapper[5116]: I1208 17:43:48.232998 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xb62p\" (UniqueName: \"kubernetes.io/projected/79b65775-2e2c-4bad-bf4b-b8c4893e6463-kube-api-access-xb62p\") pod \"cluster-samples-operator-6b564684c8-6rbxq\" (UID: \"79b65775-2e2c-4bad-bf4b-b8c4893e6463\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-6rbxq"
Dec 08 17:43:48 crc kubenswrapper[5116]: I1208 17:43:48.233078 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a1c39b6e-d778-4a0e-aa34-5a4765f9c4fc-trusted-ca-bundle\") pod \"authentication-operator-7f5c659b84-x825q\" (UID: \"a1c39b6e-d778-4a0e-aa34-5a4765f9c4fc\") " pod="openshift-authentication-operator/authentication-operator-7f5c659b84-x825q"
Dec 08 17:43:48 crc kubenswrapper[5116]: I1208 17:43:48.233155 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/4f7ef3d6-0bc3-4566-8735-c4a2389d4c84-encryption-config\") pod \"apiserver-9ddfb9f55-b2n2w\" (UID: \"4f7ef3d6-0bc3-4566-8735-c4a2389d4c84\") " pod="openshift-apiserver/apiserver-9ddfb9f55-b2n2w"
Dec 08 17:43:48 crc kubenswrapper[5116]: I1208 17:43:48.233230 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/e71f0431-13f2-46f2-8673-f26a5c9d0cf6-machine-approver-tls\") pod \"machine-approver-54c688565-8h884\" (UID: \"e71f0431-13f2-46f2-8673-f26a5c9d0cf6\") " pod="openshift-cluster-machine-approver/machine-approver-54c688565-8h884"
Dec 08 17:43:48 crc kubenswrapper[5116]: I1208 17:43:48.233331 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/0267ec2e-8f60-4739-aae7-2a133c6f2809-encryption-config\") pod \"apiserver-8596bd845d-nlzmf\" (UID: \"0267ec2e-8f60-4739-aae7-2a133c6f2809\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-nlzmf"
Dec 08 17:43:48 crc kubenswrapper[5116]: I1208 17:43:48.233411 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/4f7ef3d6-0bc3-4566-8735-c4a2389d4c84-audit-dir\") pod \"apiserver-9ddfb9f55-b2n2w\" (UID: \"4f7ef3d6-0bc3-4566-8735-c4a2389d4c84\") " pod="openshift-apiserver/apiserver-9ddfb9f55-b2n2w"
Dec 08 17:43:48 crc kubenswrapper[5116]: I1208 17:43:48.233521 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z6dqw\" (UniqueName: \"kubernetes.io/projected/0267ec2e-8f60-4739-aae7-2a133c6f2809-kube-api-access-z6dqw\") pod \"apiserver-8596bd845d-nlzmf\" (UID: \"0267ec2e-8f60-4739-aae7-2a133c6f2809\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-nlzmf"
Dec 08 17:43:48 crc kubenswrapper[5116]: I1208 17:43:48.246782 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/4f7ef3d6-0bc3-4566-8735-c4a2389d4c84-etcd-serving-ca\") pod \"apiserver-9ddfb9f55-b2n2w\" (UID: \"4f7ef3d6-0bc3-4566-8735-c4a2389d4c84\") " pod="openshift-apiserver/apiserver-9ddfb9f55-b2n2w"
Dec 08 17:43:48 crc kubenswrapper[5116]: I1208 17:43:48.247042 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a1c39b6e-d778-4a0e-aa34-5a4765f9c4fc-serving-cert\") pod \"authentication-operator-7f5c659b84-x825q\" (UID: \"a1c39b6e-d778-4a0e-aa34-5a4765f9c4fc\") " pod="openshift-authentication-operator/authentication-operator-7f5c659b84-x825q"
Dec 08 17:43:48 crc kubenswrapper[5116]: I1208 17:43:48.247126 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4fd57cd5-0d45-4b70-87a2-0c73d9c9b61e-config\") pod \"controller-manager-65b6cccf98-99grc\" (UID: \"4fd57cd5-0d45-4b70-87a2-0c73d9c9b61e\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-99grc"
Dec 08 17:43:48 crc kubenswrapper[5116]: I1208 17:43:48.247200 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/79b65775-2e2c-4bad-bf4b-b8c4893e6463-samples-operator-tls\") pod \"cluster-samples-operator-6b564684c8-6rbxq\" (UID: \"79b65775-2e2c-4bad-bf4b-b8c4893e6463\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-6rbxq"
Dec 08 17:43:48 crc kubenswrapper[5116]: I1208 17:43:48.250692 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/4fd57cd5-0d45-4b70-87a2-0c73d9c9b61e-proxy-ca-bundles\") pod \"controller-manager-65b6cccf98-99grc\" (UID: \"4fd57cd5-0d45-4b70-87a2-0c73d9c9b61e\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-99grc"
Dec 08 17:43:48 crc kubenswrapper[5116]: I1208 17:43:48.250943 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/b8dbe374-c944-4fd4-bb80-8dc26c3e5d24-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-66458b6674-qnwj9\" (UID: \"b8dbe374-c944-4fd4-bb80-8dc26c3e5d24\") " pod="openshift-authentication/oauth-openshift-66458b6674-qnwj9"
Dec 08 17:43:48 crc kubenswrapper[5116]: I1208 17:43:48.251045 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/b8dbe374-c944-4fd4-bb80-8dc26c3e5d24-v4-0-config-user-template-login\") pod \"oauth-openshift-66458b6674-qnwj9\" (UID: \"b8dbe374-c944-4fd4-bb80-8dc26c3e5d24\") " pod="openshift-authentication/oauth-openshift-66458b6674-qnwj9"
Dec 08 17:43:48 crc kubenswrapper[5116]: I1208 17:43:48.251140 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/3a30015c-60d9-4474-8417-731fd67ea187-machine-api-operator-tls\") pod \"machine-api-operator-755bb95488-dx5gf\" (UID: \"3a30015c-60d9-4474-8417-731fd67ea187\") " pod="openshift-machine-api/machine-api-operator-755bb95488-dx5gf"
Dec 08 17:43:48 crc kubenswrapper[5116]: I1208 17:43:48.251260 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a1c39b6e-d778-4a0e-aa34-5a4765f9c4fc-config\") pod \"authentication-operator-7f5c659b84-x825q\" (UID: \"a1c39b6e-d778-4a0e-aa34-5a4765f9c4fc\") " pod="openshift-authentication-operator/authentication-operator-7f5c659b84-x825q"
Dec 08 17:43:48 crc kubenswrapper[5116]: I1208 17:43:48.251330 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-system-serving-cert\""
Dec 08 17:43:48 crc kubenswrapper[5116]: I1208 17:43:48.251352 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v78rk\" (UniqueName: \"kubernetes.io/projected/71d475ea-b97a-489a-8c80-1a30614dccb5-kube-api-access-v78rk\") pod \"openshift-config-operator-5777786469-86sn8\" (UID: \"71d475ea-b97a-489a-8c80-1a30614dccb5\") " pod="openshift-config-operator/openshift-config-operator-5777786469-86sn8"
Dec 08 17:43:48 crc kubenswrapper[5116]: I1208 17:43:48.251470 5116 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-8596bd845d-nlzmf"]
Dec 08 17:43:48 crc kubenswrapper[5116]: I1208 17:43:48.251525 5116 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-65b6cccf98-99grc"]
Dec 08 17:43:48 crc kubenswrapper[5116]: I1208 17:43:48.251536 5116 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-5777786469-86sn8"]
Dec 08 17:43:48 crc kubenswrapper[5116]: I1208 17:43:48.251618 5116 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-7f5c659b84-x825q"]
Dec 08 17:43:48 crc kubenswrapper[5116]: I1208 17:43:48.251634 5116 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-console/downloads-747b44746d-4msk8"]
Dec 08 17:43:48 crc kubenswrapper[5116]: I1208 17:43:48.253362 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-cluster-samples-operator\"/\"samples-operator-tls\""
Dec 08 17:43:48 crc kubenswrapper[5116]: I1208 17:43:48.254455 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-api\"/\"kube-root-ca.crt\""
Dec 08 17:43:48 crc kubenswrapper[5116]: I1208 17:43:48.258431 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-route-controller-manager\"/\"openshift-service-ca.crt\""
Dec 08 17:43:48 crc kubenswrapper[5116]: I1208 17:43:48.262913 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-user-template-login\""
Dec 08 17:43:48 crc kubenswrapper[5116]: I1208 17:43:48.263856 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-route-controller-manager\"/\"config\""
Dec 08 17:43:48 crc kubenswrapper[5116]: I1208 17:43:48.251487 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-svnfj\" (UniqueName: \"kubernetes.io/projected/3a30015c-60d9-4474-8417-731fd67ea187-kube-api-access-svnfj\") pod \"machine-api-operator-755bb95488-dx5gf\" (UID: \"3a30015c-60d9-4474-8417-731fd67ea187\") " pod="openshift-machine-api/machine-api-operator-755bb95488-dx5gf"
Dec 08 17:43:48 crc kubenswrapper[5116]: I1208 17:43:48.269419 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication-operator\"/\"kube-root-ca.crt\""
Dec 08 17:43:48 crc kubenswrapper[5116]: I1208 17:43:48.284408 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/4f7ef3d6-0bc3-4566-8735-c4a2389d4c84-etcd-client\") pod \"apiserver-9ddfb9f55-b2n2w\" (UID: \"4f7ef3d6-0bc3-4566-8735-c4a2389d4c84\") " pod="openshift-apiserver/apiserver-9ddfb9f55-b2n2w"
Dec 08 17:43:48 crc kubenswrapper[5116]: I1208 17:43:48.284643 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f5zlg\" (UniqueName: \"kubernetes.io/projected/4fd57cd5-0d45-4b70-87a2-0c73d9c9b61e-kube-api-access-f5zlg\") pod \"controller-manager-65b6cccf98-99grc\" (UID: \"4fd57cd5-0d45-4b70-87a2-0c73d9c9b61e\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-99grc"
Dec 08 17:43:48 crc kubenswrapper[5116]: I1208 17:43:48.284997 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/0267ec2e-8f60-4739-aae7-2a133c6f2809-etcd-serving-ca\") pod \"apiserver-8596bd845d-nlzmf\" (UID: \"0267ec2e-8f60-4739-aae7-2a133c6f2809\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-nlzmf"
Dec 08 17:43:48 crc kubenswrapper[5116]: I1208 17:43:48.285109 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0267ec2e-8f60-4739-aae7-2a133c6f2809-serving-cert\") pod \"apiserver-8596bd845d-nlzmf\" (UID: \"0267ec2e-8f60-4739-aae7-2a133c6f2809\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-nlzmf"
Dec 08 17:43:48 crc kubenswrapper[5116]: I1208 17:43:48.285962 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication\"/\"kube-root-ca.crt\""
Dec 08 17:43:48 crc kubenswrapper[5116]: I1208 17:43:48.286958 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-system-session\""
Dec 08 17:43:48 crc kubenswrapper[5116]: I1208 17:43:48.287489 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-route-controller-manager\"/\"client-ca\""
Dec 08 17:43:48 crc kubenswrapper[5116]: I1208 17:43:48.287630 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0267ec2e-8f60-4739-aae7-2a133c6f2809-trusted-ca-bundle\") pod \"apiserver-8596bd845d-nlzmf\" (UID: \"0267ec2e-8f60-4739-aae7-2a133c6f2809\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-nlzmf"
Dec 08 17:43:48 crc kubenswrapper[5116]: I1208 17:43:48.287671 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication\"/\"openshift-service-ca.crt\""
Dec 08 17:43:48 crc kubenswrapper[5116]: I1208 17:43:48.287696 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-user-template-error\""
Dec 08 17:43:48 crc kubenswrapper[5116]: I1208 17:43:48.287728 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-cluster-samples-operator\"/\"cluster-samples-operator-dockercfg-jmhxf\""
Dec 08 17:43:48 crc kubenswrapper[5116]: I1208 17:43:48.287840 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-user-idp-0-file-data\""
Dec 08 17:43:48 crc kubenswrapper[5116]: I1208 17:43:48.288274 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3a30015c-60d9-4474-8417-731fd67ea187-config\") pod \"machine-api-operator-755bb95488-dx5gf\" (UID: \"3a30015c-60d9-4474-8417-731fd67ea187\") " pod="openshift-machine-api/machine-api-operator-755bb95488-dx5gf"
Dec 08 17:43:48 crc kubenswrapper[5116]: I1208 17:43:48.288325 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/71d475ea-b97a-489a-8c80-1a30614dccb5-available-featuregates\") pod \"openshift-config-operator-5777786469-86sn8\" (UID: \"71d475ea-b97a-489a-8c80-1a30614dccb5\") " pod="openshift-config-operator/openshift-config-operator-5777786469-86sn8"
Dec 08 17:43:48 crc kubenswrapper[5116]: I1208 17:43:48.288358 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication\"/\"v4-0-config-system-cliconfig\""
Dec 08 17:43:48 crc kubenswrapper[5116]: I1208 17:43:48.288598 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-user-template-provider-selection\""
Dec 08 17:43:48 crc kubenswrapper[5116]: I1208 17:43:48.288888 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap"
reflector="object-\"openshift-authentication\"/\"audit\"" Dec 08 17:43:48 crc kubenswrapper[5116]: I1208 17:43:48.288365 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4f7ef3d6-0bc3-4566-8735-c4a2389d4c84-serving-cert\") pod \"apiserver-9ddfb9f55-b2n2w\" (UID: \"4f7ef3d6-0bc3-4566-8735-c4a2389d4c84\") " pod="openshift-apiserver/apiserver-9ddfb9f55-b2n2w" Dec 08 17:43:48 crc kubenswrapper[5116]: I1208 17:43:48.289612 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4f7ef3d6-0bc3-4566-8735-c4a2389d4c84-trusted-ca-bundle\") pod \"apiserver-9ddfb9f55-b2n2w\" (UID: \"4f7ef3d6-0bc3-4566-8735-c4a2389d4c84\") " pod="openshift-apiserver/apiserver-9ddfb9f55-b2n2w" Dec 08 17:43:48 crc kubenswrapper[5116]: I1208 17:43:48.289701 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e71f0431-13f2-46f2-8673-f26a5c9d0cf6-config\") pod \"machine-approver-54c688565-8h884\" (UID: \"e71f0431-13f2-46f2-8673-f26a5c9d0cf6\") " pod="openshift-cluster-machine-approver/machine-approver-54c688565-8h884" Dec 08 17:43:48 crc kubenswrapper[5116]: I1208 17:43:48.289741 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/b8dbe374-c944-4fd4-bb80-8dc26c3e5d24-audit-dir\") pod \"oauth-openshift-66458b6674-qnwj9\" (UID: \"b8dbe374-c944-4fd4-bb80-8dc26c3e5d24\") " pod="openshift-authentication/oauth-openshift-66458b6674-qnwj9" Dec 08 17:43:48 crc kubenswrapper[5116]: I1208 17:43:48.308023 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-samples-operator\"/\"openshift-service-ca.crt\"" Dec 08 17:43:48 crc kubenswrapper[5116]: I1208 
17:43:48.308174 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication\"/\"v4-0-config-system-service-ca\"" Dec 08 17:43:48 crc kubenswrapper[5116]: I1208 17:43:48.308200 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-route-controller-manager\"/\"kube-root-ca.crt\"" Dec 08 17:43:48 crc kubenswrapper[5116]: I1208 17:43:48.308199 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-samples-operator\"/\"kube-root-ca.crt\"" Dec 08 17:43:48 crc kubenswrapper[5116]: I1208 17:43:48.308265 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-api\"/\"machine-api-operator-dockercfg-6n5ln\"" Dec 08 17:43:48 crc kubenswrapper[5116]: I1208 17:43:48.308956 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"oauth-openshift-dockercfg-d2bf2\"" Dec 08 17:43:48 crc kubenswrapper[5116]: I1208 17:43:48.309182 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/b8dbe374-c944-4fd4-bb80-8dc26c3e5d24-v4-0-config-system-cliconfig\") pod \"oauth-openshift-66458b6674-qnwj9\" (UID: \"b8dbe374-c944-4fd4-bb80-8dc26c3e5d24\") " pod="openshift-authentication/oauth-openshift-66458b6674-qnwj9" Dec 08 17:43:48 crc kubenswrapper[5116]: I1208 17:43:48.309210 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-route-controller-manager\"/\"route-controller-manager-sa-dockercfg-mmcpt\"" Dec 08 17:43:48 crc kubenswrapper[5116]: I1208 17:43:48.309222 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/b8dbe374-c944-4fd4-bb80-8dc26c3e5d24-v4-0-config-user-template-error\") pod 
\"oauth-openshift-66458b6674-qnwj9\" (UID: \"b8dbe374-c944-4fd4-bb80-8dc26c3e5d24\") " pod="openshift-authentication/oauth-openshift-66458b6674-qnwj9" Dec 08 17:43:48 crc kubenswrapper[5116]: I1208 17:43:48.309266 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9gqcp\" (UniqueName: \"kubernetes.io/projected/b8dbe374-c944-4fd4-bb80-8dc26c3e5d24-kube-api-access-9gqcp\") pod \"oauth-openshift-66458b6674-qnwj9\" (UID: \"b8dbe374-c944-4fd4-bb80-8dc26c3e5d24\") " pod="openshift-authentication/oauth-openshift-66458b6674-qnwj9" Dec 08 17:43:48 crc kubenswrapper[5116]: I1208 17:43:48.309290 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/4fd57cd5-0d45-4b70-87a2-0c73d9c9b61e-client-ca\") pod \"controller-manager-65b6cccf98-99grc\" (UID: \"4fd57cd5-0d45-4b70-87a2-0c73d9c9b61e\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-99grc" Dec 08 17:43:48 crc kubenswrapper[5116]: I1208 17:43:48.309305 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/4fd57cd5-0d45-4b70-87a2-0c73d9c9b61e-tmp\") pod \"controller-manager-65b6cccf98-99grc\" (UID: \"4fd57cd5-0d45-4b70-87a2-0c73d9c9b61e\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-99grc" Dec 08 17:43:48 crc kubenswrapper[5116]: I1208 17:43:48.309335 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/4f7ef3d6-0bc3-4566-8735-c4a2389d4c84-audit\") pod \"apiserver-9ddfb9f55-b2n2w\" (UID: \"4f7ef3d6-0bc3-4566-8735-c4a2389d4c84\") " pod="openshift-apiserver/apiserver-9ddfb9f55-b2n2w" Dec 08 17:43:48 crc kubenswrapper[5116]: I1208 17:43:48.309351 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"kube-api-access-x4x4h\" (UniqueName: \"kubernetes.io/projected/4f7ef3d6-0bc3-4566-8735-c4a2389d4c84-kube-api-access-x4x4h\") pod \"apiserver-9ddfb9f55-b2n2w\" (UID: \"4f7ef3d6-0bc3-4566-8735-c4a2389d4c84\") " pod="openshift-apiserver/apiserver-9ddfb9f55-b2n2w" Dec 08 17:43:48 crc kubenswrapper[5116]: I1208 17:43:48.309367 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/b8dbe374-c944-4fd4-bb80-8dc26c3e5d24-audit-policies\") pod \"oauth-openshift-66458b6674-qnwj9\" (UID: \"b8dbe374-c944-4fd4-bb80-8dc26c3e5d24\") " pod="openshift-authentication/oauth-openshift-66458b6674-qnwj9" Dec 08 17:43:48 crc kubenswrapper[5116]: I1208 17:43:48.309390 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/b8dbe374-c944-4fd4-bb80-8dc26c3e5d24-v4-0-config-system-serving-cert\") pod \"oauth-openshift-66458b6674-qnwj9\" (UID: \"b8dbe374-c944-4fd4-bb80-8dc26c3e5d24\") " pod="openshift-authentication/oauth-openshift-66458b6674-qnwj9" Dec 08 17:43:48 crc kubenswrapper[5116]: I1208 17:43:48.309406 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/b8dbe374-c944-4fd4-bb80-8dc26c3e5d24-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-66458b6674-qnwj9\" (UID: \"b8dbe374-c944-4fd4-bb80-8dc26c3e5d24\") " pod="openshift-authentication/oauth-openshift-66458b6674-qnwj9" Dec 08 17:43:48 crc kubenswrapper[5116]: I1208 17:43:48.309426 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/0267ec2e-8f60-4739-aae7-2a133c6f2809-audit-policies\") pod \"apiserver-8596bd845d-nlzmf\" (UID: 
\"0267ec2e-8f60-4739-aae7-2a133c6f2809\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-nlzmf" Dec 08 17:43:48 crc kubenswrapper[5116]: I1208 17:43:48.309444 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/3a30015c-60d9-4474-8417-731fd67ea187-images\") pod \"machine-api-operator-755bb95488-dx5gf\" (UID: \"3a30015c-60d9-4474-8417-731fd67ea187\") " pod="openshift-machine-api/machine-api-operator-755bb95488-dx5gf" Dec 08 17:43:48 crc kubenswrapper[5116]: I1208 17:43:48.309457 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4f7ef3d6-0bc3-4566-8735-c4a2389d4c84-config\") pod \"apiserver-9ddfb9f55-b2n2w\" (UID: \"4f7ef3d6-0bc3-4566-8735-c4a2389d4c84\") " pod="openshift-apiserver/apiserver-9ddfb9f55-b2n2w" Dec 08 17:43:48 crc kubenswrapper[5116]: I1208 17:43:48.309479 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/e71f0431-13f2-46f2-8673-f26a5c9d0cf6-auth-proxy-config\") pod \"machine-approver-54c688565-8h884\" (UID: \"e71f0431-13f2-46f2-8673-f26a5c9d0cf6\") " pod="openshift-cluster-machine-approver/machine-approver-54c688565-8h884" Dec 08 17:43:48 crc kubenswrapper[5116]: I1208 17:43:48.309495 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/4f7ef3d6-0bc3-4566-8735-c4a2389d4c84-node-pullsecrets\") pod \"apiserver-9ddfb9f55-b2n2w\" (UID: \"4f7ef3d6-0bc3-4566-8735-c4a2389d4c84\") " pod="openshift-apiserver/apiserver-9ddfb9f55-b2n2w" Dec 08 17:43:48 crc kubenswrapper[5116]: I1208 17:43:48.309515 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b8ql7\" (UniqueName: 
\"kubernetes.io/projected/e71f0431-13f2-46f2-8673-f26a5c9d0cf6-kube-api-access-b8ql7\") pod \"machine-approver-54c688565-8h884\" (UID: \"e71f0431-13f2-46f2-8673-f26a5c9d0cf6\") " pod="openshift-cluster-machine-approver/machine-approver-54c688565-8h884" Dec 08 17:43:48 crc kubenswrapper[5116]: I1208 17:43:48.309528 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/b8dbe374-c944-4fd4-bb80-8dc26c3e5d24-v4-0-config-system-session\") pod \"oauth-openshift-66458b6674-qnwj9\" (UID: \"b8dbe374-c944-4fd4-bb80-8dc26c3e5d24\") " pod="openshift-authentication/oauth-openshift-66458b6674-qnwj9" Dec 08 17:43:48 crc kubenswrapper[5116]: I1208 17:43:48.309546 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/71d475ea-b97a-489a-8c80-1a30614dccb5-serving-cert\") pod \"openshift-config-operator-5777786469-86sn8\" (UID: \"71d475ea-b97a-489a-8c80-1a30614dccb5\") " pod="openshift-config-operator/openshift-config-operator-5777786469-86sn8" Dec 08 17:43:48 crc kubenswrapper[5116]: I1208 17:43:48.309563 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b8dbe374-c944-4fd4-bb80-8dc26c3e5d24-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-66458b6674-qnwj9\" (UID: \"b8dbe374-c944-4fd4-bb80-8dc26c3e5d24\") " pod="openshift-authentication/oauth-openshift-66458b6674-qnwj9" Dec 08 17:43:48 crc kubenswrapper[5116]: I1208 17:43:48.309578 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a1c39b6e-d778-4a0e-aa34-5a4765f9c4fc-service-ca-bundle\") pod \"authentication-operator-7f5c659b84-x825q\" (UID: 
\"a1c39b6e-d778-4a0e-aa34-5a4765f9c4fc\") " pod="openshift-authentication-operator/authentication-operator-7f5c659b84-x825q" Dec 08 17:43:48 crc kubenswrapper[5116]: I1208 17:43:48.309593 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/b8dbe374-c944-4fd4-bb80-8dc26c3e5d24-v4-0-config-system-service-ca\") pod \"oauth-openshift-66458b6674-qnwj9\" (UID: \"b8dbe374-c944-4fd4-bb80-8dc26c3e5d24\") " pod="openshift-authentication/oauth-openshift-66458b6674-qnwj9" Dec 08 17:43:48 crc kubenswrapper[5116]: I1208 17:43:48.309612 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d4xv8\" (UniqueName: \"kubernetes.io/projected/a1c39b6e-d778-4a0e-aa34-5a4765f9c4fc-kube-api-access-d4xv8\") pod \"authentication-operator-7f5c659b84-x825q\" (UID: \"a1c39b6e-d778-4a0e-aa34-5a4765f9c4fc\") " pod="openshift-authentication-operator/authentication-operator-7f5c659b84-x825q" Dec 08 17:43:48 crc kubenswrapper[5116]: I1208 17:43:48.309629 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/0267ec2e-8f60-4739-aae7-2a133c6f2809-etcd-client\") pod \"apiserver-8596bd845d-nlzmf\" (UID: \"0267ec2e-8f60-4739-aae7-2a133c6f2809\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-nlzmf" Dec 08 17:43:48 crc kubenswrapper[5116]: I1208 17:43:48.309645 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/0267ec2e-8f60-4739-aae7-2a133c6f2809-audit-dir\") pod \"apiserver-8596bd845d-nlzmf\" (UID: \"0267ec2e-8f60-4739-aae7-2a133c6f2809\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-nlzmf" Dec 08 17:43:48 crc kubenswrapper[5116]: I1208 17:43:48.309658 5116 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/4f7ef3d6-0bc3-4566-8735-c4a2389d4c84-image-import-ca\") pod \"apiserver-9ddfb9f55-b2n2w\" (UID: \"4f7ef3d6-0bc3-4566-8735-c4a2389d4c84\") " pod="openshift-apiserver/apiserver-9ddfb9f55-b2n2w" Dec 08 17:43:48 crc kubenswrapper[5116]: I1208 17:43:48.309672 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4fd57cd5-0d45-4b70-87a2-0c73d9c9b61e-serving-cert\") pod \"controller-manager-65b6cccf98-99grc\" (UID: \"4fd57cd5-0d45-4b70-87a2-0c73d9c9b61e\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-99grc" Dec 08 17:43:48 crc kubenswrapper[5116]: I1208 17:43:48.309694 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/b8dbe374-c944-4fd4-bb80-8dc26c3e5d24-v4-0-config-system-router-certs\") pod \"oauth-openshift-66458b6674-qnwj9\" (UID: \"b8dbe374-c944-4fd4-bb80-8dc26c3e5d24\") " pod="openshift-authentication/oauth-openshift-66458b6674-qnwj9" Dec 08 17:43:48 crc kubenswrapper[5116]: I1208 17:43:48.309708 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/b8dbe374-c944-4fd4-bb80-8dc26c3e5d24-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-66458b6674-qnwj9\" (UID: \"b8dbe374-c944-4fd4-bb80-8dc26c3e5d24\") " pod="openshift-authentication/oauth-openshift-66458b6674-qnwj9" Dec 08 17:43:48 crc kubenswrapper[5116]: I1208 17:43:48.311265 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-route-controller-manager\"/\"serving-cert\"" Dec 08 17:43:48 crc kubenswrapper[5116]: I1208 17:43:48.311497 5116 reflector.go:430] "Caches 
populated" type="*v1.ConfigMap" reflector="object-\"openshift-console\"/\"console-config\"" Dec 08 17:43:48 crc kubenswrapper[5116]: I1208 17:43:48.311592 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-console\"/\"console-serving-cert\"" Dec 08 17:43:48 crc kubenswrapper[5116]: I1208 17:43:48.311715 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console\"/\"openshift-service-ca.crt\"" Dec 08 17:43:48 crc kubenswrapper[5116]: I1208 17:43:48.313077 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console\"/\"service-ca\"" Dec 08 17:43:48 crc kubenswrapper[5116]: I1208 17:43:48.315595 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-console\"/\"console-oauth-config\"" Dec 08 17:43:48 crc kubenswrapper[5116]: I1208 17:43:48.315704 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-machine-approver\"/\"kube-root-ca.crt\"" Dec 08 17:43:48 crc kubenswrapper[5116]: I1208 17:43:48.315859 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-console\"/\"console-dockercfg-8dkm8\"" Dec 08 17:43:48 crc kubenswrapper[5116]: I1208 17:43:48.315944 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console\"/\"oauth-serving-cert\"" Dec 08 17:43:48 crc kubenswrapper[5116]: I1208 17:43:48.315963 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-console-operator\"/\"serving-cert\"" Dec 08 17:43:48 crc kubenswrapper[5116]: I1208 17:43:48.316521 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console-operator\"/\"kube-root-ca.crt\"" Dec 08 17:43:48 crc kubenswrapper[5116]: I1208 17:43:48.317178 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" 
reflector="object-\"openshift-apiserver-operator\"/\"openshift-apiserver-operator-config\"" Dec 08 17:43:48 crc kubenswrapper[5116]: I1208 17:43:48.317608 5116 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-747b44746d-4msk8" Dec 08 17:43:48 crc kubenswrapper[5116]: I1208 17:43:48.317688 5116 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-dns-operator/dns-operator-799b87ffcd-wf75r"] Dec 08 17:43:48 crc kubenswrapper[5116]: I1208 17:43:48.317817 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console-operator\"/\"openshift-service-ca.crt\"" Dec 08 17:43:48 crc kubenswrapper[5116]: I1208 17:43:48.317869 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console-operator\"/\"console-operator-config\"" Dec 08 17:43:48 crc kubenswrapper[5116]: I1208 17:43:48.318818 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-apiserver-operator\"/\"openshift-apiserver-operator-serving-cert\"" Dec 08 17:43:48 crc kubenswrapper[5116]: I1208 17:43:48.318995 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver-operator\"/\"openshift-service-ca.crt\"" Dec 08 17:43:48 crc kubenswrapper[5116]: I1208 17:43:48.319169 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-apiserver-operator\"/\"openshift-apiserver-operator-dockercfg-6c46w\"" Dec 08 17:43:48 crc kubenswrapper[5116]: I1208 17:43:48.319317 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console\"/\"kube-root-ca.crt\"" Dec 08 17:43:48 crc kubenswrapper[5116]: I1208 17:43:48.319602 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-console-operator\"/\"console-operator-dockercfg-kl6m8\"" Dec 08 17:43:48 crc kubenswrapper[5116]: I1208 17:43:48.319613 5116 
reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver-operator\"/\"kube-root-ca.crt\"" Dec 08 17:43:48 crc kubenswrapper[5116]: I1208 17:43:48.322727 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver\"/\"trusted-ca-bundle\"" Dec 08 17:43:48 crc kubenswrapper[5116]: I1208 17:43:48.323163 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication-operator\"/\"trusted-ca-bundle\"" Dec 08 17:43:48 crc kubenswrapper[5116]: I1208 17:43:48.327864 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console-operator\"/\"trusted-ca\"" Dec 08 17:43:48 crc kubenswrapper[5116]: I1208 17:43:48.330187 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console\"/\"trusted-ca-bundle\"" Dec 08 17:43:48 crc kubenswrapper[5116]: I1208 17:43:48.330828 5116 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-86c45576b9-llz2b"] Dec 08 17:43:48 crc kubenswrapper[5116]: I1208 17:43:48.336656 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication\"/\"v4-0-config-system-trusted-ca-bundle\"" Dec 08 17:43:48 crc kubenswrapper[5116]: I1208 17:43:48.337974 5116 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-66587d64c8-kt94l"] Dec 08 17:43:48 crc kubenswrapper[5116]: I1208 17:43:48.338047 5116 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-799b87ffcd-wf75r" Dec 08 17:43:48 crc kubenswrapper[5116]: I1208 17:43:48.339160 5116 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-llz2b" Dec 08 17:43:48 crc kubenswrapper[5116]: I1208 17:43:48.340345 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-console\"/\"default-dockercfg-mdwwj\"" Dec 08 17:43:48 crc kubenswrapper[5116]: I1208 17:43:48.345983 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-dns-operator\"/\"metrics-tls\"" Dec 08 17:43:48 crc kubenswrapper[5116]: I1208 17:43:48.346294 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"cluster-image-registry-operator-dockercfg-ntnd7\"" Dec 08 17:43:48 crc kubenswrapper[5116]: I1208 17:43:48.347134 5116 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-etcd-operator/etcd-operator-69b85846b6-9xp42"] Dec 08 17:43:48 crc kubenswrapper[5116]: I1208 17:43:48.347796 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-dns-operator\"/\"kube-root-ca.crt\"" Dec 08 17:43:48 crc kubenswrapper[5116]: I1208 17:43:48.347871 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-dns-operator\"/\"openshift-service-ca.crt\"" Dec 08 17:43:48 crc kubenswrapper[5116]: I1208 17:43:48.348066 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-dns-operator\"/\"dns-operator-dockercfg-wbbsn\"" Dec 08 17:43:48 crc kubenswrapper[5116]: I1208 17:43:48.348281 5116 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-66587d64c8-kt94l" Dec 08 17:43:48 crc kubenswrapper[5116]: I1208 17:43:48.348459 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"image-registry-operator-tls\"" Dec 08 17:43:48 crc kubenswrapper[5116]: I1208 17:43:48.356234 5116 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-zm79w"] Dec 08 17:43:48 crc kubenswrapper[5116]: I1208 17:43:48.358424 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"installation-pull-secrets\"" Dec 08 17:43:48 crc kubenswrapper[5116]: I1208 17:43:48.358929 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"image-registry-tls\"" Dec 08 17:43:48 crc kubenswrapper[5116]: I1208 17:43:48.359180 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"registry-dockercfg-6w67b\"" Dec 08 17:43:48 crc kubenswrapper[5116]: I1208 17:43:48.361221 5116 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-9ddfb9f55-b2n2w"] Dec 08 17:43:48 crc kubenswrapper[5116]: I1208 17:43:48.361282 5116 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-776cdc94d6-mv9qd"] Dec 08 17:43:48 crc kubenswrapper[5116]: I1208 17:43:48.361299 5116 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-57tcr"] Dec 08 17:43:48 crc kubenswrapper[5116]: I1208 17:43:48.361509 5116 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-69b85846b6-9xp42" Dec 08 17:43:48 crc kubenswrapper[5116]: I1208 17:43:48.361640 5116 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-zm79w" Dec 08 17:43:48 crc kubenswrapper[5116]: I1208 17:43:48.367163 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-image-registry\"/\"trusted-ca\"" Dec 08 17:43:48 crc kubenswrapper[5116]: I1208 17:43:48.367496 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-etcd-operator\"/\"etcd-operator-serving-cert\"" Dec 08 17:43:48 crc kubenswrapper[5116]: I1208 17:43:48.367741 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-etcd-operator\"/\"openshift-service-ca.crt\"" Dec 08 17:43:48 crc kubenswrapper[5116]: I1208 17:43:48.369599 5116 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-547dbd544d-w2582"] Dec 08 17:43:48 crc kubenswrapper[5116]: I1208 17:43:48.369819 5116 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-57tcr" Dec 08 17:43:48 crc kubenswrapper[5116]: I1208 17:43:48.374776 5116 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-controller-f9cdd68f7-pp68b"] Dec 08 17:43:48 crc kubenswrapper[5116]: I1208 17:43:48.374959 5116 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-547dbd544d-w2582" Dec 08 17:43:48 crc kubenswrapper[5116]: I1208 17:43:48.378707 5116 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-ingress-operator/ingress-operator-6b9cb4dbcf-zn9l4"] Dec 08 17:43:48 crc kubenswrapper[5116]: I1208 17:43:48.378918 5116 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-pp68b" Dec 08 17:43:48 crc kubenswrapper[5116]: I1208 17:43:48.382799 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-system-ocp-branding-template\"" Dec 08 17:43:48 crc kubenswrapper[5116]: I1208 17:43:48.384868 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-etcd-operator\"/\"etcd-client\"" Dec 08 17:43:48 crc kubenswrapper[5116]: I1208 17:43:48.387362 5116 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-multus/cni-sysctl-allowlist-ds-qsgps"] Dec 08 17:43:48 crc kubenswrapper[5116]: I1208 17:43:48.388375 5116 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-zn9l4" Dec 08 17:43:48 crc kubenswrapper[5116]: I1208 17:43:48.390449 5116 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-44l68"] Dec 08 17:43:48 crc kubenswrapper[5116]: I1208 17:43:48.390696 5116 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/cni-sysctl-allowlist-ds-qsgps" Dec 08 17:43:48 crc kubenswrapper[5116]: I1208 17:43:48.393185 5116 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-ingress/router-default-68cf44c8b8-hv2nc"] Dec 08 17:43:48 crc kubenswrapper[5116]: I1208 17:43:48.393851 5116 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-44l68" Dec 08 17:43:48 crc kubenswrapper[5116]: I1208 17:43:48.396848 5116 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-6dz8n"] Dec 08 17:43:48 crc kubenswrapper[5116]: I1208 17:43:48.397187 5116 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress/router-default-68cf44c8b8-hv2nc" Dec 08 17:43:48 crc kubenswrapper[5116]: I1208 17:43:48.400050 5116 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-operator-67c9d58cbb-lnhtb"] Dec 08 17:43:48 crc kubenswrapper[5116]: I1208 17:43:48.400372 5116 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-6dz8n" Dec 08 17:43:48 crc kubenswrapper[5116]: I1208 17:43:48.403674 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-etcd-operator\"/\"etcd-operator-dockercfg-4vdnc\"" Dec 08 17:43:48 crc kubenswrapper[5116]: I1208 17:43:48.405361 5116 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-admission-controller-69db94689b-9bt88"] Dec 08 17:43:48 crc kubenswrapper[5116]: I1208 17:43:48.405498 5116 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-lnhtb" Dec 08 17:43:48 crc kubenswrapper[5116]: I1208 17:43:48.409513 5116 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-n9kbg"] Dec 08 17:43:48 crc kubenswrapper[5116]: I1208 17:43:48.409697 5116 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-admission-controller-69db94689b-9bt88" Dec 08 17:43:48 crc kubenswrapper[5116]: I1208 17:43:48.410430 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hrjlh\" (UniqueName: \"kubernetes.io/projected/d4f231ba-ace1-4242-a8cb-04e2904f95e9-kube-api-access-hrjlh\") pod \"console-operator-67c89758df-wbzsx\" (UID: \"d4f231ba-ace1-4242-a8cb-04e2904f95e9\") " pod="openshift-console-operator/console-operator-67c89758df-wbzsx" Dec 08 17:43:48 crc kubenswrapper[5116]: I1208 17:43:48.410475 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/94ce1afb-999a-45b5-847a-b3a71aa87c89-metrics-tls\") pod \"dns-operator-799b87ffcd-wf75r\" (UID: \"94ce1afb-999a-45b5-847a-b3a71aa87c89\") " pod="openshift-dns-operator/dns-operator-799b87ffcd-wf75r" Dec 08 17:43:48 crc kubenswrapper[5116]: I1208 17:43:48.410513 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/b8dbe374-c944-4fd4-bb80-8dc26c3e5d24-v4-0-config-system-router-certs\") pod \"oauth-openshift-66458b6674-qnwj9\" (UID: \"b8dbe374-c944-4fd4-bb80-8dc26c3e5d24\") " pod="openshift-authentication/oauth-openshift-66458b6674-qnwj9" Dec 08 17:43:48 crc kubenswrapper[5116]: I1208 17:43:48.410542 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/b8dbe374-c944-4fd4-bb80-8dc26c3e5d24-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-66458b6674-qnwj9\" (UID: \"b8dbe374-c944-4fd4-bb80-8dc26c3e5d24\") " pod="openshift-authentication/oauth-openshift-66458b6674-qnwj9" Dec 08 17:43:48 crc kubenswrapper[5116]: I1208 17:43:48.410566 5116 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/94ce1afb-999a-45b5-847a-b3a71aa87c89-tmp-dir\") pod \"dns-operator-799b87ffcd-wf75r\" (UID: \"94ce1afb-999a-45b5-847a-b3a71aa87c89\") " pod="openshift-dns-operator/dns-operator-799b87ffcd-wf75r" Dec 08 17:43:48 crc kubenswrapper[5116]: I1208 17:43:48.410590 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/5b69f20b-5284-4c2e-b147-abcc5441c977-tmp\") pod \"cluster-image-registry-operator-86c45576b9-llz2b\" (UID: \"5b69f20b-5284-4c2e-b147-abcc5441c977\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-llz2b" Dec 08 17:43:48 crc kubenswrapper[5116]: I1208 17:43:48.410624 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hfsz6\" (UniqueName: \"kubernetes.io/projected/5b69f20b-5284-4c2e-b147-abcc5441c977-kube-api-access-hfsz6\") pod \"cluster-image-registry-operator-86c45576b9-llz2b\" (UID: \"5b69f20b-5284-4c2e-b147-abcc5441c977\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-llz2b" Dec 08 17:43:48 crc kubenswrapper[5116]: I1208 17:43:48.410725 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/d73c8661-d51c-4d6e-a981-e186a3fc1964-cni-sysctl-allowlist\") pod \"cni-sysctl-allowlist-ds-qsgps\" (UID: \"d73c8661-d51c-4d6e-a981-e186a3fc1964\") " pod="openshift-multus/cni-sysctl-allowlist-ds-qsgps" Dec 08 17:43:48 crc kubenswrapper[5116]: I1208 17:43:48.410753 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a4183c4d-f709-4d5b-a9a4-180284f37cc8-client-ca\") pod \"route-controller-manager-776cdc94d6-mv9qd\" (UID: 
\"a4183c4d-f709-4d5b-a9a4-180284f37cc8\") " pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-mv9qd" Dec 08 17:43:48 crc kubenswrapper[5116]: I1208 17:43:48.410771 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/eaf2ae84-8492-41c0-b678-ab302371258a-console-config\") pod \"console-64d44f6ddf-l4b2c\" (UID: \"eaf2ae84-8492-41c0-b678-ab302371258a\") " pod="openshift-console/console-64d44f6ddf-l4b2c" Dec 08 17:43:48 crc kubenswrapper[5116]: I1208 17:43:48.410833 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted-pem\" (UniqueName: \"kubernetes.io/empty-dir/5b69f20b-5284-4c2e-b147-abcc5441c977-ca-trust-extracted-pem\") pod \"cluster-image-registry-operator-86c45576b9-llz2b\" (UID: \"5b69f20b-5284-4c2e-b147-abcc5441c977\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-llz2b" Dec 08 17:43:48 crc kubenswrapper[5116]: I1208 17:43:48.410854 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-xb62p\" (UniqueName: \"kubernetes.io/projected/79b65775-2e2c-4bad-bf4b-b8c4893e6463-kube-api-access-xb62p\") pod \"cluster-samples-operator-6b564684c8-6rbxq\" (UID: \"79b65775-2e2c-4bad-bf4b-b8c4893e6463\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-6rbxq" Dec 08 17:43:48 crc kubenswrapper[5116]: I1208 17:43:48.410877 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a1c39b6e-d778-4a0e-aa34-5a4765f9c4fc-trusted-ca-bundle\") pod \"authentication-operator-7f5c659b84-x825q\" (UID: \"a1c39b6e-d778-4a0e-aa34-5a4765f9c4fc\") " pod="openshift-authentication-operator/authentication-operator-7f5c659b84-x825q" Dec 08 17:43:48 crc kubenswrapper[5116]: I1208 17:43:48.410899 5116 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/d73c8661-d51c-4d6e-a981-e186a3fc1964-tuning-conf-dir\") pod \"cni-sysctl-allowlist-ds-qsgps\" (UID: \"d73c8661-d51c-4d6e-a981-e186a3fc1964\") " pod="openshift-multus/cni-sysctl-allowlist-ds-qsgps" Dec 08 17:43:48 crc kubenswrapper[5116]: I1208 17:43:48.410922 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d8hkm\" (UniqueName: \"kubernetes.io/projected/d73c8661-d51c-4d6e-a981-e186a3fc1964-kube-api-access-d8hkm\") pod \"cni-sysctl-allowlist-ds-qsgps\" (UID: \"d73c8661-d51c-4d6e-a981-e186a3fc1964\") " pod="openshift-multus/cni-sysctl-allowlist-ds-qsgps" Dec 08 17:43:48 crc kubenswrapper[5116]: I1208 17:43:48.410949 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/4f7ef3d6-0bc3-4566-8735-c4a2389d4c84-encryption-config\") pod \"apiserver-9ddfb9f55-b2n2w\" (UID: \"4f7ef3d6-0bc3-4566-8735-c4a2389d4c84\") " pod="openshift-apiserver/apiserver-9ddfb9f55-b2n2w" Dec 08 17:43:48 crc kubenswrapper[5116]: I1208 17:43:48.410977 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cg2t7\" (UniqueName: \"kubernetes.io/projected/776da53b-d740-421c-a867-43239bb9ebc6-kube-api-access-cg2t7\") pod \"openshift-controller-manager-operator-686468bdd5-zm79w\" (UID: \"776da53b-d740-421c-a867-43239bb9ebc6\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-zm79w" Dec 08 17:43:48 crc kubenswrapper[5116]: I1208 17:43:48.411018 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/e71f0431-13f2-46f2-8673-f26a5c9d0cf6-machine-approver-tls\") pod \"machine-approver-54c688565-8h884\" (UID: 
\"e71f0431-13f2-46f2-8673-f26a5c9d0cf6\") " pod="openshift-cluster-machine-approver/machine-approver-54c688565-8h884" Dec 08 17:43:48 crc kubenswrapper[5116]: I1208 17:43:48.411038 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/5b69f20b-5284-4c2e-b147-abcc5441c977-bound-sa-token\") pod \"cluster-image-registry-operator-86c45576b9-llz2b\" (UID: \"5b69f20b-5284-4c2e-b147-abcc5441c977\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-llz2b" Dec 08 17:43:48 crc kubenswrapper[5116]: I1208 17:43:48.411068 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6973f5d-d174-4643-814d-e929acd898ba-trusted-ca\") pod \"ingress-operator-6b9cb4dbcf-zn9l4\" (UID: \"b6973f5d-d174-4643-814d-e929acd898ba\") " pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-zn9l4" Dec 08 17:43:48 crc kubenswrapper[5116]: I1208 17:43:48.411118 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/0267ec2e-8f60-4739-aae7-2a133c6f2809-encryption-config\") pod \"apiserver-8596bd845d-nlzmf\" (UID: \"0267ec2e-8f60-4739-aae7-2a133c6f2809\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-nlzmf" Dec 08 17:43:48 crc kubenswrapper[5116]: I1208 17:43:48.411139 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/4f7ef3d6-0bc3-4566-8735-c4a2389d4c84-audit-dir\") pod \"apiserver-9ddfb9f55-b2n2w\" (UID: \"4f7ef3d6-0bc3-4566-8735-c4a2389d4c84\") " pod="openshift-apiserver/apiserver-9ddfb9f55-b2n2w" Dec 08 17:43:48 crc kubenswrapper[5116]: I1208 17:43:48.411172 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: 
\"kubernetes.io/projected/b6973f5d-d174-4643-814d-e929acd898ba-bound-sa-token\") pod \"ingress-operator-6b9cb4dbcf-zn9l4\" (UID: \"b6973f5d-d174-4643-814d-e929acd898ba\") " pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-zn9l4" Dec 08 17:43:48 crc kubenswrapper[5116]: I1208 17:43:48.411355 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-z6dqw\" (UniqueName: \"kubernetes.io/projected/0267ec2e-8f60-4739-aae7-2a133c6f2809-kube-api-access-z6dqw\") pod \"apiserver-8596bd845d-nlzmf\" (UID: \"0267ec2e-8f60-4739-aae7-2a133c6f2809\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-nlzmf" Dec 08 17:43:48 crc kubenswrapper[5116]: I1208 17:43:48.411375 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/4f7ef3d6-0bc3-4566-8735-c4a2389d4c84-etcd-serving-ca\") pod \"apiserver-9ddfb9f55-b2n2w\" (UID: \"4f7ef3d6-0bc3-4566-8735-c4a2389d4c84\") " pod="openshift-apiserver/apiserver-9ddfb9f55-b2n2w" Dec 08 17:43:48 crc kubenswrapper[5116]: I1208 17:43:48.411393 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a1c39b6e-d778-4a0e-aa34-5a4765f9c4fc-serving-cert\") pod \"authentication-operator-7f5c659b84-x825q\" (UID: \"a1c39b6e-d778-4a0e-aa34-5a4765f9c4fc\") " pod="openshift-authentication-operator/authentication-operator-7f5c659b84-x825q" Dec 08 17:43:48 crc kubenswrapper[5116]: I1208 17:43:48.411411 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4fd57cd5-0d45-4b70-87a2-0c73d9c9b61e-config\") pod \"controller-manager-65b6cccf98-99grc\" (UID: \"4fd57cd5-0d45-4b70-87a2-0c73d9c9b61e\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-99grc" Dec 08 17:43:48 crc kubenswrapper[5116]: I1208 17:43:48.411434 5116 reconciler_common.go:224] 
"operationExecutor.MountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/79b65775-2e2c-4bad-bf4b-b8c4893e6463-samples-operator-tls\") pod \"cluster-samples-operator-6b564684c8-6rbxq\" (UID: \"79b65775-2e2c-4bad-bf4b-b8c4893e6463\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-6rbxq" Dec 08 17:43:48 crc kubenswrapper[5116]: I1208 17:43:48.411454 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d4f231ba-ace1-4242-a8cb-04e2904f95e9-serving-cert\") pod \"console-operator-67c89758df-wbzsx\" (UID: \"d4f231ba-ace1-4242-a8cb-04e2904f95e9\") " pod="openshift-console-operator/console-operator-67c89758df-wbzsx" Dec 08 17:43:48 crc kubenswrapper[5116]: I1208 17:43:48.411480 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/4fd57cd5-0d45-4b70-87a2-0c73d9c9b61e-proxy-ca-bundles\") pod \"controller-manager-65b6cccf98-99grc\" (UID: \"4fd57cd5-0d45-4b70-87a2-0c73d9c9b61e\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-99grc" Dec 08 17:43:48 crc kubenswrapper[5116]: I1208 17:43:48.411505 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/b8dbe374-c944-4fd4-bb80-8dc26c3e5d24-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-66458b6674-qnwj9\" (UID: \"b8dbe374-c944-4fd4-bb80-8dc26c3e5d24\") " pod="openshift-authentication/oauth-openshift-66458b6674-qnwj9" Dec 08 17:43:48 crc kubenswrapper[5116]: I1208 17:43:48.411528 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a4183c4d-f709-4d5b-a9a4-180284f37cc8-config\") pod \"route-controller-manager-776cdc94d6-mv9qd\" (UID: 
\"a4183c4d-f709-4d5b-a9a4-180284f37cc8\") " pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-mv9qd" Dec 08 17:43:48 crc kubenswrapper[5116]: I1208 17:43:48.411543 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/eaf2ae84-8492-41c0-b678-ab302371258a-trusted-ca-bundle\") pod \"console-64d44f6ddf-l4b2c\" (UID: \"eaf2ae84-8492-41c0-b678-ab302371258a\") " pod="openshift-console/console-64d44f6ddf-l4b2c" Dec 08 17:43:48 crc kubenswrapper[5116]: I1208 17:43:48.411573 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/b8dbe374-c944-4fd4-bb80-8dc26c3e5d24-v4-0-config-user-template-login\") pod \"oauth-openshift-66458b6674-qnwj9\" (UID: \"b8dbe374-c944-4fd4-bb80-8dc26c3e5d24\") " pod="openshift-authentication/oauth-openshift-66458b6674-qnwj9" Dec 08 17:43:48 crc kubenswrapper[5116]: I1208 17:43:48.412095 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/eaf2ae84-8492-41c0-b678-ab302371258a-console-oauth-config\") pod \"console-64d44f6ddf-l4b2c\" (UID: \"eaf2ae84-8492-41c0-b678-ab302371258a\") " pod="openshift-console/console-64d44f6ddf-l4b2c" Dec 08 17:43:48 crc kubenswrapper[5116]: I1208 17:43:48.412150 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/3a30015c-60d9-4474-8417-731fd67ea187-machine-api-operator-tls\") pod \"machine-api-operator-755bb95488-dx5gf\" (UID: \"3a30015c-60d9-4474-8417-731fd67ea187\") " pod="openshift-machine-api/machine-api-operator-755bb95488-dx5gf" Dec 08 17:43:48 crc kubenswrapper[5116]: I1208 17:43:48.412172 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" 
(UniqueName: \"kubernetes.io/configmap/a1c39b6e-d778-4a0e-aa34-5a4765f9c4fc-config\") pod \"authentication-operator-7f5c659b84-x825q\" (UID: \"a1c39b6e-d778-4a0e-aa34-5a4765f9c4fc\") " pod="openshift-authentication-operator/authentication-operator-7f5c659b84-x825q" Dec 08 17:43:48 crc kubenswrapper[5116]: I1208 17:43:48.412200 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-v78rk\" (UniqueName: \"kubernetes.io/projected/71d475ea-b97a-489a-8c80-1a30614dccb5-kube-api-access-v78rk\") pod \"openshift-config-operator-5777786469-86sn8\" (UID: \"71d475ea-b97a-489a-8c80-1a30614dccb5\") " pod="openshift-config-operator/openshift-config-operator-5777786469-86sn8" Dec 08 17:43:48 crc kubenswrapper[5116]: I1208 17:43:48.412224 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-svnfj\" (UniqueName: \"kubernetes.io/projected/3a30015c-60d9-4474-8417-731fd67ea187-kube-api-access-svnfj\") pod \"machine-api-operator-755bb95488-dx5gf\" (UID: \"3a30015c-60d9-4474-8417-731fd67ea187\") " pod="openshift-machine-api/machine-api-operator-755bb95488-dx5gf" Dec 08 17:43:48 crc kubenswrapper[5116]: I1208 17:43:48.412277 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/4f7ef3d6-0bc3-4566-8735-c4a2389d4c84-etcd-client\") pod \"apiserver-9ddfb9f55-b2n2w\" (UID: \"4f7ef3d6-0bc3-4566-8735-c4a2389d4c84\") " pod="openshift-apiserver/apiserver-9ddfb9f55-b2n2w" Dec 08 17:43:48 crc kubenswrapper[5116]: I1208 17:43:48.412298 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bhpkw\" (UniqueName: \"kubernetes.io/projected/a4183c4d-f709-4d5b-a9a4-180284f37cc8-kube-api-access-bhpkw\") pod \"route-controller-manager-776cdc94d6-mv9qd\" (UID: \"a4183c4d-f709-4d5b-a9a4-180284f37cc8\") " 
pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-mv9qd" Dec 08 17:43:48 crc kubenswrapper[5116]: I1208 17:43:48.412321 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nhzmj\" (UniqueName: \"kubernetes.io/projected/c60dd3fb-226c-4117-a898-4efde2c99ca8-kube-api-access-nhzmj\") pod \"openshift-apiserver-operator-846cbfc458-hjdgl\" (UID: \"c60dd3fb-226c-4117-a898-4efde2c99ca8\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-hjdgl" Dec 08 17:43:48 crc kubenswrapper[5116]: I1208 17:43:48.412341 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ready\" (UniqueName: \"kubernetes.io/empty-dir/d73c8661-d51c-4d6e-a981-e186a3fc1964-ready\") pod \"cni-sysctl-allowlist-ds-qsgps\" (UID: \"d73c8661-d51c-4d6e-a981-e186a3fc1964\") " pod="openshift-multus/cni-sysctl-allowlist-ds-qsgps" Dec 08 17:43:48 crc kubenswrapper[5116]: I1208 17:43:48.412381 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-f5zlg\" (UniqueName: \"kubernetes.io/projected/4fd57cd5-0d45-4b70-87a2-0c73d9c9b61e-kube-api-access-f5zlg\") pod \"controller-manager-65b6cccf98-99grc\" (UID: \"4fd57cd5-0d45-4b70-87a2-0c73d9c9b61e\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-99grc" Dec 08 17:43:48 crc kubenswrapper[5116]: I1208 17:43:48.412403 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/0267ec2e-8f60-4739-aae7-2a133c6f2809-etcd-serving-ca\") pod \"apiserver-8596bd845d-nlzmf\" (UID: \"0267ec2e-8f60-4739-aae7-2a133c6f2809\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-nlzmf" Dec 08 17:43:48 crc kubenswrapper[5116]: I1208 17:43:48.412425 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/0267ec2e-8f60-4739-aae7-2a133c6f2809-serving-cert\") pod \"apiserver-8596bd845d-nlzmf\" (UID: \"0267ec2e-8f60-4739-aae7-2a133c6f2809\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-nlzmf" Dec 08 17:43:48 crc kubenswrapper[5116]: I1208 17:43:48.412444 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0267ec2e-8f60-4739-aae7-2a133c6f2809-trusted-ca-bundle\") pod \"apiserver-8596bd845d-nlzmf\" (UID: \"0267ec2e-8f60-4739-aae7-2a133c6f2809\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-nlzmf" Dec 08 17:43:48 crc kubenswrapper[5116]: I1208 17:43:48.412469 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3a30015c-60d9-4474-8417-731fd67ea187-config\") pod \"machine-api-operator-755bb95488-dx5gf\" (UID: \"3a30015c-60d9-4474-8417-731fd67ea187\") " pod="openshift-machine-api/machine-api-operator-755bb95488-dx5gf" Dec 08 17:43:48 crc kubenswrapper[5116]: I1208 17:43:48.412493 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/71d475ea-b97a-489a-8c80-1a30614dccb5-available-featuregates\") pod \"openshift-config-operator-5777786469-86sn8\" (UID: \"71d475ea-b97a-489a-8c80-1a30614dccb5\") " pod="openshift-config-operator/openshift-config-operator-5777786469-86sn8" Dec 08 17:43:48 crc kubenswrapper[5116]: I1208 17:43:48.412525 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4f7ef3d6-0bc3-4566-8735-c4a2389d4c84-serving-cert\") pod \"apiserver-9ddfb9f55-b2n2w\" (UID: \"4f7ef3d6-0bc3-4566-8735-c4a2389d4c84\") " pod="openshift-apiserver/apiserver-9ddfb9f55-b2n2w" Dec 08 17:43:48 crc kubenswrapper[5116]: I1208 17:43:48.412550 5116 reconciler_common.go:224] "operationExecutor.MountVolume started 
for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4f7ef3d6-0bc3-4566-8735-c4a2389d4c84-trusted-ca-bundle\") pod \"apiserver-9ddfb9f55-b2n2w\" (UID: \"4f7ef3d6-0bc3-4566-8735-c4a2389d4c84\") " pod="openshift-apiserver/apiserver-9ddfb9f55-b2n2w" Dec 08 17:43:48 crc kubenswrapper[5116]: I1208 17:43:48.412588 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e71f0431-13f2-46f2-8673-f26a5c9d0cf6-config\") pod \"machine-approver-54c688565-8h884\" (UID: \"e71f0431-13f2-46f2-8673-f26a5c9d0cf6\") " pod="openshift-cluster-machine-approver/machine-approver-54c688565-8h884" Dec 08 17:43:48 crc kubenswrapper[5116]: I1208 17:43:48.412623 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/b8dbe374-c944-4fd4-bb80-8dc26c3e5d24-audit-dir\") pod \"oauth-openshift-66458b6674-qnwj9\" (UID: \"b8dbe374-c944-4fd4-bb80-8dc26c3e5d24\") " pod="openshift-authentication/oauth-openshift-66458b6674-qnwj9" Dec 08 17:43:48 crc kubenswrapper[5116]: I1208 17:43:48.412644 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/b8dbe374-c944-4fd4-bb80-8dc26c3e5d24-v4-0-config-system-cliconfig\") pod \"oauth-openshift-66458b6674-qnwj9\" (UID: \"b8dbe374-c944-4fd4-bb80-8dc26c3e5d24\") " pod="openshift-authentication/oauth-openshift-66458b6674-qnwj9" Dec 08 17:43:48 crc kubenswrapper[5116]: I1208 17:43:48.412665 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/b8dbe374-c944-4fd4-bb80-8dc26c3e5d24-v4-0-config-user-template-error\") pod \"oauth-openshift-66458b6674-qnwj9\" (UID: \"b8dbe374-c944-4fd4-bb80-8dc26c3e5d24\") " pod="openshift-authentication/oauth-openshift-66458b6674-qnwj9" Dec 08 17:43:48 crc kubenswrapper[5116]: 
I1208 17:43:48.412686 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/a4183c4d-f709-4d5b-a9a4-180284f37cc8-tmp\") pod \"route-controller-manager-776cdc94d6-mv9qd\" (UID: \"a4183c4d-f709-4d5b-a9a4-180284f37cc8\") " pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-mv9qd" Dec 08 17:43:48 crc kubenswrapper[5116]: I1208 17:43:48.412706 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ltrx4\" (UniqueName: \"kubernetes.io/projected/5471dfd3-e36e-405a-a517-2c1e2bc10e62-kube-api-access-ltrx4\") pod \"marketplace-operator-547dbd544d-w2582\" (UID: \"5471dfd3-e36e-405a-a517-2c1e2bc10e62\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-w2582" Dec 08 17:43:48 crc kubenswrapper[5116]: I1208 17:43:48.412708 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a1c39b6e-d778-4a0e-aa34-5a4765f9c4fc-trusted-ca-bundle\") pod \"authentication-operator-7f5c659b84-x825q\" (UID: \"a1c39b6e-d778-4a0e-aa34-5a4765f9c4fc\") " pod="openshift-authentication-operator/authentication-operator-7f5c659b84-x825q" Dec 08 17:43:48 crc kubenswrapper[5116]: I1208 17:43:48.412731 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-9gqcp\" (UniqueName: \"kubernetes.io/projected/b8dbe374-c944-4fd4-bb80-8dc26c3e5d24-kube-api-access-9gqcp\") pod \"oauth-openshift-66458b6674-qnwj9\" (UID: \"b8dbe374-c944-4fd4-bb80-8dc26c3e5d24\") " pod="openshift-authentication/oauth-openshift-66458b6674-qnwj9" Dec 08 17:43:48 crc kubenswrapper[5116]: I1208 17:43:48.412747 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c60dd3fb-226c-4117-a898-4efde2c99ca8-serving-cert\") pod 
\"openshift-apiserver-operator-846cbfc458-hjdgl\" (UID: \"c60dd3fb-226c-4117-a898-4efde2c99ca8\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-hjdgl" Dec 08 17:43:48 crc kubenswrapper[5116]: I1208 17:43:48.412762 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c60dd3fb-226c-4117-a898-4efde2c99ca8-config\") pod \"openshift-apiserver-operator-846cbfc458-hjdgl\" (UID: \"c60dd3fb-226c-4117-a898-4efde2c99ca8\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-hjdgl" Dec 08 17:43:48 crc kubenswrapper[5116]: I1208 17:43:48.412777 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d4f231ba-ace1-4242-a8cb-04e2904f95e9-config\") pod \"console-operator-67c89758df-wbzsx\" (UID: \"d4f231ba-ace1-4242-a8cb-04e2904f95e9\") " pod="openshift-console-operator/console-operator-67c89758df-wbzsx" Dec 08 17:43:48 crc kubenswrapper[5116]: I1208 17:43:48.412795 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dp6nq\" (UniqueName: \"kubernetes.io/projected/d3dc86ab-217d-4d86-8381-16465ee204c8-kube-api-access-dp6nq\") pod \"machine-config-controller-f9cdd68f7-pp68b\" (UID: \"d3dc86ab-217d-4d86-8381-16465ee204c8\") " pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-pp68b" Dec 08 17:43:48 crc kubenswrapper[5116]: I1208 17:43:48.412811 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/776da53b-d740-421c-a867-43239bb9ebc6-serving-cert\") pod \"openshift-controller-manager-operator-686468bdd5-zm79w\" (UID: \"776da53b-d740-421c-a867-43239bb9ebc6\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-zm79w" 
Dec 08 17:43:48 crc kubenswrapper[5116]: I1208 17:43:48.412827 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rxsxs\" (UniqueName: \"kubernetes.io/projected/e71c8014-5266-4483-8037-e8d9e7995c1b-kube-api-access-rxsxs\") pod \"downloads-747b44746d-4msk8\" (UID: \"e71c8014-5266-4483-8037-e8d9e7995c1b\") " pod="openshift-console/downloads-747b44746d-4msk8" Dec 08 17:43:48 crc kubenswrapper[5116]: I1208 17:43:48.412844 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/15fa3f1d-6230-4602-a46a-1f9b84a147fa-tmp-dir\") pod \"kube-apiserver-operator-575994946d-57tcr\" (UID: \"15fa3f1d-6230-4602-a46a-1f9b84a147fa\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-57tcr" Dec 08 17:43:48 crc kubenswrapper[5116]: I1208 17:43:48.412861 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/5b69f20b-5284-4c2e-b147-abcc5441c977-image-registry-operator-tls\") pod \"cluster-image-registry-operator-86c45576b9-llz2b\" (UID: \"5b69f20b-5284-4c2e-b147-abcc5441c977\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-llz2b" Dec 08 17:43:48 crc kubenswrapper[5116]: I1208 17:43:48.412890 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/4fd57cd5-0d45-4b70-87a2-0c73d9c9b61e-client-ca\") pod \"controller-manager-65b6cccf98-99grc\" (UID: \"4fd57cd5-0d45-4b70-87a2-0c73d9c9b61e\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-99grc" Dec 08 17:43:48 crc kubenswrapper[5116]: I1208 17:43:48.412943 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: 
\"kubernetes.io/empty-dir/4fd57cd5-0d45-4b70-87a2-0c73d9c9b61e-tmp\") pod \"controller-manager-65b6cccf98-99grc\" (UID: \"4fd57cd5-0d45-4b70-87a2-0c73d9c9b61e\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-99grc" Dec 08 17:43:48 crc kubenswrapper[5116]: I1208 17:43:48.412980 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/4f7ef3d6-0bc3-4566-8735-c4a2389d4c84-audit\") pod \"apiserver-9ddfb9f55-b2n2w\" (UID: \"4f7ef3d6-0bc3-4566-8735-c4a2389d4c84\") " pod="openshift-apiserver/apiserver-9ddfb9f55-b2n2w" Dec 08 17:43:48 crc kubenswrapper[5116]: I1208 17:43:48.413004 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-x4x4h\" (UniqueName: \"kubernetes.io/projected/4f7ef3d6-0bc3-4566-8735-c4a2389d4c84-kube-api-access-x4x4h\") pod \"apiserver-9ddfb9f55-b2n2w\" (UID: \"4f7ef3d6-0bc3-4566-8735-c4a2389d4c84\") " pod="openshift-apiserver/apiserver-9ddfb9f55-b2n2w" Dec 08 17:43:48 crc kubenswrapper[5116]: I1208 17:43:48.413026 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/b8dbe374-c944-4fd4-bb80-8dc26c3e5d24-audit-policies\") pod \"oauth-openshift-66458b6674-qnwj9\" (UID: \"b8dbe374-c944-4fd4-bb80-8dc26c3e5d24\") " pod="openshift-authentication/oauth-openshift-66458b6674-qnwj9" Dec 08 17:43:48 crc kubenswrapper[5116]: I1208 17:43:48.413048 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/eaf2ae84-8492-41c0-b678-ab302371258a-oauth-serving-cert\") pod \"console-64d44f6ddf-l4b2c\" (UID: \"eaf2ae84-8492-41c0-b678-ab302371258a\") " pod="openshift-console/console-64d44f6ddf-l4b2c" Dec 08 17:43:48 crc kubenswrapper[5116]: I1208 17:43:48.413071 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/5471dfd3-e36e-405a-a517-2c1e2bc10e62-marketplace-operator-metrics\") pod \"marketplace-operator-547dbd544d-w2582\" (UID: \"5471dfd3-e36e-405a-a517-2c1e2bc10e62\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-w2582" Dec 08 17:43:48 crc kubenswrapper[5116]: I1208 17:43:48.413091 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/15fa3f1d-6230-4602-a46a-1f9b84a147fa-kube-api-access\") pod \"kube-apiserver-operator-575994946d-57tcr\" (UID: \"15fa3f1d-6230-4602-a46a-1f9b84a147fa\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-57tcr" Dec 08 17:43:48 crc kubenswrapper[5116]: I1208 17:43:48.413116 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/b8dbe374-c944-4fd4-bb80-8dc26c3e5d24-v4-0-config-system-serving-cert\") pod \"oauth-openshift-66458b6674-qnwj9\" (UID: \"b8dbe374-c944-4fd4-bb80-8dc26c3e5d24\") " pod="openshift-authentication/oauth-openshift-66458b6674-qnwj9" Dec 08 17:43:48 crc kubenswrapper[5116]: I1208 17:43:48.413164 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/b8dbe374-c944-4fd4-bb80-8dc26c3e5d24-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-66458b6674-qnwj9\" (UID: \"b8dbe374-c944-4fd4-bb80-8dc26c3e5d24\") " pod="openshift-authentication/oauth-openshift-66458b6674-qnwj9" Dec 08 17:43:48 crc kubenswrapper[5116]: I1208 17:43:48.413189 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/5471dfd3-e36e-405a-a517-2c1e2bc10e62-tmp\") pod \"marketplace-operator-547dbd544d-w2582\" (UID: 
\"5471dfd3-e36e-405a-a517-2c1e2bc10e62\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-w2582" Dec 08 17:43:48 crc kubenswrapper[5116]: I1208 17:43:48.413207 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/776da53b-d740-421c-a867-43239bb9ebc6-tmp\") pod \"openshift-controller-manager-operator-686468bdd5-zm79w\" (UID: \"776da53b-d740-421c-a867-43239bb9ebc6\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-zm79w" Dec 08 17:43:48 crc kubenswrapper[5116]: I1208 17:43:48.413227 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/0267ec2e-8f60-4739-aae7-2a133c6f2809-audit-policies\") pod \"apiserver-8596bd845d-nlzmf\" (UID: \"0267ec2e-8f60-4739-aae7-2a133c6f2809\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-nlzmf" Dec 08 17:43:48 crc kubenswrapper[5116]: I1208 17:43:48.413263 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/3a30015c-60d9-4474-8417-731fd67ea187-images\") pod \"machine-api-operator-755bb95488-dx5gf\" (UID: \"3a30015c-60d9-4474-8417-731fd67ea187\") " pod="openshift-machine-api/machine-api-operator-755bb95488-dx5gf" Dec 08 17:43:48 crc kubenswrapper[5116]: I1208 17:43:48.413281 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4f7ef3d6-0bc3-4566-8735-c4a2389d4c84-config\") pod \"apiserver-9ddfb9f55-b2n2w\" (UID: \"4f7ef3d6-0bc3-4566-8735-c4a2389d4c84\") " pod="openshift-apiserver/apiserver-9ddfb9f55-b2n2w" Dec 08 17:43:48 crc kubenswrapper[5116]: I1208 17:43:48.413297 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j7j8q\" (UniqueName: 
\"kubernetes.io/projected/b6973f5d-d174-4643-814d-e929acd898ba-kube-api-access-j7j8q\") pod \"ingress-operator-6b9cb4dbcf-zn9l4\" (UID: \"b6973f5d-d174-4643-814d-e929acd898ba\") " pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-zn9l4" Dec 08 17:43:48 crc kubenswrapper[5116]: I1208 17:43:48.413312 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/5b69f20b-5284-4c2e-b147-abcc5441c977-trusted-ca\") pod \"cluster-image-registry-operator-86c45576b9-llz2b\" (UID: \"5b69f20b-5284-4c2e-b147-abcc5441c977\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-llz2b" Dec 08 17:43:48 crc kubenswrapper[5116]: I1208 17:43:48.413338 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/d4f231ba-ace1-4242-a8cb-04e2904f95e9-trusted-ca\") pod \"console-operator-67c89758df-wbzsx\" (UID: \"d4f231ba-ace1-4242-a8cb-04e2904f95e9\") " pod="openshift-console-operator/console-operator-67c89758df-wbzsx" Dec 08 17:43:48 crc kubenswrapper[5116]: I1208 17:43:48.413359 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/e71f0431-13f2-46f2-8673-f26a5c9d0cf6-auth-proxy-config\") pod \"machine-approver-54c688565-8h884\" (UID: \"e71f0431-13f2-46f2-8673-f26a5c9d0cf6\") " pod="openshift-cluster-machine-approver/machine-approver-54c688565-8h884" Dec 08 17:43:48 crc kubenswrapper[5116]: I1208 17:43:48.413375 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/4f7ef3d6-0bc3-4566-8735-c4a2389d4c84-node-pullsecrets\") pod \"apiserver-9ddfb9f55-b2n2w\" (UID: \"4f7ef3d6-0bc3-4566-8735-c4a2389d4c84\") " pod="openshift-apiserver/apiserver-9ddfb9f55-b2n2w" Dec 08 17:43:48 crc kubenswrapper[5116]: 
I1208 17:43:48.413383 5116 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-hzpcx"] Dec 08 17:43:48 crc kubenswrapper[5116]: I1208 17:43:48.414414 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/4f7ef3d6-0bc3-4566-8735-c4a2389d4c84-etcd-serving-ca\") pod \"apiserver-9ddfb9f55-b2n2w\" (UID: \"4f7ef3d6-0bc3-4566-8735-c4a2389d4c84\") " pod="openshift-apiserver/apiserver-9ddfb9f55-b2n2w" Dec 08 17:43:48 crc kubenswrapper[5116]: I1208 17:43:48.415625 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/4f7ef3d6-0bc3-4566-8735-c4a2389d4c84-audit-dir\") pod \"apiserver-9ddfb9f55-b2n2w\" (UID: \"4f7ef3d6-0bc3-4566-8735-c4a2389d4c84\") " pod="openshift-apiserver/apiserver-9ddfb9f55-b2n2w" Dec 08 17:43:48 crc kubenswrapper[5116]: I1208 17:43:48.416554 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/4f7ef3d6-0bc3-4566-8735-c4a2389d4c84-node-pullsecrets\") pod \"apiserver-9ddfb9f55-b2n2w\" (UID: \"4f7ef3d6-0bc3-4566-8735-c4a2389d4c84\") " pod="openshift-apiserver/apiserver-9ddfb9f55-b2n2w" Dec 08 17:43:48 crc kubenswrapper[5116]: I1208 17:43:48.413392 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/776da53b-d740-421c-a867-43239bb9ebc6-config\") pod \"openshift-controller-manager-operator-686468bdd5-zm79w\" (UID: \"776da53b-d740-421c-a867-43239bb9ebc6\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-zm79w" Dec 08 17:43:48 crc kubenswrapper[5116]: I1208 17:43:48.417864 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: 
\"kubernetes.io/configmap/e71f0431-13f2-46f2-8673-f26a5c9d0cf6-auth-proxy-config\") pod \"machine-approver-54c688565-8h884\" (UID: \"e71f0431-13f2-46f2-8673-f26a5c9d0cf6\") " pod="openshift-cluster-machine-approver/machine-approver-54c688565-8h884" Dec 08 17:43:48 crc kubenswrapper[5116]: I1208 17:43:48.418079 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/0267ec2e-8f60-4739-aae7-2a133c6f2809-etcd-serving-ca\") pod \"apiserver-8596bd845d-nlzmf\" (UID: \"0267ec2e-8f60-4739-aae7-2a133c6f2809\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-nlzmf" Dec 08 17:43:48 crc kubenswrapper[5116]: I1208 17:43:48.418588 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0267ec2e-8f60-4739-aae7-2a133c6f2809-trusted-ca-bundle\") pod \"apiserver-8596bd845d-nlzmf\" (UID: \"0267ec2e-8f60-4739-aae7-2a133c6f2809\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-nlzmf" Dec 08 17:43:48 crc kubenswrapper[5116]: I1208 17:43:48.418785 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/4f7ef3d6-0bc3-4566-8735-c4a2389d4c84-audit\") pod \"apiserver-9ddfb9f55-b2n2w\" (UID: \"4f7ef3d6-0bc3-4566-8735-c4a2389d4c84\") " pod="openshift-apiserver/apiserver-9ddfb9f55-b2n2w" Dec 08 17:43:48 crc kubenswrapper[5116]: I1208 17:43:48.419106 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/b8dbe374-c944-4fd4-bb80-8dc26c3e5d24-v4-0-config-system-router-certs\") pod \"oauth-openshift-66458b6674-qnwj9\" (UID: \"b8dbe374-c944-4fd4-bb80-8dc26c3e5d24\") " pod="openshift-authentication/oauth-openshift-66458b6674-qnwj9" Dec 08 17:43:48 crc kubenswrapper[5116]: I1208 17:43:48.419576 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume 
\"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/71d475ea-b97a-489a-8c80-1a30614dccb5-available-featuregates\") pod \"openshift-config-operator-5777786469-86sn8\" (UID: \"71d475ea-b97a-489a-8c80-1a30614dccb5\") " pod="openshift-config-operator/openshift-config-operator-5777786469-86sn8" Dec 08 17:43:48 crc kubenswrapper[5116]: I1208 17:43:48.419578 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/e71f0431-13f2-46f2-8673-f26a5c9d0cf6-machine-approver-tls\") pod \"machine-approver-54c688565-8h884\" (UID: \"e71f0431-13f2-46f2-8673-f26a5c9d0cf6\") " pod="openshift-cluster-machine-approver/machine-approver-54c688565-8h884" Dec 08 17:43:48 crc kubenswrapper[5116]: I1208 17:43:48.419884 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/b8dbe374-c944-4fd4-bb80-8dc26c3e5d24-audit-policies\") pod \"oauth-openshift-66458b6674-qnwj9\" (UID: \"b8dbe374-c944-4fd4-bb80-8dc26c3e5d24\") " pod="openshift-authentication/oauth-openshift-66458b6674-qnwj9" Dec 08 17:43:48 crc kubenswrapper[5116]: I1208 17:43:48.420027 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3a30015c-60d9-4474-8417-731fd67ea187-config\") pod \"machine-api-operator-755bb95488-dx5gf\" (UID: \"3a30015c-60d9-4474-8417-731fd67ea187\") " pod="openshift-machine-api/machine-api-operator-755bb95488-dx5gf" Dec 08 17:43:48 crc kubenswrapper[5116]: I1208 17:43:48.420082 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/4fd57cd5-0d45-4b70-87a2-0c73d9c9b61e-proxy-ca-bundles\") pod \"controller-manager-65b6cccf98-99grc\" (UID: \"4fd57cd5-0d45-4b70-87a2-0c73d9c9b61e\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-99grc" Dec 08 17:43:48 crc kubenswrapper[5116]: I1208 
17:43:48.420300 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a1c39b6e-d778-4a0e-aa34-5a4765f9c4fc-config\") pod \"authentication-operator-7f5c659b84-x825q\" (UID: \"a1c39b6e-d778-4a0e-aa34-5a4765f9c4fc\") " pod="openshift-authentication-operator/authentication-operator-7f5c659b84-x825q" Dec 08 17:43:48 crc kubenswrapper[5116]: I1208 17:43:48.420521 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/0267ec2e-8f60-4739-aae7-2a133c6f2809-audit-policies\") pod \"apiserver-8596bd845d-nlzmf\" (UID: \"0267ec2e-8f60-4739-aae7-2a133c6f2809\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-nlzmf" Dec 08 17:43:48 crc kubenswrapper[5116]: I1208 17:43:48.420572 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/b8dbe374-c944-4fd4-bb80-8dc26c3e5d24-audit-dir\") pod \"oauth-openshift-66458b6674-qnwj9\" (UID: \"b8dbe374-c944-4fd4-bb80-8dc26c3e5d24\") " pod="openshift-authentication/oauth-openshift-66458b6674-qnwj9" Dec 08 17:43:48 crc kubenswrapper[5116]: I1208 17:43:48.420792 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4f7ef3d6-0bc3-4566-8735-c4a2389d4c84-config\") pod \"apiserver-9ddfb9f55-b2n2w\" (UID: \"4f7ef3d6-0bc3-4566-8735-c4a2389d4c84\") " pod="openshift-apiserver/apiserver-9ddfb9f55-b2n2w" Dec 08 17:43:48 crc kubenswrapper[5116]: I1208 17:43:48.420796 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/3a30015c-60d9-4474-8417-731fd67ea187-images\") pod \"machine-api-operator-755bb95488-dx5gf\" (UID: \"3a30015c-60d9-4474-8417-731fd67ea187\") " pod="openshift-machine-api/machine-api-operator-755bb95488-dx5gf" Dec 08 17:43:48 crc kubenswrapper[5116]: I1208 17:43:48.421087 5116 
operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/4fd57cd5-0d45-4b70-87a2-0c73d9c9b61e-tmp\") pod \"controller-manager-65b6cccf98-99grc\" (UID: \"4fd57cd5-0d45-4b70-87a2-0c73d9c9b61e\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-99grc" Dec 08 17:43:48 crc kubenswrapper[5116]: I1208 17:43:48.421358 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-b8ql7\" (UniqueName: \"kubernetes.io/projected/e71f0431-13f2-46f2-8673-f26a5c9d0cf6-kube-api-access-b8ql7\") pod \"machine-approver-54c688565-8h884\" (UID: \"e71f0431-13f2-46f2-8673-f26a5c9d0cf6\") " pod="openshift-cluster-machine-approver/machine-approver-54c688565-8h884" Dec 08 17:43:48 crc kubenswrapper[5116]: I1208 17:43:48.421476 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/b8dbe374-c944-4fd4-bb80-8dc26c3e5d24-v4-0-config-system-session\") pod \"oauth-openshift-66458b6674-qnwj9\" (UID: \"b8dbe374-c944-4fd4-bb80-8dc26c3e5d24\") " pod="openshift-authentication/oauth-openshift-66458b6674-qnwj9" Dec 08 17:43:48 crc kubenswrapper[5116]: I1208 17:43:48.421524 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/4fd57cd5-0d45-4b70-87a2-0c73d9c9b61e-client-ca\") pod \"controller-manager-65b6cccf98-99grc\" (UID: \"4fd57cd5-0d45-4b70-87a2-0c73d9c9b61e\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-99grc" Dec 08 17:43:48 crc kubenswrapper[5116]: I1208 17:43:48.421632 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/b8dbe374-c944-4fd4-bb80-8dc26c3e5d24-v4-0-config-system-cliconfig\") pod \"oauth-openshift-66458b6674-qnwj9\" (UID: \"b8dbe374-c944-4fd4-bb80-8dc26c3e5d24\") " 
pod="openshift-authentication/oauth-openshift-66458b6674-qnwj9" Dec 08 17:43:48 crc kubenswrapper[5116]: I1208 17:43:48.421790 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/b8dbe374-c944-4fd4-bb80-8dc26c3e5d24-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-66458b6674-qnwj9\" (UID: \"b8dbe374-c944-4fd4-bb80-8dc26c3e5d24\") " pod="openshift-authentication/oauth-openshift-66458b6674-qnwj9" Dec 08 17:43:48 crc kubenswrapper[5116]: I1208 17:43:48.421540 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/eaf2ae84-8492-41c0-b678-ab302371258a-service-ca\") pod \"console-64d44f6ddf-l4b2c\" (UID: \"eaf2ae84-8492-41c0-b678-ab302371258a\") " pod="openshift-console/console-64d44f6ddf-l4b2c" Dec 08 17:43:48 crc kubenswrapper[5116]: I1208 17:43:48.422870 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a1c39b6e-d778-4a0e-aa34-5a4765f9c4fc-serving-cert\") pod \"authentication-operator-7f5c659b84-x825q\" (UID: \"a1c39b6e-d778-4a0e-aa34-5a4765f9c4fc\") " pod="openshift-authentication-operator/authentication-operator-7f5c659b84-x825q" Dec 08 17:43:48 crc kubenswrapper[5116]: I1208 17:43:48.423137 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lpwsr\" (UniqueName: \"kubernetes.io/projected/94ce1afb-999a-45b5-847a-b3a71aa87c89-kube-api-access-lpwsr\") pod \"dns-operator-799b87ffcd-wf75r\" (UID: \"94ce1afb-999a-45b5-847a-b3a71aa87c89\") " pod="openshift-dns-operator/dns-operator-799b87ffcd-wf75r" Dec 08 17:43:48 crc kubenswrapper[5116]: I1208 17:43:48.423154 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/0267ec2e-8f60-4739-aae7-2a133c6f2809-serving-cert\") pod \"apiserver-8596bd845d-nlzmf\" (UID: \"0267ec2e-8f60-4739-aae7-2a133c6f2809\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-nlzmf" Dec 08 17:43:48 crc kubenswrapper[5116]: I1208 17:43:48.423189 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/71d475ea-b97a-489a-8c80-1a30614dccb5-serving-cert\") pod \"openshift-config-operator-5777786469-86sn8\" (UID: \"71d475ea-b97a-489a-8c80-1a30614dccb5\") " pod="openshift-config-operator/openshift-config-operator-5777786469-86sn8" Dec 08 17:43:48 crc kubenswrapper[5116]: I1208 17:43:48.423437 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-etcd-operator\"/\"etcd-operator-config\"" Dec 08 17:43:48 crc kubenswrapper[5116]: I1208 17:43:48.424682 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e71f0431-13f2-46f2-8673-f26a5c9d0cf6-config\") pod \"machine-approver-54c688565-8h884\" (UID: \"e71f0431-13f2-46f2-8673-f26a5c9d0cf6\") " pod="openshift-cluster-machine-approver/machine-approver-54c688565-8h884" Dec 08 17:43:48 crc kubenswrapper[5116]: I1208 17:43:48.425213 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b8dbe374-c944-4fd4-bb80-8dc26c3e5d24-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-66458b6674-qnwj9\" (UID: \"b8dbe374-c944-4fd4-bb80-8dc26c3e5d24\") " pod="openshift-authentication/oauth-openshift-66458b6674-qnwj9" Dec 08 17:43:48 crc kubenswrapper[5116]: I1208 17:43:48.425346 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fskx6\" (UniqueName: \"kubernetes.io/projected/eaf2ae84-8492-41c0-b678-ab302371258a-kube-api-access-fskx6\") pod 
\"console-64d44f6ddf-l4b2c\" (UID: \"eaf2ae84-8492-41c0-b678-ab302371258a\") " pod="openshift-console/console-64d44f6ddf-l4b2c" Dec 08 17:43:48 crc kubenswrapper[5116]: I1208 17:43:48.425391 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/d3dc86ab-217d-4d86-8381-16465ee204c8-proxy-tls\") pod \"machine-config-controller-f9cdd68f7-pp68b\" (UID: \"d3dc86ab-217d-4d86-8381-16465ee204c8\") " pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-pp68b" Dec 08 17:43:48 crc kubenswrapper[5116]: I1208 17:43:48.425425 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/5471dfd3-e36e-405a-a517-2c1e2bc10e62-marketplace-trusted-ca\") pod \"marketplace-operator-547dbd544d-w2582\" (UID: \"5471dfd3-e36e-405a-a517-2c1e2bc10e62\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-w2582" Dec 08 17:43:48 crc kubenswrapper[5116]: I1208 17:43:48.425450 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/15fa3f1d-6230-4602-a46a-1f9b84a147fa-config\") pod \"kube-apiserver-operator-575994946d-57tcr\" (UID: \"15fa3f1d-6230-4602-a46a-1f9b84a147fa\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-57tcr" Dec 08 17:43:48 crc kubenswrapper[5116]: I1208 17:43:48.425518 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a1c39b6e-d778-4a0e-aa34-5a4765f9c4fc-service-ca-bundle\") pod \"authentication-operator-7f5c659b84-x825q\" (UID: \"a1c39b6e-d778-4a0e-aa34-5a4765f9c4fc\") " pod="openshift-authentication-operator/authentication-operator-7f5c659b84-x825q" Dec 08 17:43:48 crc kubenswrapper[5116]: I1208 17:43:48.425549 5116 
reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/b8dbe374-c944-4fd4-bb80-8dc26c3e5d24-v4-0-config-system-service-ca\") pod \"oauth-openshift-66458b6674-qnwj9\" (UID: \"b8dbe374-c944-4fd4-bb80-8dc26c3e5d24\") " pod="openshift-authentication/oauth-openshift-66458b6674-qnwj9" Dec 08 17:43:48 crc kubenswrapper[5116]: I1208 17:43:48.425589 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-d4xv8\" (UniqueName: \"kubernetes.io/projected/a1c39b6e-d778-4a0e-aa34-5a4765f9c4fc-kube-api-access-d4xv8\") pod \"authentication-operator-7f5c659b84-x825q\" (UID: \"a1c39b6e-d778-4a0e-aa34-5a4765f9c4fc\") " pod="openshift-authentication-operator/authentication-operator-7f5c659b84-x825q" Dec 08 17:43:48 crc kubenswrapper[5116]: I1208 17:43:48.425620 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a4183c4d-f709-4d5b-a9a4-180284f37cc8-serving-cert\") pod \"route-controller-manager-776cdc94d6-mv9qd\" (UID: \"a4183c4d-f709-4d5b-a9a4-180284f37cc8\") " pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-mv9qd" Dec 08 17:43:48 crc kubenswrapper[5116]: I1208 17:43:48.425729 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/15fa3f1d-6230-4602-a46a-1f9b84a147fa-serving-cert\") pod \"kube-apiserver-operator-575994946d-57tcr\" (UID: \"15fa3f1d-6230-4602-a46a-1f9b84a147fa\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-57tcr" Dec 08 17:43:48 crc kubenswrapper[5116]: I1208 17:43:48.425803 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/0267ec2e-8f60-4739-aae7-2a133c6f2809-etcd-client\") pod 
\"apiserver-8596bd845d-nlzmf\" (UID: \"0267ec2e-8f60-4739-aae7-2a133c6f2809\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-nlzmf" Dec 08 17:43:48 crc kubenswrapper[5116]: I1208 17:43:48.425903 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/0267ec2e-8f60-4739-aae7-2a133c6f2809-audit-dir\") pod \"apiserver-8596bd845d-nlzmf\" (UID: \"0267ec2e-8f60-4739-aae7-2a133c6f2809\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-nlzmf" Dec 08 17:43:48 crc kubenswrapper[5116]: I1208 17:43:48.425947 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/4f7ef3d6-0bc3-4566-8735-c4a2389d4c84-image-import-ca\") pod \"apiserver-9ddfb9f55-b2n2w\" (UID: \"4f7ef3d6-0bc3-4566-8735-c4a2389d4c84\") " pod="openshift-apiserver/apiserver-9ddfb9f55-b2n2w" Dec 08 17:43:48 crc kubenswrapper[5116]: I1208 17:43:48.425980 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/eaf2ae84-8492-41c0-b678-ab302371258a-console-serving-cert\") pod \"console-64d44f6ddf-l4b2c\" (UID: \"eaf2ae84-8492-41c0-b678-ab302371258a\") " pod="openshift-console/console-64d44f6ddf-l4b2c" Dec 08 17:43:48 crc kubenswrapper[5116]: I1208 17:43:48.426009 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/d3dc86ab-217d-4d86-8381-16465ee204c8-mcc-auth-proxy-config\") pod \"machine-config-controller-f9cdd68f7-pp68b\" (UID: \"d3dc86ab-217d-4d86-8381-16465ee204c8\") " pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-pp68b" Dec 08 17:43:48 crc kubenswrapper[5116]: I1208 17:43:48.426078 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" 
(UniqueName: \"kubernetes.io/secret/b6973f5d-d174-4643-814d-e929acd898ba-metrics-tls\") pod \"ingress-operator-6b9cb4dbcf-zn9l4\" (UID: \"b6973f5d-d174-4643-814d-e929acd898ba\") " pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-zn9l4" Dec 08 17:43:48 crc kubenswrapper[5116]: I1208 17:43:48.426132 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4fd57cd5-0d45-4b70-87a2-0c73d9c9b61e-serving-cert\") pod \"controller-manager-65b6cccf98-99grc\" (UID: \"4fd57cd5-0d45-4b70-87a2-0c73d9c9b61e\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-99grc" Dec 08 17:43:48 crc kubenswrapper[5116]: I1208 17:43:48.426711 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4f7ef3d6-0bc3-4566-8735-c4a2389d4c84-trusted-ca-bundle\") pod \"apiserver-9ddfb9f55-b2n2w\" (UID: \"4f7ef3d6-0bc3-4566-8735-c4a2389d4c84\") " pod="openshift-apiserver/apiserver-9ddfb9f55-b2n2w" Dec 08 17:43:48 crc kubenswrapper[5116]: I1208 17:43:48.427621 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/0267ec2e-8f60-4739-aae7-2a133c6f2809-audit-dir\") pod \"apiserver-8596bd845d-nlzmf\" (UID: \"0267ec2e-8f60-4739-aae7-2a133c6f2809\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-nlzmf" Dec 08 17:43:48 crc kubenswrapper[5116]: I1208 17:43:48.427864 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b8dbe374-c944-4fd4-bb80-8dc26c3e5d24-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-66458b6674-qnwj9\" (UID: \"b8dbe374-c944-4fd4-bb80-8dc26c3e5d24\") " pod="openshift-authentication/oauth-openshift-66458b6674-qnwj9" Dec 08 17:43:48 crc kubenswrapper[5116]: I1208 17:43:48.428768 5116 operation_generator.go:615] "MountVolume.SetUp 
succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/b8dbe374-c944-4fd4-bb80-8dc26c3e5d24-v4-0-config-system-service-ca\") pod \"oauth-openshift-66458b6674-qnwj9\" (UID: \"b8dbe374-c944-4fd4-bb80-8dc26c3e5d24\") " pod="openshift-authentication/oauth-openshift-66458b6674-qnwj9" Dec 08 17:43:48 crc kubenswrapper[5116]: I1208 17:43:48.430991 5116 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator/migrator-866fcbc849-gk2ww"] Dec 08 17:43:48 crc kubenswrapper[5116]: I1208 17:43:48.431055 5116 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-n9kbg" Dec 08 17:43:48 crc kubenswrapper[5116]: I1208 17:43:48.431235 5116 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-hzpcx" Dec 08 17:43:48 crc kubenswrapper[5116]: I1208 17:43:48.433283 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/71d475ea-b97a-489a-8c80-1a30614dccb5-serving-cert\") pod \"openshift-config-operator-5777786469-86sn8\" (UID: \"71d475ea-b97a-489a-8c80-1a30614dccb5\") " pod="openshift-config-operator/openshift-config-operator-5777786469-86sn8" Dec 08 17:43:48 crc kubenswrapper[5116]: I1208 17:43:48.434176 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4fd57cd5-0d45-4b70-87a2-0c73d9c9b61e-config\") pod \"controller-manager-65b6cccf98-99grc\" (UID: \"4fd57cd5-0d45-4b70-87a2-0c73d9c9b61e\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-99grc" Dec 08 17:43:48 crc kubenswrapper[5116]: I1208 17:43:48.434649 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: 
\"kubernetes.io/secret/b8dbe374-c944-4fd4-bb80-8dc26c3e5d24-v4-0-config-user-template-error\") pod \"oauth-openshift-66458b6674-qnwj9\" (UID: \"b8dbe374-c944-4fd4-bb80-8dc26c3e5d24\") " pod="openshift-authentication/oauth-openshift-66458b6674-qnwj9"
Dec 08 17:43:48 crc kubenswrapper[5116]: I1208 17:43:48.438823 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4f7ef3d6-0bc3-4566-8735-c4a2389d4c84-serving-cert\") pod \"apiserver-9ddfb9f55-b2n2w\" (UID: \"4f7ef3d6-0bc3-4566-8735-c4a2389d4c84\") " pod="openshift-apiserver/apiserver-9ddfb9f55-b2n2w"
Dec 08 17:43:48 crc kubenswrapper[5116]: I1208 17:43:48.439292 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/0267ec2e-8f60-4739-aae7-2a133c6f2809-etcd-client\") pod \"apiserver-8596bd845d-nlzmf\" (UID: \"0267ec2e-8f60-4739-aae7-2a133c6f2809\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-nlzmf"
Dec 08 17:43:48 crc kubenswrapper[5116]: I1208 17:43:48.439819 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4fd57cd5-0d45-4b70-87a2-0c73d9c9b61e-serving-cert\") pod \"controller-manager-65b6cccf98-99grc\" (UID: \"4fd57cd5-0d45-4b70-87a2-0c73d9c9b61e\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-99grc"
Dec 08 17:43:48 crc kubenswrapper[5116]: I1208 17:43:48.440063 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a1c39b6e-d778-4a0e-aa34-5a4765f9c4fc-service-ca-bundle\") pod \"authentication-operator-7f5c659b84-x825q\" (UID: \"a1c39b6e-d778-4a0e-aa34-5a4765f9c4fc\") " pod="openshift-authentication-operator/authentication-operator-7f5c659b84-x825q"
Dec 08 17:43:48 crc kubenswrapper[5116]: I1208 17:43:48.440257 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/b8dbe374-c944-4fd4-bb80-8dc26c3e5d24-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-66458b6674-qnwj9\" (UID: \"b8dbe374-c944-4fd4-bb80-8dc26c3e5d24\") " pod="openshift-authentication/oauth-openshift-66458b6674-qnwj9"
Dec 08 17:43:48 crc kubenswrapper[5116]: I1208 17:43:48.440484 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/79b65775-2e2c-4bad-bf4b-b8c4893e6463-samples-operator-tls\") pod \"cluster-samples-operator-6b564684c8-6rbxq\" (UID: \"79b65775-2e2c-4bad-bf4b-b8c4893e6463\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-6rbxq"
Dec 08 17:43:48 crc kubenswrapper[5116]: I1208 17:43:48.440922 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/4f7ef3d6-0bc3-4566-8735-c4a2389d4c84-etcd-client\") pod \"apiserver-9ddfb9f55-b2n2w\" (UID: \"4f7ef3d6-0bc3-4566-8735-c4a2389d4c84\") " pod="openshift-apiserver/apiserver-9ddfb9f55-b2n2w"
Dec 08 17:43:48 crc kubenswrapper[5116]: I1208 17:43:48.441143 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/b8dbe374-c944-4fd4-bb80-8dc26c3e5d24-v4-0-config-system-serving-cert\") pod \"oauth-openshift-66458b6674-qnwj9\" (UID: \"b8dbe374-c944-4fd4-bb80-8dc26c3e5d24\") " pod="openshift-authentication/oauth-openshift-66458b6674-qnwj9"
Dec 08 17:43:48 crc kubenswrapper[5116]: I1208 17:43:48.441701 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/3a30015c-60d9-4474-8417-731fd67ea187-machine-api-operator-tls\") pod \"machine-api-operator-755bb95488-dx5gf\" (UID: \"3a30015c-60d9-4474-8417-731fd67ea187\") " pod="openshift-machine-api/machine-api-operator-755bb95488-dx5gf"
Dec 08 17:43:48 crc kubenswrapper[5116]: I1208 17:43:48.443545 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager-operator\"/\"openshift-service-ca.crt\""
Dec 08 17:43:48 crc kubenswrapper[5116]: I1208 17:43:48.443952 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/4f7ef3d6-0bc3-4566-8735-c4a2389d4c84-encryption-config\") pod \"apiserver-9ddfb9f55-b2n2w\" (UID: \"4f7ef3d6-0bc3-4566-8735-c4a2389d4c84\") " pod="openshift-apiserver/apiserver-9ddfb9f55-b2n2w"
Dec 08 17:43:48 crc kubenswrapper[5116]: I1208 17:43:48.444404 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/b8dbe374-c944-4fd4-bb80-8dc26c3e5d24-v4-0-config-user-template-login\") pod \"oauth-openshift-66458b6674-qnwj9\" (UID: \"b8dbe374-c944-4fd4-bb80-8dc26c3e5d24\") " pod="openshift-authentication/oauth-openshift-66458b6674-qnwj9"
Dec 08 17:43:48 crc kubenswrapper[5116]: I1208 17:43:48.445395 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/b8dbe374-c944-4fd4-bb80-8dc26c3e5d24-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-66458b6674-qnwj9\" (UID: \"b8dbe374-c944-4fd4-bb80-8dc26c3e5d24\") " pod="openshift-authentication/oauth-openshift-66458b6674-qnwj9"
Dec 08 17:43:48 crc kubenswrapper[5116]: I1208 17:43:48.445893 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/0267ec2e-8f60-4739-aae7-2a133c6f2809-encryption-config\") pod \"apiserver-8596bd845d-nlzmf\" (UID: \"0267ec2e-8f60-4739-aae7-2a133c6f2809\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-nlzmf"
Dec 08 17:43:48 crc kubenswrapper[5116]: I1208 17:43:48.447877 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/b8dbe374-c944-4fd4-bb80-8dc26c3e5d24-v4-0-config-system-session\") pod \"oauth-openshift-66458b6674-qnwj9\" (UID: \"b8dbe374-c944-4fd4-bb80-8dc26c3e5d24\") " pod="openshift-authentication/oauth-openshift-66458b6674-qnwj9"
Dec 08 17:43:48 crc kubenswrapper[5116]: I1208 17:43:48.448128 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/4f7ef3d6-0bc3-4566-8735-c4a2389d4c84-image-import-ca\") pod \"apiserver-9ddfb9f55-b2n2w\" (UID: \"4f7ef3d6-0bc3-4566-8735-c4a2389d4c84\") " pod="openshift-apiserver/apiserver-9ddfb9f55-b2n2w"
Dec 08 17:43:48 crc kubenswrapper[5116]: I1208 17:43:48.453048 5116 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-w5bp9"]
Dec 08 17:43:48 crc kubenswrapper[5116]: I1208 17:43:48.453313 5116 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-866fcbc849-gk2ww"
Dec 08 17:43:48 crc kubenswrapper[5116]: I1208 17:43:48.461379 5116 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-ingress-canary/ingress-canary-c642w"]
Dec 08 17:43:48 crc kubenswrapper[5116]: I1208 17:43:48.465444 5116 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-sqm4p"]
Dec 08 17:43:48 crc kubenswrapper[5116]: I1208 17:43:48.465700 5116 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-w5bp9"
Dec 08 17:43:48 crc kubenswrapper[5116]: I1208 17:43:48.465867 5116 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-c642w"
Dec 08 17:43:48 crc kubenswrapper[5116]: I1208 17:43:48.468811 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-etcd-operator\"/\"etcd-ca-bundle\""
Dec 08 17:43:48 crc kubenswrapper[5116]: I1208 17:43:48.471195 5116 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-service-ca-operator/service-ca-operator-5b9c976747-vcg8j"]
Dec 08 17:43:48 crc kubenswrapper[5116]: I1208 17:43:48.472363 5116 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-sqm4p"
Dec 08 17:43:48 crc kubenswrapper[5116]: I1208 17:43:48.477486 5116 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-6s8th"]
Dec 08 17:43:48 crc kubenswrapper[5116]: I1208 17:43:48.477876 5116 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-5b9c976747-vcg8j"
Dec 08 17:43:48 crc kubenswrapper[5116]: I1208 17:43:48.480616 5116 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-67c89758df-wbzsx"]
Dec 08 17:43:48 crc kubenswrapper[5116]: I1208 17:43:48.480645 5116 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-hjdgl"]
Dec 08 17:43:48 crc kubenswrapper[5116]: I1208 17:43:48.480656 5116 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-server-wp5hf"]
Dec 08 17:43:48 crc kubenswrapper[5116]: I1208 17:43:48.480755 5116 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-6s8th"
Dec 08 17:43:48 crc kubenswrapper[5116]: I1208 17:43:48.483541 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-controller-manager-operator\"/\"openshift-controller-manager-operator-serving-cert\""
Dec 08 17:43:48 crc kubenswrapper[5116]: I1208 17:43:48.483760 5116 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-6rbxq"]
Dec 08 17:43:48 crc kubenswrapper[5116]: I1208 17:43:48.483798 5116 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-755bb95488-dx5gf"]
Dec 08 17:43:48 crc kubenswrapper[5116]: I1208 17:43:48.483817 5116 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29420250-rrwn6"]
Dec 08 17:43:48 crc kubenswrapper[5116]: I1208 17:43:48.483937 5116 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-wp5hf"
Dec 08 17:43:48 crc kubenswrapper[5116]: I1208 17:43:48.486731 5116 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-service-ca/service-ca-74545575db-g68gs"]
Dec 08 17:43:48 crc kubenswrapper[5116]: I1208 17:43:48.486888 5116 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29420250-rrwn6"
Dec 08 17:43:48 crc kubenswrapper[5116]: I1208 17:43:48.491236 5116 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-75ffdb6fcd-jcz8f"]
Dec 08 17:43:48 crc kubenswrapper[5116]: I1208 17:43:48.491386 5116 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-74545575db-g68gs"
Dec 08 17:43:48 crc kubenswrapper[5116]: I1208 17:43:48.494070 5116 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-6dz8n"]
Dec 08 17:43:48 crc kubenswrapper[5116]: I1208 17:43:48.494098 5116 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-86c45576b9-llz2b"]
Dec 08 17:43:48 crc kubenswrapper[5116]: I1208 17:43:48.494108 5116 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-747b44746d-4msk8"]
Dec 08 17:43:48 crc kubenswrapper[5116]: I1208 17:43:48.494116 5116 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-hzpcx"]
Dec 08 17:43:48 crc kubenswrapper[5116]: I1208 17:43:48.494125 5116 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-75ffdb6fcd-jcz8f"]
Dec 08 17:43:48 crc kubenswrapper[5116]: I1208 17:43:48.494134 5116 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-64d44f6ddf-l4b2c"]
Dec 08 17:43:48 crc kubenswrapper[5116]: I1208 17:43:48.494143 5116 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-66458b6674-qnwj9"]
Dec 08 17:43:48 crc kubenswrapper[5116]: I1208 17:43:48.494152 5116 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-44l68"]
Dec 08 17:43:48 crc kubenswrapper[5116]: I1208 17:43:48.494163 5116 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-n9kbg"]
Dec 08 17:43:48 crc kubenswrapper[5116]: I1208 17:43:48.494172 5116 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-69b85846b6-9xp42"]
Dec 08 17:43:48 crc kubenswrapper[5116]: I1208 17:43:48.494183 5116 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-547dbd544d-w2582"]
Dec 08 17:43:48 crc kubenswrapper[5116]: I1208 17:43:48.494201 5116 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66587d64c8-kt94l"]
Dec 08 17:43:48 crc kubenswrapper[5116]: I1208 17:43:48.494231 5116 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-75ffdb6fcd-jcz8f"
Dec 08 17:43:48 crc kubenswrapper[5116]: I1208 17:43:48.494294 5116 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-799b87ffcd-wf75r"]
Dec 08 17:43:48 crc kubenswrapper[5116]: I1208 17:43:48.494313 5116 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-zm79w"]
Dec 08 17:43:48 crc kubenswrapper[5116]: I1208 17:43:48.494370 5116 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-sqm4p"]
Dec 08 17:43:48 crc kubenswrapper[5116]: I1208 17:43:48.494398 5116 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-w5bp9"]
Dec 08 17:43:48 crc kubenswrapper[5116]: I1208 17:43:48.494411 5116 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-f9cdd68f7-pp68b"]
Dec 08 17:43:48 crc kubenswrapper[5116]: I1208 17:43:48.494425 5116 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-866fcbc849-gk2ww"]
Dec 08 17:43:48 crc kubenswrapper[5116]: I1208 17:43:48.494437 5116 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-67c9d58cbb-lnhtb"]
Dec 08 17:43:48 crc kubenswrapper[5116]: I1208 17:43:48.494450 5116 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-6b9cb4dbcf-zn9l4"]
Dec 08 17:43:48 crc kubenswrapper[5116]: I1208 17:43:48.494463 5116 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-kdrb8"]
Dec 08 17:43:48 crc kubenswrapper[5116]: I1208 17:43:48.499500 5116 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-dns/dns-default-tlxxd"]
Dec 08 17:43:48 crc kubenswrapper[5116]: I1208 17:43:48.499759 5116 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-kdrb8"
Dec 08 17:43:48 crc kubenswrapper[5116]: I1208 17:43:48.504401 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-etcd-operator\"/\"etcd-service-ca-bundle\""
Dec 08 17:43:48 crc kubenswrapper[5116]: I1208 17:43:48.504849 5116 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-c642w"]
Dec 08 17:43:48 crc kubenswrapper[5116]: I1208 17:43:48.504888 5116 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-tlxxd"]
Dec 08 17:43:48 crc kubenswrapper[5116]: I1208 17:43:48.504901 5116 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-57tcr"]
Dec 08 17:43:48 crc kubenswrapper[5116]: I1208 17:43:48.504916 5116 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-5b9c976747-vcg8j"]
Dec 08 17:43:48 crc kubenswrapper[5116]: I1208 17:43:48.504927 5116 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-6s8th"]
Dec 08 17:43:48 crc kubenswrapper[5116]: I1208 17:43:48.504945 5116 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-69db94689b-9bt88"]
Dec 08 17:43:48 crc kubenswrapper[5116]: I1208 17:43:48.504958 5116 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-kdrb8"]
Dec 08 17:43:48 crc kubenswrapper[5116]: I1208 17:43:48.504976 5116 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-74545575db-g68gs"]
Dec 08 17:43:48 crc kubenswrapper[5116]: I1208 17:43:48.504988 5116 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29420250-rrwn6"]
Dec 08 17:43:48 crc kubenswrapper[5116]: I1208 17:43:48.505035 5116 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-tlxxd"
Dec 08 17:43:48 crc kubenswrapper[5116]: I1208 17:43:48.523938 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-controller-manager-operator\"/\"openshift-controller-manager-operator-dockercfg-jcmfj\""
Dec 08 17:43:48 crc kubenswrapper[5116]: I1208 17:43:48.527110 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/776da53b-d740-421c-a867-43239bb9ebc6-config\") pod \"openshift-controller-manager-operator-686468bdd5-zm79w\" (UID: \"776da53b-d740-421c-a867-43239bb9ebc6\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-zm79w"
Dec 08 17:43:48 crc kubenswrapper[5116]: I1208 17:43:48.527158 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/eaf2ae84-8492-41c0-b678-ab302371258a-service-ca\") pod \"console-64d44f6ddf-l4b2c\" (UID: \"eaf2ae84-8492-41c0-b678-ab302371258a\") " pod="openshift-console/console-64d44f6ddf-l4b2c"
Dec 08 17:43:48 crc kubenswrapper[5116]: I1208 17:43:48.527220 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-lpwsr\" (UniqueName: \"kubernetes.io/projected/94ce1afb-999a-45b5-847a-b3a71aa87c89-kube-api-access-lpwsr\") pod \"dns-operator-799b87ffcd-wf75r\" (UID: \"94ce1afb-999a-45b5-847a-b3a71aa87c89\") " pod="openshift-dns-operator/dns-operator-799b87ffcd-wf75r"
Dec 08 17:43:48 crc kubenswrapper[5116]: I1208 17:43:48.527263 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-fskx6\" (UniqueName: \"kubernetes.io/projected/eaf2ae84-8492-41c0-b678-ab302371258a-kube-api-access-fskx6\") pod \"console-64d44f6ddf-l4b2c\" (UID: \"eaf2ae84-8492-41c0-b678-ab302371258a\") " pod="openshift-console/console-64d44f6ddf-l4b2c"
Dec 08 17:43:48 crc kubenswrapper[5116]: I1208 17:43:48.527349 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/d3dc86ab-217d-4d86-8381-16465ee204c8-proxy-tls\") pod \"machine-config-controller-f9cdd68f7-pp68b\" (UID: \"d3dc86ab-217d-4d86-8381-16465ee204c8\") " pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-pp68b"
Dec 08 17:43:48 crc kubenswrapper[5116]: I1208 17:43:48.527404 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/5471dfd3-e36e-405a-a517-2c1e2bc10e62-marketplace-trusted-ca\") pod \"marketplace-operator-547dbd544d-w2582\" (UID: \"5471dfd3-e36e-405a-a517-2c1e2bc10e62\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-w2582"
Dec 08 17:43:48 crc kubenswrapper[5116]: I1208 17:43:48.527436 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/15fa3f1d-6230-4602-a46a-1f9b84a147fa-config\") pod \"kube-apiserver-operator-575994946d-57tcr\" (UID: \"15fa3f1d-6230-4602-a46a-1f9b84a147fa\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-57tcr"
Dec 08 17:43:48 crc kubenswrapper[5116]: I1208 17:43:48.527480 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a4183c4d-f709-4d5b-a9a4-180284f37cc8-serving-cert\") pod \"route-controller-manager-776cdc94d6-mv9qd\" (UID: \"a4183c4d-f709-4d5b-a9a4-180284f37cc8\") " pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-mv9qd"
Dec 08 17:43:48 crc kubenswrapper[5116]: I1208 17:43:48.527505 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/15fa3f1d-6230-4602-a46a-1f9b84a147fa-serving-cert\") pod \"kube-apiserver-operator-575994946d-57tcr\" (UID: \"15fa3f1d-6230-4602-a46a-1f9b84a147fa\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-57tcr"
Dec 08 17:43:48 crc kubenswrapper[5116]: I1208 17:43:48.527545 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/eaf2ae84-8492-41c0-b678-ab302371258a-console-serving-cert\") pod \"console-64d44f6ddf-l4b2c\" (UID: \"eaf2ae84-8492-41c0-b678-ab302371258a\") " pod="openshift-console/console-64d44f6ddf-l4b2c"
Dec 08 17:43:48 crc kubenswrapper[5116]: I1208 17:43:48.527570 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/d3dc86ab-217d-4d86-8381-16465ee204c8-mcc-auth-proxy-config\") pod \"machine-config-controller-f9cdd68f7-pp68b\" (UID: \"d3dc86ab-217d-4d86-8381-16465ee204c8\") " pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-pp68b"
Dec 08 17:43:48 crc kubenswrapper[5116]: I1208 17:43:48.527598 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/b6973f5d-d174-4643-814d-e929acd898ba-metrics-tls\") pod \"ingress-operator-6b9cb4dbcf-zn9l4\" (UID: \"b6973f5d-d174-4643-814d-e929acd898ba\") " pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-zn9l4"
Dec 08 17:43:48 crc kubenswrapper[5116]: I1208 17:43:48.527715 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-hrjlh\" (UniqueName: \"kubernetes.io/projected/d4f231ba-ace1-4242-a8cb-04e2904f95e9-kube-api-access-hrjlh\") pod \"console-operator-67c89758df-wbzsx\" (UID: \"d4f231ba-ace1-4242-a8cb-04e2904f95e9\") " pod="openshift-console-operator/console-operator-67c89758df-wbzsx"
Dec 08 17:43:48 crc kubenswrapper[5116]: I1208 17:43:48.527801 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/94ce1afb-999a-45b5-847a-b3a71aa87c89-metrics-tls\") pod \"dns-operator-799b87ffcd-wf75r\" (UID: \"94ce1afb-999a-45b5-847a-b3a71aa87c89\") " pod="openshift-dns-operator/dns-operator-799b87ffcd-wf75r"
Dec 08 17:43:48 crc kubenswrapper[5116]: I1208 17:43:48.527879 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/94ce1afb-999a-45b5-847a-b3a71aa87c89-tmp-dir\") pod \"dns-operator-799b87ffcd-wf75r\" (UID: \"94ce1afb-999a-45b5-847a-b3a71aa87c89\") " pod="openshift-dns-operator/dns-operator-799b87ffcd-wf75r"
Dec 08 17:43:48 crc kubenswrapper[5116]: I1208 17:43:48.527922 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/5b69f20b-5284-4c2e-b147-abcc5441c977-tmp\") pod \"cluster-image-registry-operator-86c45576b9-llz2b\" (UID: \"5b69f20b-5284-4c2e-b147-abcc5441c977\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-llz2b"
Dec 08 17:43:48 crc kubenswrapper[5116]: I1208 17:43:48.527950 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-hfsz6\" (UniqueName: \"kubernetes.io/projected/5b69f20b-5284-4c2e-b147-abcc5441c977-kube-api-access-hfsz6\") pod \"cluster-image-registry-operator-86c45576b9-llz2b\" (UID: \"5b69f20b-5284-4c2e-b147-abcc5441c977\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-llz2b"
Dec 08 17:43:48 crc kubenswrapper[5116]: I1208 17:43:48.527982 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/d73c8661-d51c-4d6e-a981-e186a3fc1964-cni-sysctl-allowlist\") pod \"cni-sysctl-allowlist-ds-qsgps\" (UID: \"d73c8661-d51c-4d6e-a981-e186a3fc1964\") " pod="openshift-multus/cni-sysctl-allowlist-ds-qsgps"
Dec 08 17:43:48 crc kubenswrapper[5116]: I1208 17:43:48.528013 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a4183c4d-f709-4d5b-a9a4-180284f37cc8-client-ca\") pod \"route-controller-manager-776cdc94d6-mv9qd\" (UID: \"a4183c4d-f709-4d5b-a9a4-180284f37cc8\") " pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-mv9qd"
Dec 08 17:43:48 crc kubenswrapper[5116]: I1208 17:43:48.528041 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/eaf2ae84-8492-41c0-b678-ab302371258a-console-config\") pod \"console-64d44f6ddf-l4b2c\" (UID: \"eaf2ae84-8492-41c0-b678-ab302371258a\") " pod="openshift-console/console-64d44f6ddf-l4b2c"
Dec 08 17:43:48 crc kubenswrapper[5116]: I1208 17:43:48.528062 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ca-trust-extracted-pem\" (UniqueName: \"kubernetes.io/empty-dir/5b69f20b-5284-4c2e-b147-abcc5441c977-ca-trust-extracted-pem\") pod \"cluster-image-registry-operator-86c45576b9-llz2b\" (UID: \"5b69f20b-5284-4c2e-b147-abcc5441c977\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-llz2b"
Dec 08 17:43:48 crc kubenswrapper[5116]: I1208 17:43:48.528122 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/d73c8661-d51c-4d6e-a981-e186a3fc1964-tuning-conf-dir\") pod \"cni-sysctl-allowlist-ds-qsgps\" (UID: \"d73c8661-d51c-4d6e-a981-e186a3fc1964\") " pod="openshift-multus/cni-sysctl-allowlist-ds-qsgps"
Dec 08 17:43:48 crc kubenswrapper[5116]: I1208 17:43:48.528148 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-d8hkm\" (UniqueName: \"kubernetes.io/projected/d73c8661-d51c-4d6e-a981-e186a3fc1964-kube-api-access-d8hkm\") pod \"cni-sysctl-allowlist-ds-qsgps\" (UID: \"d73c8661-d51c-4d6e-a981-e186a3fc1964\") " pod="openshift-multus/cni-sysctl-allowlist-ds-qsgps"
Dec 08 17:43:48 crc kubenswrapper[5116]: I1208 17:43:48.528332 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-cg2t7\" (UniqueName: \"kubernetes.io/projected/776da53b-d740-421c-a867-43239bb9ebc6-kube-api-access-cg2t7\") pod \"openshift-controller-manager-operator-686468bdd5-zm79w\" (UID: \"776da53b-d740-421c-a867-43239bb9ebc6\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-zm79w"
Dec 08 17:43:48 crc kubenswrapper[5116]: I1208 17:43:48.528410 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/94ce1afb-999a-45b5-847a-b3a71aa87c89-tmp-dir\") pod \"dns-operator-799b87ffcd-wf75r\" (UID: \"94ce1afb-999a-45b5-847a-b3a71aa87c89\") " pod="openshift-dns-operator/dns-operator-799b87ffcd-wf75r"
Dec 08 17:43:48 crc kubenswrapper[5116]: I1208 17:43:48.528616 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/d73c8661-d51c-4d6e-a981-e186a3fc1964-tuning-conf-dir\") pod \"cni-sysctl-allowlist-ds-qsgps\" (UID: \"d73c8661-d51c-4d6e-a981-e186a3fc1964\") " pod="openshift-multus/cni-sysctl-allowlist-ds-qsgps"
Dec 08 17:43:48 crc kubenswrapper[5116]: I1208 17:43:48.528627 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/5b69f20b-5284-4c2e-b147-abcc5441c977-tmp\") pod \"cluster-image-registry-operator-86c45576b9-llz2b\" (UID: \"5b69f20b-5284-4c2e-b147-abcc5441c977\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-llz2b"
Dec 08 17:43:48 crc kubenswrapper[5116]: I1208 17:43:48.528875 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/eaf2ae84-8492-41c0-b678-ab302371258a-service-ca\") pod \"console-64d44f6ddf-l4b2c\" (UID: \"eaf2ae84-8492-41c0-b678-ab302371258a\") " pod="openshift-console/console-64d44f6ddf-l4b2c"
Dec 08 17:43:48 crc kubenswrapper[5116]: I1208 17:43:48.528959 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/d3dc86ab-217d-4d86-8381-16465ee204c8-mcc-auth-proxy-config\") pod \"machine-config-controller-f9cdd68f7-pp68b\" (UID: \"d3dc86ab-217d-4d86-8381-16465ee204c8\") " pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-pp68b"
Dec 08 17:43:48 crc kubenswrapper[5116]: I1208 17:43:48.529018 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted-pem\" (UniqueName: \"kubernetes.io/empty-dir/5b69f20b-5284-4c2e-b147-abcc5441c977-ca-trust-extracted-pem\") pod \"cluster-image-registry-operator-86c45576b9-llz2b\" (UID: \"5b69f20b-5284-4c2e-b147-abcc5441c977\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-llz2b"
Dec 08 17:43:48 crc kubenswrapper[5116]: I1208 17:43:48.529238 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/eaf2ae84-8492-41c0-b678-ab302371258a-console-config\") pod \"console-64d44f6ddf-l4b2c\" (UID: \"eaf2ae84-8492-41c0-b678-ab302371258a\") " pod="openshift-console/console-64d44f6ddf-l4b2c"
Dec 08 17:43:48 crc kubenswrapper[5116]: I1208 17:43:48.529099 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/5b69f20b-5284-4c2e-b147-abcc5441c977-bound-sa-token\") pod \"cluster-image-registry-operator-86c45576b9-llz2b\" (UID: \"5b69f20b-5284-4c2e-b147-abcc5441c977\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-llz2b"
Dec 08 17:43:48 crc kubenswrapper[5116]: I1208 17:43:48.529404 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6973f5d-d174-4643-814d-e929acd898ba-trusted-ca\") pod \"ingress-operator-6b9cb4dbcf-zn9l4\" (UID: \"b6973f5d-d174-4643-814d-e929acd898ba\") " pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-zn9l4"
Dec 08 17:43:48 crc kubenswrapper[5116]: I1208 17:43:48.529436 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/b6973f5d-d174-4643-814d-e929acd898ba-bound-sa-token\") pod \"ingress-operator-6b9cb4dbcf-zn9l4\" (UID: \"b6973f5d-d174-4643-814d-e929acd898ba\") " pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-zn9l4"
Dec 08 17:43:48 crc kubenswrapper[5116]: I1208 17:43:48.529532 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a4183c4d-f709-4d5b-a9a4-180284f37cc8-client-ca\") pod \"route-controller-manager-776cdc94d6-mv9qd\" (UID: \"a4183c4d-f709-4d5b-a9a4-180284f37cc8\") " pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-mv9qd"
Dec 08 17:43:48 crc kubenswrapper[5116]: I1208 17:43:48.529546 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d4f231ba-ace1-4242-a8cb-04e2904f95e9-serving-cert\") pod \"console-operator-67c89758df-wbzsx\" (UID: \"d4f231ba-ace1-4242-a8cb-04e2904f95e9\") " pod="openshift-console-operator/console-operator-67c89758df-wbzsx"
Dec 08 17:43:48 crc kubenswrapper[5116]: I1208 17:43:48.529602 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a4183c4d-f709-4d5b-a9a4-180284f37cc8-config\") pod \"route-controller-manager-776cdc94d6-mv9qd\" (UID: \"a4183c4d-f709-4d5b-a9a4-180284f37cc8\") " pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-mv9qd"
Dec 08 17:43:48 crc kubenswrapper[5116]: I1208 17:43:48.529623 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/eaf2ae84-8492-41c0-b678-ab302371258a-trusted-ca-bundle\") pod \"console-64d44f6ddf-l4b2c\" (UID: \"eaf2ae84-8492-41c0-b678-ab302371258a\") " pod="openshift-console/console-64d44f6ddf-l4b2c"
Dec 08 17:43:48 crc kubenswrapper[5116]: I1208 17:43:48.529692 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/eaf2ae84-8492-41c0-b678-ab302371258a-console-oauth-config\") pod \"console-64d44f6ddf-l4b2c\" (UID: \"eaf2ae84-8492-41c0-b678-ab302371258a\") " pod="openshift-console/console-64d44f6ddf-l4b2c"
Dec 08 17:43:48 crc kubenswrapper[5116]: I1208 17:43:48.529798 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-bhpkw\" (UniqueName: \"kubernetes.io/projected/a4183c4d-f709-4d5b-a9a4-180284f37cc8-kube-api-access-bhpkw\") pod \"route-controller-manager-776cdc94d6-mv9qd\" (UID: \"a4183c4d-f709-4d5b-a9a4-180284f37cc8\") " pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-mv9qd"
Dec 08 17:43:48 crc kubenswrapper[5116]: I1208 17:43:48.529827 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-nhzmj\" (UniqueName: \"kubernetes.io/projected/c60dd3fb-226c-4117-a898-4efde2c99ca8-kube-api-access-nhzmj\") pod \"openshift-apiserver-operator-846cbfc458-hjdgl\" (UID: \"c60dd3fb-226c-4117-a898-4efde2c99ca8\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-hjdgl"
Dec 08 17:43:48 crc kubenswrapper[5116]: I1208 17:43:48.529847 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ready\" (UniqueName: \"kubernetes.io/empty-dir/d73c8661-d51c-4d6e-a981-e186a3fc1964-ready\") pod \"cni-sysctl-allowlist-ds-qsgps\" (UID: \"d73c8661-d51c-4d6e-a981-e186a3fc1964\") " pod="openshift-multus/cni-sysctl-allowlist-ds-qsgps"
Dec 08 17:43:48 crc kubenswrapper[5116]: I1208 17:43:48.529912 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/a4183c4d-f709-4d5b-a9a4-180284f37cc8-tmp\") pod \"route-controller-manager-776cdc94d6-mv9qd\" (UID: \"a4183c4d-f709-4d5b-a9a4-180284f37cc8\") " pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-mv9qd"
Dec 08 17:43:48 crc kubenswrapper[5116]: I1208 17:43:48.530014 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-ltrx4\" (UniqueName: \"kubernetes.io/projected/5471dfd3-e36e-405a-a517-2c1e2bc10e62-kube-api-access-ltrx4\") pod \"marketplace-operator-547dbd544d-w2582\" (UID: \"5471dfd3-e36e-405a-a517-2c1e2bc10e62\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-w2582"
Dec 08 17:43:48 crc kubenswrapper[5116]: I1208 17:43:48.530063 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c60dd3fb-226c-4117-a898-4efde2c99ca8-serving-cert\") pod \"openshift-apiserver-operator-846cbfc458-hjdgl\" (UID: \"c60dd3fb-226c-4117-a898-4efde2c99ca8\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-hjdgl"
Dec 08 17:43:48 crc kubenswrapper[5116]: I1208 17:43:48.530091 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c60dd3fb-226c-4117-a898-4efde2c99ca8-config\") pod \"openshift-apiserver-operator-846cbfc458-hjdgl\" (UID: \"c60dd3fb-226c-4117-a898-4efde2c99ca8\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-hjdgl"
Dec 08 17:43:48 crc kubenswrapper[5116]: I1208 17:43:48.530116 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d4f231ba-ace1-4242-a8cb-04e2904f95e9-config\") pod \"console-operator-67c89758df-wbzsx\" (UID: \"d4f231ba-ace1-4242-a8cb-04e2904f95e9\") " pod="openshift-console-operator/console-operator-67c89758df-wbzsx"
Dec 08 17:43:48 crc kubenswrapper[5116]: I1208 17:43:48.530140 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-dp6nq\" (UniqueName: \"kubernetes.io/projected/d3dc86ab-217d-4d86-8381-16465ee204c8-kube-api-access-dp6nq\") pod \"machine-config-controller-f9cdd68f7-pp68b\" (UID: \"d3dc86ab-217d-4d86-8381-16465ee204c8\") " pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-pp68b"
Dec 08 17:43:48 crc kubenswrapper[5116]: I1208 17:43:48.530164 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/776da53b-d740-421c-a867-43239bb9ebc6-serving-cert\") pod \"openshift-controller-manager-operator-686468bdd5-zm79w\" (UID: \"776da53b-d740-421c-a867-43239bb9ebc6\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-zm79w"
Dec 08 17:43:48 crc kubenswrapper[5116]: I1208 17:43:48.530190 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-rxsxs\" (UniqueName: \"kubernetes.io/projected/e71c8014-5266-4483-8037-e8d9e7995c1b-kube-api-access-rxsxs\") pod \"downloads-747b44746d-4msk8\" (UID: \"e71c8014-5266-4483-8037-e8d9e7995c1b\") " pod="openshift-console/downloads-747b44746d-4msk8"
Dec 08 17:43:48 crc kubenswrapper[5116]: I1208 17:43:48.530227 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/15fa3f1d-6230-4602-a46a-1f9b84a147fa-tmp-dir\") pod \"kube-apiserver-operator-575994946d-57tcr\" (UID: \"15fa3f1d-6230-4602-a46a-1f9b84a147fa\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-57tcr"
Dec 08 17:43:48 crc kubenswrapper[5116]: I1208 17:43:48.530272 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/5b69f20b-5284-4c2e-b147-abcc5441c977-image-registry-operator-tls\") pod \"cluster-image-registry-operator-86c45576b9-llz2b\" (UID: \"5b69f20b-5284-4c2e-b147-abcc5441c977\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-llz2b"
Dec 08 17:43:48 crc kubenswrapper[5116]: I1208 17:43:48.530324 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/eaf2ae84-8492-41c0-b678-ab302371258a-oauth-serving-cert\") pod \"console-64d44f6ddf-l4b2c\" (UID: \"eaf2ae84-8492-41c0-b678-ab302371258a\") " pod="openshift-console/console-64d44f6ddf-l4b2c"
Dec 08 17:43:48 crc kubenswrapper[5116]: I1208 17:43:48.530341 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName:
\"kubernetes.io/empty-dir/a4183c4d-f709-4d5b-a9a4-180284f37cc8-tmp\") pod \"route-controller-manager-776cdc94d6-mv9qd\" (UID: \"a4183c4d-f709-4d5b-a9a4-180284f37cc8\") " pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-mv9qd" Dec 08 17:43:48 crc kubenswrapper[5116]: I1208 17:43:48.530350 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/5471dfd3-e36e-405a-a517-2c1e2bc10e62-marketplace-operator-metrics\") pod \"marketplace-operator-547dbd544d-w2582\" (UID: \"5471dfd3-e36e-405a-a517-2c1e2bc10e62\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-w2582" Dec 08 17:43:48 crc kubenswrapper[5116]: I1208 17:43:48.530380 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/15fa3f1d-6230-4602-a46a-1f9b84a147fa-kube-api-access\") pod \"kube-apiserver-operator-575994946d-57tcr\" (UID: \"15fa3f1d-6230-4602-a46a-1f9b84a147fa\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-57tcr" Dec 08 17:43:48 crc kubenswrapper[5116]: I1208 17:43:48.530407 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/5471dfd3-e36e-405a-a517-2c1e2bc10e62-tmp\") pod \"marketplace-operator-547dbd544d-w2582\" (UID: \"5471dfd3-e36e-405a-a517-2c1e2bc10e62\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-w2582" Dec 08 17:43:48 crc kubenswrapper[5116]: I1208 17:43:48.530427 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/776da53b-d740-421c-a867-43239bb9ebc6-tmp\") pod \"openshift-controller-manager-operator-686468bdd5-zm79w\" (UID: \"776da53b-d740-421c-a867-43239bb9ebc6\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-zm79w" Dec 08 17:43:48 crc 
kubenswrapper[5116]: I1208 17:43:48.530461 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-j7j8q\" (UniqueName: \"kubernetes.io/projected/b6973f5d-d174-4643-814d-e929acd898ba-kube-api-access-j7j8q\") pod \"ingress-operator-6b9cb4dbcf-zn9l4\" (UID: \"b6973f5d-d174-4643-814d-e929acd898ba\") " pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-zn9l4" Dec 08 17:43:48 crc kubenswrapper[5116]: I1208 17:43:48.530496 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/5b69f20b-5284-4c2e-b147-abcc5441c977-trusted-ca\") pod \"cluster-image-registry-operator-86c45576b9-llz2b\" (UID: \"5b69f20b-5284-4c2e-b147-abcc5441c977\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-llz2b" Dec 08 17:43:48 crc kubenswrapper[5116]: I1208 17:43:48.530516 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ready\" (UniqueName: \"kubernetes.io/empty-dir/d73c8661-d51c-4d6e-a981-e186a3fc1964-ready\") pod \"cni-sysctl-allowlist-ds-qsgps\" (UID: \"d73c8661-d51c-4d6e-a981-e186a3fc1964\") " pod="openshift-multus/cni-sysctl-allowlist-ds-qsgps" Dec 08 17:43:48 crc kubenswrapper[5116]: I1208 17:43:48.530536 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/d4f231ba-ace1-4242-a8cb-04e2904f95e9-trusted-ca\") pod \"console-operator-67c89758df-wbzsx\" (UID: \"d4f231ba-ace1-4242-a8cb-04e2904f95e9\") " pod="openshift-console-operator/console-operator-67c89758df-wbzsx" Dec 08 17:43:48 crc kubenswrapper[5116]: I1208 17:43:48.531282 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a4183c4d-f709-4d5b-a9a4-180284f37cc8-config\") pod \"route-controller-manager-776cdc94d6-mv9qd\" (UID: \"a4183c4d-f709-4d5b-a9a4-180284f37cc8\") " 
pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-mv9qd" Dec 08 17:43:48 crc kubenswrapper[5116]: I1208 17:43:48.531343 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c60dd3fb-226c-4117-a898-4efde2c99ca8-config\") pod \"openshift-apiserver-operator-846cbfc458-hjdgl\" (UID: \"c60dd3fb-226c-4117-a898-4efde2c99ca8\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-hjdgl" Dec 08 17:43:48 crc kubenswrapper[5116]: I1208 17:43:48.531504 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a4183c4d-f709-4d5b-a9a4-180284f37cc8-serving-cert\") pod \"route-controller-manager-776cdc94d6-mv9qd\" (UID: \"a4183c4d-f709-4d5b-a9a4-180284f37cc8\") " pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-mv9qd" Dec 08 17:43:48 crc kubenswrapper[5116]: I1208 17:43:48.531808 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/776da53b-d740-421c-a867-43239bb9ebc6-tmp\") pod \"openshift-controller-manager-operator-686468bdd5-zm79w\" (UID: \"776da53b-d740-421c-a867-43239bb9ebc6\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-zm79w" Dec 08 17:43:48 crc kubenswrapper[5116]: I1208 17:43:48.532021 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/d4f231ba-ace1-4242-a8cb-04e2904f95e9-trusted-ca\") pod \"console-operator-67c89758df-wbzsx\" (UID: \"d4f231ba-ace1-4242-a8cb-04e2904f95e9\") " pod="openshift-console-operator/console-operator-67c89758df-wbzsx" Dec 08 17:43:48 crc kubenswrapper[5116]: I1208 17:43:48.532149 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d4f231ba-ace1-4242-a8cb-04e2904f95e9-config\") 
pod \"console-operator-67c89758df-wbzsx\" (UID: \"d4f231ba-ace1-4242-a8cb-04e2904f95e9\") " pod="openshift-console-operator/console-operator-67c89758df-wbzsx" Dec 08 17:43:48 crc kubenswrapper[5116]: I1208 17:43:48.532562 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/5b69f20b-5284-4c2e-b147-abcc5441c977-trusted-ca\") pod \"cluster-image-registry-operator-86c45576b9-llz2b\" (UID: \"5b69f20b-5284-4c2e-b147-abcc5441c977\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-llz2b" Dec 08 17:43:48 crc kubenswrapper[5116]: I1208 17:43:48.532749 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/5471dfd3-e36e-405a-a517-2c1e2bc10e62-tmp\") pod \"marketplace-operator-547dbd544d-w2582\" (UID: \"5471dfd3-e36e-405a-a517-2c1e2bc10e62\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-w2582" Dec 08 17:43:48 crc kubenswrapper[5116]: I1208 17:43:48.532809 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/15fa3f1d-6230-4602-a46a-1f9b84a147fa-tmp-dir\") pod \"kube-apiserver-operator-575994946d-57tcr\" (UID: \"15fa3f1d-6230-4602-a46a-1f9b84a147fa\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-57tcr" Dec 08 17:43:48 crc kubenswrapper[5116]: I1208 17:43:48.533497 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/eaf2ae84-8492-41c0-b678-ab302371258a-oauth-serving-cert\") pod \"console-64d44f6ddf-l4b2c\" (UID: \"eaf2ae84-8492-41c0-b678-ab302371258a\") " pod="openshift-console/console-64d44f6ddf-l4b2c" Dec 08 17:43:48 crc kubenswrapper[5116]: I1208 17:43:48.533837 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: 
\"kubernetes.io/secret/eaf2ae84-8492-41c0-b678-ab302371258a-console-oauth-config\") pod \"console-64d44f6ddf-l4b2c\" (UID: \"eaf2ae84-8492-41c0-b678-ab302371258a\") " pod="openshift-console/console-64d44f6ddf-l4b2c" Dec 08 17:43:48 crc kubenswrapper[5116]: I1208 17:43:48.534355 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/eaf2ae84-8492-41c0-b678-ab302371258a-trusted-ca-bundle\") pod \"console-64d44f6ddf-l4b2c\" (UID: \"eaf2ae84-8492-41c0-b678-ab302371258a\") " pod="openshift-console/console-64d44f6ddf-l4b2c" Dec 08 17:43:48 crc kubenswrapper[5116]: I1208 17:43:48.534663 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/776da53b-d740-421c-a867-43239bb9ebc6-serving-cert\") pod \"openshift-controller-manager-operator-686468bdd5-zm79w\" (UID: \"776da53b-d740-421c-a867-43239bb9ebc6\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-zm79w" Dec 08 17:43:48 crc kubenswrapper[5116]: I1208 17:43:48.534678 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/94ce1afb-999a-45b5-847a-b3a71aa87c89-metrics-tls\") pod \"dns-operator-799b87ffcd-wf75r\" (UID: \"94ce1afb-999a-45b5-847a-b3a71aa87c89\") " pod="openshift-dns-operator/dns-operator-799b87ffcd-wf75r" Dec 08 17:43:48 crc kubenswrapper[5116]: I1208 17:43:48.536116 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/5b69f20b-5284-4c2e-b147-abcc5441c977-image-registry-operator-tls\") pod \"cluster-image-registry-operator-86c45576b9-llz2b\" (UID: \"5b69f20b-5284-4c2e-b147-abcc5441c977\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-llz2b" Dec 08 17:43:48 crc kubenswrapper[5116]: I1208 17:43:48.536156 5116 operation_generator.go:615] 
"MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d4f231ba-ace1-4242-a8cb-04e2904f95e9-serving-cert\") pod \"console-operator-67c89758df-wbzsx\" (UID: \"d4f231ba-ace1-4242-a8cb-04e2904f95e9\") " pod="openshift-console-operator/console-operator-67c89758df-wbzsx" Dec 08 17:43:48 crc kubenswrapper[5116]: I1208 17:43:48.536706 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c60dd3fb-226c-4117-a898-4efde2c99ca8-serving-cert\") pod \"openshift-apiserver-operator-846cbfc458-hjdgl\" (UID: \"c60dd3fb-226c-4117-a898-4efde2c99ca8\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-hjdgl" Dec 08 17:43:48 crc kubenswrapper[5116]: I1208 17:43:48.537198 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/eaf2ae84-8492-41c0-b678-ab302371258a-console-serving-cert\") pod \"console-64d44f6ddf-l4b2c\" (UID: \"eaf2ae84-8492-41c0-b678-ab302371258a\") " pod="openshift-console/console-64d44f6ddf-l4b2c" Dec 08 17:43:48 crc kubenswrapper[5116]: I1208 17:43:48.543873 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-etcd-operator\"/\"kube-root-ca.crt\"" Dec 08 17:43:48 crc kubenswrapper[5116]: I1208 17:43:48.563521 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager-operator\"/\"openshift-controller-manager-operator-config\"" Dec 08 17:43:48 crc kubenswrapper[5116]: I1208 17:43:48.568462 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/776da53b-d740-421c-a867-43239bb9ebc6-config\") pod \"openshift-controller-manager-operator-686468bdd5-zm79w\" (UID: \"776da53b-d740-421c-a867-43239bb9ebc6\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-zm79w" 
Dec 08 17:43:48 crc kubenswrapper[5116]: I1208 17:43:48.583464 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager-operator\"/\"kube-root-ca.crt\"" Dec 08 17:43:48 crc kubenswrapper[5116]: I1208 17:43:48.624306 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-apiserver-operator\"/\"kube-root-ca.crt\"" Dec 08 17:43:48 crc kubenswrapper[5116]: I1208 17:43:48.631503 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-gwt8b\" (UniqueName: \"kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b\") pod \"network-check-target-fhkjl\" (UID: \"17b87002-b798-480a-8e17-83053d698239\") " pod="openshift-network-diagnostics/network-check-target-fhkjl" Dec 08 17:43:48 crc kubenswrapper[5116]: I1208 17:43:48.631602 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-l7w75\" (UniqueName: \"kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75\") pod \"network-check-source-5bb8f5cd97-xdvz5\" (UID: \"f863fff9-286a-45fa-b8f0-8a86994b8440\") " pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Dec 08 17:43:48 crc kubenswrapper[5116]: E1208 17:43:48.631699 5116 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 08 17:43:48 crc kubenswrapper[5116]: E1208 17:43:48.631729 5116 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 08 17:43:48 crc kubenswrapper[5116]: E1208 17:43:48.631762 5116 projected.go:194] Error preparing data for projected volume kube-api-access-gwt8b for pod openshift-network-diagnostics/network-check-target-fhkjl: [object 
"openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 08 17:43:48 crc kubenswrapper[5116]: E1208 17:43:48.631764 5116 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 08 17:43:48 crc kubenswrapper[5116]: E1208 17:43:48.631786 5116 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 08 17:43:48 crc kubenswrapper[5116]: E1208 17:43:48.631802 5116 projected.go:194] Error preparing data for projected volume kube-api-access-l7w75 for pod openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 08 17:43:48 crc kubenswrapper[5116]: E1208 17:43:48.631880 5116 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b podName:17b87002-b798-480a-8e17-83053d698239 nodeName:}" failed. No retries permitted until 2025-12-08 17:44:04.631844731 +0000 UTC m=+114.428967975 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-gwt8b" (UniqueName: "kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b") pod "network-check-target-fhkjl" (UID: "17b87002-b798-480a-8e17-83053d698239") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 08 17:43:48 crc kubenswrapper[5116]: E1208 17:43:48.631926 5116 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75 podName:f863fff9-286a-45fa-b8f0-8a86994b8440 nodeName:}" failed. No retries permitted until 2025-12-08 17:44:04.631906432 +0000 UTC m=+114.429029666 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-l7w75" (UniqueName: "kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75") pod "network-check-source-5bb8f5cd97-xdvz5" (UID: "f863fff9-286a-45fa-b8f0-8a86994b8440") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 08 17:43:48 crc kubenswrapper[5116]: I1208 17:43:48.631975 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Dec 08 17:43:48 crc kubenswrapper[5116]: I1208 17:43:48.632072 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: 
\"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Dec 08 17:43:48 crc kubenswrapper[5116]: E1208 17:43:48.632221 5116 secret.go:189] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 08 17:43:48 crc kubenswrapper[5116]: E1208 17:43:48.632252 5116 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Dec 08 17:43:48 crc kubenswrapper[5116]: E1208 17:43:48.632307 5116 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert podName:6a9ae5f6-97bd-46ac-bafa-ca1b4452a141 nodeName:}" failed. No retries permitted until 2025-12-08 17:44:04.632294722 +0000 UTC m=+114.429417956 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert") pod "networking-console-plugin-5ff7774fd9-nljh6" (UID: "6a9ae5f6-97bd-46ac-bafa-ca1b4452a141") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 08 17:43:48 crc kubenswrapper[5116]: E1208 17:43:48.632329 5116 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf podName:6a9ae5f6-97bd-46ac-bafa-ca1b4452a141 nodeName:}" failed. No retries permitted until 2025-12-08 17:44:04.632321093 +0000 UTC m=+114.429444327 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf") pod "networking-console-plugin-5ff7774fd9-nljh6" (UID: "6a9ae5f6-97bd-46ac-bafa-ca1b4452a141") : object "openshift-network-console"/"networking-console-plugin" not registered Dec 08 17:43:48 crc kubenswrapper[5116]: I1208 17:43:48.643019 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-apiserver-operator\"/\"kube-apiserver-operator-dockercfg-bf7fj\"" Dec 08 17:43:48 crc kubenswrapper[5116]: I1208 17:43:48.664363 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-apiserver-operator\"/\"kube-apiserver-operator-serving-cert\"" Dec 08 17:43:48 crc kubenswrapper[5116]: I1208 17:43:48.671828 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/15fa3f1d-6230-4602-a46a-1f9b84a147fa-serving-cert\") pod \"kube-apiserver-operator-575994946d-57tcr\" (UID: \"15fa3f1d-6230-4602-a46a-1f9b84a147fa\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-57tcr" Dec 08 17:43:48 crc kubenswrapper[5116]: I1208 17:43:48.679031 5116 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Dec 08 17:43:48 crc kubenswrapper[5116]: I1208 17:43:48.679054 5116 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl" Dec 08 17:43:48 crc kubenswrapper[5116]: I1208 17:43:48.679295 5116 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Dec 08 17:43:48 crc kubenswrapper[5116]: I1208 17:43:48.684364 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-apiserver-operator\"/\"kube-apiserver-operator-config\"" Dec 08 17:43:48 crc kubenswrapper[5116]: I1208 17:43:48.689190 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/15fa3f1d-6230-4602-a46a-1f9b84a147fa-config\") pod \"kube-apiserver-operator-575994946d-57tcr\" (UID: \"15fa3f1d-6230-4602-a46a-1f9b84a147fa\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-57tcr" Dec 08 17:43:48 crc kubenswrapper[5116]: I1208 17:43:48.713571 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-marketplace\"/\"marketplace-trusted-ca\"" Dec 08 17:43:48 crc kubenswrapper[5116]: I1208 17:43:48.718703 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/5471dfd3-e36e-405a-a517-2c1e2bc10e62-marketplace-trusted-ca\") pod \"marketplace-operator-547dbd544d-w2582\" (UID: \"5471dfd3-e36e-405a-a517-2c1e2bc10e62\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-w2582" Dec 08 17:43:48 crc kubenswrapper[5116]: I1208 17:43:48.734913 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-marketplace\"/\"openshift-service-ca.crt\"" Dec 08 17:43:48 crc kubenswrapper[5116]: I1208 17:43:48.744070 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-marketplace\"/\"kube-root-ca.crt\"" Dec 08 17:43:48 crc kubenswrapper[5116]: I1208 17:43:48.764123 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"marketplace-operator-metrics\"" Dec 08 17:43:48 crc kubenswrapper[5116]: I1208 
17:43:48.775901 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/5471dfd3-e36e-405a-a517-2c1e2bc10e62-marketplace-operator-metrics\") pod \"marketplace-operator-547dbd544d-w2582\" (UID: \"5471dfd3-e36e-405a-a517-2c1e2bc10e62\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-w2582" Dec 08 17:43:48 crc kubenswrapper[5116]: I1208 17:43:48.783751 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"marketplace-operator-dockercfg-2cfkp\"" Dec 08 17:43:48 crc kubenswrapper[5116]: I1208 17:43:48.803569 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"mcc-proxy-tls\"" Dec 08 17:43:48 crc kubenswrapper[5116]: I1208 17:43:48.812876 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/d3dc86ab-217d-4d86-8381-16465ee204c8-proxy-tls\") pod \"machine-config-controller-f9cdd68f7-pp68b\" (UID: \"d3dc86ab-217d-4d86-8381-16465ee204c8\") " pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-pp68b" Dec 08 17:43:48 crc kubenswrapper[5116]: I1208 17:43:48.824067 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"machine-config-controller-dockercfg-xnj77\"" Dec 08 17:43:48 crc kubenswrapper[5116]: I1208 17:43:48.834934 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 08 17:43:48 crc kubenswrapper[5116]: E1208 17:43:48.835116 5116 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 17:44:04.835083657 +0000 UTC m=+114.632206891 (durationBeforeRetry 16s). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 08 17:43:48 crc kubenswrapper[5116]: I1208 17:43:48.835202 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/19151390-7d67-4ae9-8520-ae20b8eb46f8-metrics-certs\") pod \"network-metrics-daemon-5ft89\" (UID: \"19151390-7d67-4ae9-8520-ae20b8eb46f8\") " pod="openshift-multus/network-metrics-daemon-5ft89"
Dec 08 17:43:48 crc kubenswrapper[5116]: E1208 17:43:48.835362 5116 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered
Dec 08 17:43:48 crc kubenswrapper[5116]: E1208 17:43:48.835446 5116 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/19151390-7d67-4ae9-8520-ae20b8eb46f8-metrics-certs podName:19151390-7d67-4ae9-8520-ae20b8eb46f8 nodeName:}" failed. No retries permitted until 2025-12-08 17:44:04.835430586 +0000 UTC m=+114.632553860 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/19151390-7d67-4ae9-8520-ae20b8eb46f8-metrics-certs") pod "network-metrics-daemon-5ft89" (UID: "19151390-7d67-4ae9-8520-ae20b8eb46f8") : object "openshift-multus"/"metrics-daemon-secret" not registered
Dec 08 17:43:48 crc kubenswrapper[5116]: I1208 17:43:48.843842 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress-operator\"/\"ingress-operator-dockercfg-74nwh\""
Dec 08 17:43:48 crc kubenswrapper[5116]: I1208 17:43:48.869678 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress-operator\"/\"trusted-ca\""
Dec 08 17:43:48 crc kubenswrapper[5116]: I1208 17:43:48.870845 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6973f5d-d174-4643-814d-e929acd898ba-trusted-ca\") pod \"ingress-operator-6b9cb4dbcf-zn9l4\" (UID: \"b6973f5d-d174-4643-814d-e929acd898ba\") " pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-zn9l4"
Dec 08 17:43:48 crc kubenswrapper[5116]: I1208 17:43:48.883649 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress-operator\"/\"openshift-service-ca.crt\""
Dec 08 17:43:48 crc kubenswrapper[5116]: I1208 17:43:48.903685 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress-operator\"/\"metrics-tls\""
Dec 08 17:43:48 crc kubenswrapper[5116]: I1208 17:43:48.911824 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/b6973f5d-d174-4643-814d-e929acd898ba-metrics-tls\") pod \"ingress-operator-6b9cb4dbcf-zn9l4\" (UID: \"b6973f5d-d174-4643-814d-e929acd898ba\") " pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-zn9l4"
Dec 08 17:43:48 crc kubenswrapper[5116]: I1208 17:43:48.923374 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress-operator\"/\"kube-root-ca.crt\""
Dec 08 17:43:48 crc kubenswrapper[5116]: I1208 17:43:48.944714 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"cni-sysctl-allowlist\""
Dec 08 17:43:48 crc kubenswrapper[5116]: I1208 17:43:48.949929 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/d73c8661-d51c-4d6e-a981-e186a3fc1964-cni-sysctl-allowlist\") pod \"cni-sysctl-allowlist-ds-qsgps\" (UID: \"d73c8661-d51c-4d6e-a981-e186a3fc1964\") " pod="openshift-multus/cni-sysctl-allowlist-ds-qsgps"
Dec 08 17:43:48 crc kubenswrapper[5116]: I1208 17:43:48.983900 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-scheduler-operator\"/\"openshift-kube-scheduler-operator-dockercfg-2wbn2\""
Dec 08 17:43:49 crc kubenswrapper[5116]: I1208 17:43:49.003490 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-scheduler-operator\"/\"kube-scheduler-operator-serving-cert\""
Dec 08 17:43:49 crc kubenswrapper[5116]: I1208 17:43:49.024732 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-scheduler-operator\"/\"openshift-kube-scheduler-operator-config\""
Dec 08 17:43:49 crc kubenswrapper[5116]: I1208 17:43:49.050808 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-scheduler-operator\"/\"kube-root-ca.crt\""
Dec 08 17:43:49 crc kubenswrapper[5116]: I1208 17:43:49.063551 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress\"/\"router-dockercfg-kw8fx\""
Dec 08 17:43:49 crc kubenswrapper[5116]: I1208 17:43:49.083915 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress\"/\"kube-root-ca.crt\""
Dec 08 17:43:49 crc kubenswrapper[5116]: I1208 17:43:49.120461 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress\"/\"router-metrics-certs-default\""
Dec 08 17:43:49 crc kubenswrapper[5116]: I1208 17:43:49.125047 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress\"/\"router-certs-default\""
Dec 08 17:43:49 crc kubenswrapper[5116]: I1208 17:43:49.143815 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress\"/\"openshift-service-ca.crt\""
Dec 08 17:43:49 crc kubenswrapper[5116]: I1208 17:43:49.163436 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress\"/\"service-ca-bundle\""
Dec 08 17:43:49 crc kubenswrapper[5116]: I1208 17:43:49.183332 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress\"/\"router-stats-default\""
Dec 08 17:43:49 crc kubenswrapper[5116]: I1208 17:43:49.203703 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-operator-lifecycle-manager\"/\"openshift-service-ca.crt\""
Dec 08 17:43:49 crc kubenswrapper[5116]: I1208 17:43:49.224283 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"olm-operator-serviceaccount-dockercfg-4gqzj\""
Dec 08 17:43:49 crc kubenswrapper[5116]: I1208 17:43:49.243603 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"catalog-operator-serving-cert\""
Dec 08 17:43:49 crc kubenswrapper[5116]: I1208 17:43:49.265193 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"pprof-cert\""
Dec 08 17:43:49 crc kubenswrapper[5116]: I1208 17:43:49.284600 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-operator-lifecycle-manager\"/\"kube-root-ca.crt\""
Dec 08 17:43:49 crc kubenswrapper[5116]: I1208 17:43:49.305988 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"machine-config-operator-dockercfg-sw6nc\""
Dec 08 17:43:49 crc kubenswrapper[5116]: I1208 17:43:49.323677 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"mco-proxy-tls\""
Dec 08 17:43:49 crc kubenswrapper[5116]: I1208 17:43:49.345691 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-config-operator\"/\"machine-config-operator-images\""
Dec 08 17:43:49 crc kubenswrapper[5116]: I1208 17:43:49.363861 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"multus-admission-controller-secret\""
Dec 08 17:43:49 crc kubenswrapper[5116]: I1208 17:43:49.384608 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"multus-ac-dockercfg-gj7jx\""
Dec 08 17:43:49 crc kubenswrapper[5116]: I1208 17:43:49.418934 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-xb62p\" (UniqueName: \"kubernetes.io/projected/79b65775-2e2c-4bad-bf4b-b8c4893e6463-kube-api-access-xb62p\") pod \"cluster-samples-operator-6b564684c8-6rbxq\" (UID: \"79b65775-2e2c-4bad-bf4b-b8c4893e6463\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-6rbxq"
Dec 08 17:43:49 crc kubenswrapper[5116]: I1208 17:43:49.422419 5116 request.go:752] "Waited before sending request" delay="1.008789128s" reason="client-side throttling, not priority and fairness" verb="POST" URL="https://api-int.crc.testing:6443/api/v1/namespaces/openshift-oauth-apiserver/serviceaccounts/oauth-apiserver-sa/token"
Dec 08 17:43:49 crc kubenswrapper[5116]: I1208 17:43:49.440089 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-z6dqw\" (UniqueName: \"kubernetes.io/projected/0267ec2e-8f60-4739-aae7-2a133c6f2809-kube-api-access-z6dqw\") pod \"apiserver-8596bd845d-nlzmf\" (UID: \"0267ec2e-8f60-4739-aae7-2a133c6f2809\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-nlzmf"
Dec 08 17:43:49 crc kubenswrapper[5116]: I1208 17:43:49.456811 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-9gqcp\" (UniqueName: \"kubernetes.io/projected/b8dbe374-c944-4fd4-bb80-8dc26c3e5d24-kube-api-access-9gqcp\") pod \"oauth-openshift-66458b6674-qnwj9\" (UID: \"b8dbe374-c944-4fd4-bb80-8dc26c3e5d24\") " pod="openshift-authentication/oauth-openshift-66458b6674-qnwj9"
Dec 08 17:43:49 crc kubenswrapper[5116]: I1208 17:43:49.478612 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-x4x4h\" (UniqueName: \"kubernetes.io/projected/4f7ef3d6-0bc3-4566-8735-c4a2389d4c84-kube-api-access-x4x4h\") pod \"apiserver-9ddfb9f55-b2n2w\" (UID: \"4f7ef3d6-0bc3-4566-8735-c4a2389d4c84\") " pod="openshift-apiserver/apiserver-9ddfb9f55-b2n2w"
Dec 08 17:43:49 crc kubenswrapper[5116]: I1208 17:43:49.502803 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-v78rk\" (UniqueName: \"kubernetes.io/projected/71d475ea-b97a-489a-8c80-1a30614dccb5-kube-api-access-v78rk\") pod \"openshift-config-operator-5777786469-86sn8\" (UID: \"71d475ea-b97a-489a-8c80-1a30614dccb5\") " pod="openshift-config-operator/openshift-config-operator-5777786469-86sn8"
Dec 08 17:43:49 crc kubenswrapper[5116]: I1208 17:43:49.521570 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-f5zlg\" (UniqueName: \"kubernetes.io/projected/4fd57cd5-0d45-4b70-87a2-0c73d9c9b61e-kube-api-access-f5zlg\") pod \"controller-manager-65b6cccf98-99grc\" (UID: \"4fd57cd5-0d45-4b70-87a2-0c73d9c9b61e\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-99grc"
Dec 08 17:43:49 crc kubenswrapper[5116]: I1208 17:43:49.539002 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-svnfj\" (UniqueName: \"kubernetes.io/projected/3a30015c-60d9-4474-8417-731fd67ea187-kube-api-access-svnfj\") pod \"machine-api-operator-755bb95488-dx5gf\" (UID: \"3a30015c-60d9-4474-8417-731fd67ea187\") " pod="openshift-machine-api/machine-api-operator-755bb95488-dx5gf"
Dec 08 17:43:49 crc kubenswrapper[5116]: I1208 17:43:49.539160 5116 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-5777786469-86sn8"
Dec 08 17:43:49 crc kubenswrapper[5116]: I1208 17:43:49.556366 5116 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-755bb95488-dx5gf"
Dec 08 17:43:49 crc kubenswrapper[5116]: I1208 17:43:49.559969 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-b8ql7\" (UniqueName: \"kubernetes.io/projected/e71f0431-13f2-46f2-8673-f26a5c9d0cf6-kube-api-access-b8ql7\") pod \"machine-approver-54c688565-8h884\" (UID: \"e71f0431-13f2-46f2-8673-f26a5c9d0cf6\") " pod="openshift-cluster-machine-approver/machine-approver-54c688565-8h884"
Dec 08 17:43:49 crc kubenswrapper[5116]: I1208 17:43:49.577590 5116 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-66458b6674-qnwj9"
Dec 08 17:43:49 crc kubenswrapper[5116]: I1208 17:43:49.578111 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-d4xv8\" (UniqueName: \"kubernetes.io/projected/a1c39b6e-d778-4a0e-aa34-5a4765f9c4fc-kube-api-access-d4xv8\") pod \"authentication-operator-7f5c659b84-x825q\" (UID: \"a1c39b6e-d778-4a0e-aa34-5a4765f9c4fc\") " pod="openshift-authentication-operator/authentication-operator-7f5c659b84-x825q"
Dec 08 17:43:49 crc kubenswrapper[5116]: I1208 17:43:49.582796 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-controller-manager-operator\"/\"kube-controller-manager-operator-dockercfg-tnfx9\""
Dec 08 17:43:49 crc kubenswrapper[5116]: I1208 17:43:49.602786 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-controller-manager-operator\"/\"kube-root-ca.crt\""
Dec 08 17:43:49 crc kubenswrapper[5116]: I1208 17:43:49.619449 5116 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-6rbxq"
Dec 08 17:43:49 crc kubenswrapper[5116]: I1208 17:43:49.624681 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-controller-manager-operator\"/\"kube-controller-manager-operator-serving-cert\""
Dec 08 17:43:49 crc kubenswrapper[5116]: I1208 17:43:49.643758 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-controller-manager-operator\"/\"kube-controller-manager-operator-config\""
Dec 08 17:43:49 crc kubenswrapper[5116]: I1208 17:43:49.649181 5116 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-65b6cccf98-99grc"
Dec 08 17:43:49 crc kubenswrapper[5116]: I1208 17:43:49.664265 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"package-server-manager-serving-cert\""
Dec 08 17:43:49 crc kubenswrapper[5116]: I1208 17:43:49.666409 5116 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-9ddfb9f55-b2n2w"
Dec 08 17:43:49 crc kubenswrapper[5116]: I1208 17:43:49.679307 5116 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5ft89"
Dec 08 17:43:49 crc kubenswrapper[5116]: I1208 17:43:49.725800 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-storage-version-migrator\"/\"openshift-service-ca.crt\""
Dec 08 17:43:49 crc kubenswrapper[5116]: I1208 17:43:49.726105 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-storage-version-migrator\"/\"kube-root-ca.crt\""
Dec 08 17:43:49 crc kubenswrapper[5116]: I1208 17:43:49.726458 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-storage-version-migrator\"/\"kube-storage-version-migrator-sa-dockercfg-kknhg\""
Dec 08 17:43:49 crc kubenswrapper[5116]: I1208 17:43:49.732159 5116 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-8596bd845d-nlzmf"
Dec 08 17:43:49 crc kubenswrapper[5116]: I1208 17:43:49.744341 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"olm-operator-serving-cert\""
Dec 08 17:43:49 crc kubenswrapper[5116]: I1208 17:43:49.794020 5116 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7f5c659b84-x825q"
Dec 08 17:43:49 crc kubenswrapper[5116]: I1208 17:43:49.796080 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress-canary\"/\"openshift-service-ca.crt\""
Dec 08 17:43:49 crc kubenswrapper[5116]: I1208 17:43:49.796347 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress-canary\"/\"default-dockercfg-9pgs7\""
Dec 08 17:43:49 crc kubenswrapper[5116]: I1208 17:43:49.806219 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress-canary\"/\"canary-serving-cert\""
Dec 08 17:43:49 crc kubenswrapper[5116]: I1208 17:43:49.876212 5116 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-54c688565-8h884"
Dec 08 17:43:49 crc kubenswrapper[5116]: I1208 17:43:49.880575 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-storage-version-migrator-operator\"/\"kube-storage-version-migrator-operator-dockercfg-2h6bs\""
Dec 08 17:43:49 crc kubenswrapper[5116]: I1208 17:43:49.880850 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-storage-version-migrator-operator\"/\"openshift-service-ca.crt\""
Dec 08 17:43:49 crc kubenswrapper[5116]: I1208 17:43:49.881292 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress-canary\"/\"kube-root-ca.crt\""
Dec 08 17:43:49 crc kubenswrapper[5116]: I1208 17:43:49.888172 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-storage-version-migrator-operator\"/\"serving-cert\""
Dec 08 17:43:49 crc kubenswrapper[5116]: I1208 17:43:49.907533 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-storage-version-migrator-operator\"/\"config\""
Dec 08 17:43:49 crc kubenswrapper[5116]: I1208 17:43:49.935853 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-storage-version-migrator-operator\"/\"kube-root-ca.crt\""
Dec 08 17:43:49 crc kubenswrapper[5116]: I1208 17:43:49.945115 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-service-ca-operator\"/\"service-ca-operator-config\""
Dec 08 17:43:49 crc kubenswrapper[5116]: I1208 17:43:49.964557 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-service-ca-operator\"/\"service-ca-operator-dockercfg-bjqfd\""
Dec 08 17:43:49 crc kubenswrapper[5116]: I1208 17:43:49.993549 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-service-ca-operator\"/\"serving-cert\""
Dec 08 17:43:50 crc kubenswrapper[5116]: I1208 17:43:50.004824 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-service-ca-operator\"/\"kube-root-ca.crt\""
Dec 08 17:43:50 crc kubenswrapper[5116]: I1208 17:43:50.026097 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-service-ca-operator\"/\"openshift-service-ca.crt\""
Dec 08 17:43:50 crc kubenswrapper[5116]: I1208 17:43:50.044973 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"packageserver-service-cert\""
Dec 08 17:43:50 crc kubenswrapper[5116]: I1208 17:43:50.069261 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"machine-config-server-dockercfg-dzw6b\""
Dec 08 17:43:50 crc kubenswrapper[5116]: I1208 17:43:50.097714 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"machine-config-server-tls\""
Dec 08 17:43:50 crc kubenswrapper[5116]: I1208 17:43:50.103882 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"node-bootstrapper-token\""
Dec 08 17:43:50 crc kubenswrapper[5116]: I1208 17:43:50.144272 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-operator-lifecycle-manager\"/\"collect-profiles-config\""
Dec 08 17:43:50 crc kubenswrapper[5116]: I1208 17:43:50.182691 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-service-ca\"/\"service-ca-dockercfg-bgxvm\""
Dec 08 17:43:50 crc kubenswrapper[5116]: I1208 17:43:50.183156 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"collect-profiles-dockercfg-vfqp6\""
Dec 08 17:43:50 crc kubenswrapper[5116]: I1208 17:43:50.183462 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-service-ca\"/\"kube-root-ca.crt\""
Dec 08 17:43:50 crc kubenswrapper[5116]: I1208 17:43:50.294799 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-api\"/\"control-plane-machine-set-operator-tls\""
Dec 08 17:43:50 crc kubenswrapper[5116]: I1208 17:43:50.294824 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-service-ca\"/\"openshift-service-ca.crt\""
Dec 08 17:43:50 crc kubenswrapper[5116]: I1208 17:43:50.295142 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-api\"/\"control-plane-machine-set-operator-dockercfg-gnx66\""
Dec 08 17:43:50 crc kubenswrapper[5116]: I1208 17:43:50.295452 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-service-ca\"/\"signing-cabundle\""
Dec 08 17:43:50 crc kubenswrapper[5116]: I1208 17:43:50.295713 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-service-ca\"/\"signing-key\""
Dec 08 17:43:50 crc kubenswrapper[5116]: I1208 17:43:50.303496 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"hostpath-provisioner\"/\"kube-root-ca.crt\""
Dec 08 17:43:50 crc kubenswrapper[5116]: I1208 17:43:50.328227 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"hostpath-provisioner\"/\"csi-hostpath-provisioner-sa-dockercfg-7dcws\""
Dec 08 17:43:50 crc kubenswrapper[5116]: I1208 17:43:50.343967 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"hostpath-provisioner\"/\"openshift-service-ca.crt\""
Dec 08 17:43:50 crc kubenswrapper[5116]: I1208 17:43:50.367594 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-dns\"/\"dns-dockercfg-kpvmz\""
Dec 08 17:43:50 crc kubenswrapper[5116]: I1208 17:43:50.387786 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-dns\"/\"dns-default-metrics-tls\""
Dec 08 17:43:50 crc kubenswrapper[5116]: I1208 17:43:50.403432 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-dns\"/\"dns-default\""
Dec 08 17:43:50 crc kubenswrapper[5116]: I1208 17:43:50.427779 5116 request.go:752] "Waited before sending request" delay="1.900322153s" reason="client-side throttling, not priority and fairness" verb="POST" URL="https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console/serviceaccounts/console/token"
Dec 08 17:43:50 crc kubenswrapper[5116]: I1208 17:43:50.431325 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-54c688565-8h884" event={"ID":"e71f0431-13f2-46f2-8673-f26a5c9d0cf6","Type":"ContainerStarted","Data":"70e50b2bbed24b4c3089a4a1b55360f9befe0d15dd7ae89a437ed3e973d53861"}
Dec 08 17:43:50 crc kubenswrapper[5116]: I1208 17:43:50.437575 5116 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-755bb95488-dx5gf"]
Dec 08 17:43:50 crc kubenswrapper[5116]: I1208 17:43:50.437636 5116 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-6rbxq"]
Dec 08 17:43:50 crc kubenswrapper[5116]: I1208 17:43:50.438790 5116 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-5777786469-86sn8"]
Dec 08 17:43:50 crc kubenswrapper[5116]: I1208 17:43:50.454891 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-fskx6\" (UniqueName: \"kubernetes.io/projected/eaf2ae84-8492-41c0-b678-ab302371258a-kube-api-access-fskx6\") pod \"console-64d44f6ddf-l4b2c\" (UID: \"eaf2ae84-8492-41c0-b678-ab302371258a\") " pod="openshift-console/console-64d44f6ddf-l4b2c"
Dec 08 17:43:50 crc kubenswrapper[5116]: I1208 17:43:50.477707 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-lpwsr\" (UniqueName: \"kubernetes.io/projected/94ce1afb-999a-45b5-847a-b3a71aa87c89-kube-api-access-lpwsr\") pod \"dns-operator-799b87ffcd-wf75r\" (UID: \"94ce1afb-999a-45b5-847a-b3a71aa87c89\") " pod="openshift-dns-operator/dns-operator-799b87ffcd-wf75r"
Dec 08 17:43:50 crc kubenswrapper[5116]: W1208 17:43:50.489170 5116 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod71d475ea_b97a_489a_8c80_1a30614dccb5.slice/crio-b83f0d4005f4874d93ec6d3aea11a3b51e9f3115f442fac3b4e7132d96ab1f15 WatchSource:0}: Error finding container b83f0d4005f4874d93ec6d3aea11a3b51e9f3115f442fac3b4e7132d96ab1f15: Status 404 returned error can't find the container with id b83f0d4005f4874d93ec6d3aea11a3b51e9f3115f442fac3b4e7132d96ab1f15
Dec 08 17:43:50 crc kubenswrapper[5116]: I1208 17:43:50.490159 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-hrjlh\" (UniqueName: \"kubernetes.io/projected/d4f231ba-ace1-4242-a8cb-04e2904f95e9-kube-api-access-hrjlh\") pod \"console-operator-67c89758df-wbzsx\" (UID: \"d4f231ba-ace1-4242-a8cb-04e2904f95e9\") " pod="openshift-console-operator/console-operator-67c89758df-wbzsx"
Dec 08 17:43:50 crc kubenswrapper[5116]: I1208 17:43:50.495189 5116 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-9ddfb9f55-b2n2w"]
Dec 08 17:43:50 crc kubenswrapper[5116]: I1208 17:43:50.502464 5116 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-8596bd845d-nlzmf"]
Dec 08 17:43:50 crc kubenswrapper[5116]: I1208 17:43:50.503529 5116 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-64d44f6ddf-l4b2c"
Dec 08 17:43:50 crc kubenswrapper[5116]: I1208 17:43:50.512822 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-hfsz6\" (UniqueName: \"kubernetes.io/projected/5b69f20b-5284-4c2e-b147-abcc5441c977-kube-api-access-hfsz6\") pod \"cluster-image-registry-operator-86c45576b9-llz2b\" (UID: \"5b69f20b-5284-4c2e-b147-abcc5441c977\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-llz2b"
Dec 08 17:43:50 crc kubenswrapper[5116]: I1208 17:43:50.527848 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-d8hkm\" (UniqueName: \"kubernetes.io/projected/d73c8661-d51c-4d6e-a981-e186a3fc1964-kube-api-access-d8hkm\") pod \"cni-sysctl-allowlist-ds-qsgps\" (UID: \"d73c8661-d51c-4d6e-a981-e186a3fc1964\") " pod="openshift-multus/cni-sysctl-allowlist-ds-qsgps"
Dec 08 17:43:50 crc kubenswrapper[5116]: I1208 17:43:50.528253 5116 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-7f5c659b84-x825q"]
Dec 08 17:43:50 crc kubenswrapper[5116]: I1208 17:43:50.554623 5116 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-66458b6674-qnwj9"]
Dec 08 17:43:50 crc kubenswrapper[5116]: I1208 17:43:50.562005 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/5b69f20b-5284-4c2e-b147-abcc5441c977-bound-sa-token\") pod \"cluster-image-registry-operator-86c45576b9-llz2b\" (UID: \"5b69f20b-5284-4c2e-b147-abcc5441c977\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-llz2b"
Dec 08 17:43:50 crc kubenswrapper[5116]: W1208 17:43:50.568681 5116 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda1c39b6e_d778_4a0e_aa34_5a4765f9c4fc.slice/crio-168404adeab5f591496b9c3892ccaa6722d66aa51907157fc67ca096b0a532af WatchSource:0}: Error finding container 168404adeab5f591496b9c3892ccaa6722d66aa51907157fc67ca096b0a532af: Status 404 returned error can't find the container with id 168404adeab5f591496b9c3892ccaa6722d66aa51907157fc67ca096b0a532af
Dec 08 17:43:50 crc kubenswrapper[5116]: I1208 17:43:50.569123 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-cg2t7\" (UniqueName: \"kubernetes.io/projected/776da53b-d740-421c-a867-43239bb9ebc6-kube-api-access-cg2t7\") pod \"openshift-controller-manager-operator-686468bdd5-zm79w\" (UID: \"776da53b-d740-421c-a867-43239bb9ebc6\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-zm79w"
Dec 08 17:43:50 crc kubenswrapper[5116]: W1208 17:43:50.574224 5116 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb8dbe374_c944_4fd4_bb80_8dc26c3e5d24.slice/crio-38830c802e5168f342dea310cd88c936084147f269257e40cd97bc3be840aa81 WatchSource:0}: Error finding container 38830c802e5168f342dea310cd88c936084147f269257e40cd97bc3be840aa81: Status 404 returned error can't find the container with id 38830c802e5168f342dea310cd88c936084147f269257e40cd97bc3be840aa81
Dec 08 17:43:50 crc kubenswrapper[5116]: I1208 17:43:50.575741 5116 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-67c89758df-wbzsx"
Dec 08 17:43:50 crc kubenswrapper[5116]: I1208 17:43:50.580048 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/b6973f5d-d174-4643-814d-e929acd898ba-bound-sa-token\") pod \"ingress-operator-6b9cb4dbcf-zn9l4\" (UID: \"b6973f5d-d174-4643-814d-e929acd898ba\") " pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-zn9l4"
Dec 08 17:43:50 crc kubenswrapper[5116]: I1208 17:43:50.600911 5116 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-799b87ffcd-wf75r"
Dec 08 17:43:50 crc kubenswrapper[5116]: I1208 17:43:50.610551 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-bhpkw\" (UniqueName: \"kubernetes.io/projected/a4183c4d-f709-4d5b-a9a4-180284f37cc8-kube-api-access-bhpkw\") pod \"route-controller-manager-776cdc94d6-mv9qd\" (UID: \"a4183c4d-f709-4d5b-a9a4-180284f37cc8\") " pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-mv9qd"
Dec 08 17:43:50 crc kubenswrapper[5116]: I1208 17:43:50.611035 5116 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-llz2b"
Dec 08 17:43:50 crc kubenswrapper[5116]: I1208 17:43:50.624823 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-nhzmj\" (UniqueName: \"kubernetes.io/projected/c60dd3fb-226c-4117-a898-4efde2c99ca8-kube-api-access-nhzmj\") pod \"openshift-apiserver-operator-846cbfc458-hjdgl\" (UID: \"c60dd3fb-226c-4117-a898-4efde2c99ca8\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-hjdgl"
Dec 08 17:43:50 crc kubenswrapper[5116]: I1208 17:43:50.642167 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-dp6nq\" (UniqueName: \"kubernetes.io/projected/d3dc86ab-217d-4d86-8381-16465ee204c8-kube-api-access-dp6nq\") pod \"machine-config-controller-f9cdd68f7-pp68b\" (UID: \"d3dc86ab-217d-4d86-8381-16465ee204c8\") " pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-pp68b"
Dec 08 17:43:50 crc kubenswrapper[5116]: I1208 17:43:50.655838 5116 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-zm79w"
Dec 08 17:43:50 crc kubenswrapper[5116]: I1208 17:43:50.668795 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-ltrx4\" (UniqueName: \"kubernetes.io/projected/5471dfd3-e36e-405a-a517-2c1e2bc10e62-kube-api-access-ltrx4\") pod \"marketplace-operator-547dbd544d-w2582\" (UID: \"5471dfd3-e36e-405a-a517-2c1e2bc10e62\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-w2582"
Dec 08 17:43:50 crc kubenswrapper[5116]: I1208 17:43:50.682655 5116 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-547dbd544d-w2582"
Dec 08 17:43:50 crc kubenswrapper[5116]: I1208 17:43:50.692503 5116 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-pp68b"
Dec 08 17:43:50 crc kubenswrapper[5116]: I1208 17:43:50.695817 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-j7j8q\" (UniqueName: \"kubernetes.io/projected/b6973f5d-d174-4643-814d-e929acd898ba-kube-api-access-j7j8q\") pod \"ingress-operator-6b9cb4dbcf-zn9l4\" (UID: \"b6973f5d-d174-4643-814d-e929acd898ba\") " pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-zn9l4"
Dec 08 17:43:50 crc kubenswrapper[5116]: I1208 17:43:50.698654 5116 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-zn9l4"
Dec 08 17:43:50 crc kubenswrapper[5116]: I1208 17:43:50.700721 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/15fa3f1d-6230-4602-a46a-1f9b84a147fa-kube-api-access\") pod \"kube-apiserver-operator-575994946d-57tcr\" (UID: \"15fa3f1d-6230-4602-a46a-1f9b84a147fa\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-57tcr"
Dec 08 17:43:50 crc kubenswrapper[5116]: I1208 17:43:50.707212 5116 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/cni-sysctl-allowlist-ds-qsgps"
Dec 08 17:43:50 crc kubenswrapper[5116]: I1208 17:43:50.731838 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-rxsxs\" (UniqueName: \"kubernetes.io/projected/e71c8014-5266-4483-8037-e8d9e7995c1b-kube-api-access-rxsxs\") pod \"downloads-747b44746d-4msk8\" (UID: \"e71c8014-5266-4483-8037-e8d9e7995c1b\") " pod="openshift-console/downloads-747b44746d-4msk8"
Dec 08 17:43:50 crc kubenswrapper[5116]: I1208 17:43:50.736018 5116 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-65b6cccf98-99grc"]
Dec 08 17:43:50 crc kubenswrapper[5116]: I1208 17:43:50.744815 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-diagnostics\"/\"openshift-service-ca.crt\""
Dec 08 17:43:50 crc kubenswrapper[5116]: I1208 17:43:50.764473 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-diagnostics\"/\"kube-root-ca.crt\""
Dec 08 17:43:50 crc kubenswrapper[5116]: I1208 17:43:50.783551 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-console\"/\"networking-console-plugin\""
Dec 08 17:43:50 crc kubenswrapper[5116]: I1208 17:43:50.793522 5116 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-64d44f6ddf-l4b2c"]
Dec 08 17:43:50 crc kubenswrapper[5116]: I1208 17:43:50.795302 5116 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-mv9qd"
Dec 08 17:43:50 crc kubenswrapper[5116]: I1208 17:43:50.805826 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-network-console\"/\"networking-console-plugin-cert\""
Dec 08 17:43:50 crc kubenswrapper[5116]: I1208 17:43:50.843976 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"metrics-daemon-secret\""
Dec 08 17:43:50 crc kubenswrapper[5116]: I1208 17:43:50.868751 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"metrics-daemon-sa-dockercfg-t8n29\""
Dec 08 17:43:50 crc kubenswrapper[5116]: I1208 17:43:50.978989 5116 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-57tcr"
Dec 08 17:43:50 crc kubenswrapper[5116]: I1208 17:43:50.979104 5116 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-hjdgl"
Dec 08 17:43:50 crc kubenswrapper[5116]: I1208 17:43:50.979748 5116 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-747b44746d-4msk8"
Dec 08 17:43:50 crc kubenswrapper[5116]: I1208 17:43:50.980074 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/fdb4d104-2982-4fda-904d-860b430ccc30-srv-cert\") pod \"olm-operator-5cdf44d969-w5bp9\" (UID: \"fdb4d104-2982-4fda-904d-860b430ccc30\") " pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-w5bp9"
Dec 08 17:43:50 crc kubenswrapper[5116]: I1208 17:43:50.980625 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ae32fe26-12f4-4893-b748-d39bf6908a5f-config\") pod \"etcd-operator-69b85846b6-9xp42\" (UID: \"ae32fe26-12f4-4893-b748-d39bf6908a5f\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-9xp42"
Dec 08 17:43:50 crc kubenswrapper[5116]: I1208 17:43:50.981054 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/ae32fe26-12f4-4893-b748-d39bf6908a5f-etcd-client\") pod \"etcd-operator-69b85846b6-9xp42\" (UID: \"ae32fe26-12f4-4893-b748-d39bf6908a5f\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-9xp42"
Dec 08 17:43:50 crc kubenswrapper[5116]: I1208 17:43:50.981155 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/ae32fe26-12f4-4893-b748-d39bf6908a5f-tmp-dir\") pod \"etcd-operator-69b85846b6-9xp42\" (UID: \"ae32fe26-12f4-4893-b748-d39bf6908a5f\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-9xp42"
Dec 08 17:43:50 crc kubenswrapper[5116]: I1208 17:43:50.981183 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kclbv\" (UniqueName:
\"kubernetes.io/projected/5c183f4a-f4d4-4584-916d-1055aa64de78-kube-api-access-kclbv\") pod \"migrator-866fcbc849-gk2ww\" (UID: \"5c183f4a-f4d4-4584-916d-1055aa64de78\") " pod="openshift-kube-storage-version-migrator/migrator-866fcbc849-gk2ww" Dec 08 17:43:50 crc kubenswrapper[5116]: I1208 17:43:50.981232 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pxtvl\" (UniqueName: \"kubernetes.io/projected/b1d60def-1a02-4598-801e-fc4fdfaabcf4-kube-api-access-pxtvl\") pod \"package-server-manager-77f986bd66-hzpcx\" (UID: \"b1d60def-1a02-4598-801e-fc4fdfaabcf4\") " pod="openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-hzpcx" Dec 08 17:43:50 crc kubenswrapper[5116]: I1208 17:43:50.981863 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qxlkq\" (UniqueName: \"kubernetes.io/projected/2a5179cd-4e4d-401c-af27-a30fc32e5146-kube-api-access-qxlkq\") pod \"machine-config-operator-67c9d58cbb-lnhtb\" (UID: \"2a5179cd-4e4d-401c-af27-a30fc32e5146\") " pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-lnhtb" Dec 08 17:43:50 crc kubenswrapper[5116]: I1208 17:43:50.981908 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rkt84\" (UniqueName: \"kubernetes.io/projected/cc5709c8-d943-4d2c-bd51-bdf689fe3714-kube-api-access-rkt84\") pod \"catalog-operator-75ff9f647d-6dz8n\" (UID: \"cc5709c8-d943-4d2c-bd51-bdf689fe3714\") " pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-6dz8n" Dec 08 17:43:50 crc kubenswrapper[5116]: I1208 17:43:50.982394 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/cc5709c8-d943-4d2c-bd51-bdf689fe3714-srv-cert\") pod \"catalog-operator-75ff9f647d-6dz8n\" (UID: \"cc5709c8-d943-4d2c-bd51-bdf689fe3714\") " 
pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-6dz8n" Dec 08 17:43:50 crc kubenswrapper[5116]: I1208 17:43:50.982863 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/83ea28a4-865d-4cee-aaa2-7adcccfba4a2-bound-sa-token\") pod \"image-registry-66587d64c8-kt94l\" (UID: \"83ea28a4-865d-4cee-aaa2-7adcccfba4a2\") " pod="openshift-image-registry/image-registry-66587d64c8-kt94l" Dec 08 17:43:50 crc kubenswrapper[5116]: I1208 17:43:50.983074 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-kt94l\" (UID: \"83ea28a4-865d-4cee-aaa2-7adcccfba4a2\") " pod="openshift-image-registry/image-registry-66587d64c8-kt94l" Dec 08 17:43:50 crc kubenswrapper[5116]: I1208 17:43:50.983366 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/64540db1-3951-4714-a14a-542d29e00e3c-serving-cert\") pod \"service-ca-operator-5b9c976747-vcg8j\" (UID: \"64540db1-3951-4714-a14a-542d29e00e3c\") " pod="openshift-service-ca-operator/service-ca-operator-5b9c976747-vcg8j" Dec 08 17:43:50 crc kubenswrapper[5116]: I1208 17:43:50.983404 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h4k28\" (UniqueName: \"kubernetes.io/projected/dc8a8f38-928e-445a-b2d0-56c91cff7483-kube-api-access-h4k28\") pod \"collect-profiles-29420250-rrwn6\" (UID: \"dc8a8f38-928e-445a-b2d0-56c91cff7483\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29420250-rrwn6" Dec 08 17:43:50 crc kubenswrapper[5116]: I1208 17:43:50.983436 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/83ea28a4-865d-4cee-aaa2-7adcccfba4a2-trusted-ca\") pod \"image-registry-66587d64c8-kt94l\" (UID: \"83ea28a4-865d-4cee-aaa2-7adcccfba4a2\") " pod="openshift-image-registry/image-registry-66587d64c8-kt94l" Dec 08 17:43:50 crc kubenswrapper[5116]: I1208 17:43:50.983531 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/b1d60def-1a02-4598-801e-fc4fdfaabcf4-package-server-manager-serving-cert\") pod \"package-server-manager-77f986bd66-hzpcx\" (UID: \"b1d60def-1a02-4598-801e-fc4fdfaabcf4\") " pod="openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-hzpcx" Dec 08 17:43:50 crc kubenswrapper[5116]: I1208 17:43:50.983555 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sgwdl\" (UniqueName: \"kubernetes.io/projected/b7623dbe-7780-43e2-8225-9cf0b9e83951-kube-api-access-sgwdl\") pod \"kube-storage-version-migrator-operator-565b79b866-sqm4p\" (UID: \"b7623dbe-7780-43e2-8225-9cf0b9e83951\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-sqm4p" Dec 08 17:43:50 crc kubenswrapper[5116]: I1208 17:43:50.983591 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/ae32fe26-12f4-4893-b748-d39bf6908a5f-etcd-service-ca\") pod \"etcd-operator-69b85846b6-9xp42\" (UID: \"ae32fe26-12f4-4893-b748-d39bf6908a5f\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-9xp42" Dec 08 17:43:50 crc kubenswrapper[5116]: I1208 17:43:50.983605 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/83ea28a4-865d-4cee-aaa2-7adcccfba4a2-ca-trust-extracted\") 
pod \"image-registry-66587d64c8-kt94l\" (UID: \"83ea28a4-865d-4cee-aaa2-7adcccfba4a2\") " pod="openshift-image-registry/image-registry-66587d64c8-kt94l" Dec 08 17:43:50 crc kubenswrapper[5116]: I1208 17:43:50.983659 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/3c5d3e24-1c23-4245-bf46-d6ba11dfbd51-signing-key\") pod \"service-ca-74545575db-g68gs\" (UID: \"3c5d3e24-1c23-4245-bf46-d6ba11dfbd51\") " pod="openshift-service-ca/service-ca-74545575db-g68gs" Dec 08 17:43:50 crc kubenswrapper[5116]: I1208 17:43:50.983798 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dj7lq\" (UniqueName: \"kubernetes.io/projected/fdb4d104-2982-4fda-904d-860b430ccc30-kube-api-access-dj7lq\") pod \"olm-operator-5cdf44d969-w5bp9\" (UID: \"fdb4d104-2982-4fda-904d-860b430ccc30\") " pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-w5bp9" Dec 08 17:43:50 crc kubenswrapper[5116]: I1208 17:43:50.984512 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/2a5179cd-4e4d-401c-af27-a30fc32e5146-images\") pod \"machine-config-operator-67c9d58cbb-lnhtb\" (UID: \"2a5179cd-4e4d-401c-af27-a30fc32e5146\") " pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-lnhtb" Dec 08 17:43:50 crc kubenswrapper[5116]: I1208 17:43:50.984607 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wfsdr\" (UniqueName: \"kubernetes.io/projected/83ea28a4-865d-4cee-aaa2-7adcccfba4a2-kube-api-access-wfsdr\") pod \"image-registry-66587d64c8-kt94l\" (UID: \"83ea28a4-865d-4cee-aaa2-7adcccfba4a2\") " pod="openshift-image-registry/image-registry-66587d64c8-kt94l" Dec 08 17:43:50 crc kubenswrapper[5116]: I1208 17:43:50.984699 5116 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/83ea28a4-865d-4cee-aaa2-7adcccfba4a2-installation-pull-secrets\") pod \"image-registry-66587d64c8-kt94l\" (UID: \"83ea28a4-865d-4cee-aaa2-7adcccfba4a2\") " pod="openshift-image-registry/image-registry-66587d64c8-kt94l" Dec 08 17:43:50 crc kubenswrapper[5116]: I1208 17:43:50.984748 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/3c5d3e24-1c23-4245-bf46-d6ba11dfbd51-signing-cabundle\") pod \"service-ca-74545575db-g68gs\" (UID: \"3c5d3e24-1c23-4245-bf46-d6ba11dfbd51\") " pod="openshift-service-ca/service-ca-74545575db-g68gs" Dec 08 17:43:50 crc kubenswrapper[5116]: I1208 17:43:50.984887 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/fdb4d104-2982-4fda-904d-860b430ccc30-profile-collector-cert\") pod \"olm-operator-5cdf44d969-w5bp9\" (UID: \"fdb4d104-2982-4fda-904d-860b430ccc30\") " pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-w5bp9" Dec 08 17:43:50 crc kubenswrapper[5116]: E1208 17:43:50.984994 5116 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 17:43:51.484974203 +0000 UTC m=+101.282097437 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-kt94l" (UID: "83ea28a4-865d-4cee-aaa2-7adcccfba4a2") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 17:43:50 crc kubenswrapper[5116]: I1208 17:43:50.986909 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/2a5179cd-4e4d-401c-af27-a30fc32e5146-auth-proxy-config\") pod \"machine-config-operator-67c9d58cbb-lnhtb\" (UID: \"2a5179cd-4e4d-401c-af27-a30fc32e5146\") " pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-lnhtb" Dec 08 17:43:50 crc kubenswrapper[5116]: I1208 17:43:50.987072 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b7623dbe-7780-43e2-8225-9cf0b9e83951-serving-cert\") pod \"kube-storage-version-migrator-operator-565b79b866-sqm4p\" (UID: \"b7623dbe-7780-43e2-8225-9cf0b9e83951\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-sqm4p" Dec 08 17:43:50 crc kubenswrapper[5116]: I1208 17:43:50.993616 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/2a5179cd-4e4d-401c-af27-a30fc32e5146-proxy-tls\") pod \"machine-config-operator-67c9d58cbb-lnhtb\" (UID: \"2a5179cd-4e4d-401c-af27-a30fc32e5146\") " pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-lnhtb" Dec 08 17:43:50 crc kubenswrapper[5116]: I1208 17:43:50.993790 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/fdb4d104-2982-4fda-904d-860b430ccc30-tmpfs\") pod \"olm-operator-5cdf44d969-w5bp9\" (UID: \"fdb4d104-2982-4fda-904d-860b430ccc30\") " pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-w5bp9" Dec 08 17:43:50 crc kubenswrapper[5116]: I1208 17:43:50.993876 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d63661bb-2db9-4a87-ae13-21a004bf32a9-kube-api-access\") pod \"openshift-kube-scheduler-operator-54f497555d-44l68\" (UID: \"d63661bb-2db9-4a87-ae13-21a004bf32a9\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-44l68" Dec 08 17:43:50 crc kubenswrapper[5116]: I1208 17:43:50.993997 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/dc8a8f38-928e-445a-b2d0-56c91cff7483-config-volume\") pod \"collect-profiles-29420250-rrwn6\" (UID: \"dc8a8f38-928e-445a-b2d0-56c91cff7483\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29420250-rrwn6" Dec 08 17:43:50 crc kubenswrapper[5116]: I1208 17:43:50.994076 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zrdjm\" (UniqueName: \"kubernetes.io/projected/3c5d3e24-1c23-4245-bf46-d6ba11dfbd51-kube-api-access-zrdjm\") pod \"service-ca-74545575db-g68gs\" (UID: \"3c5d3e24-1c23-4245-bf46-d6ba11dfbd51\") " pod="openshift-service-ca/service-ca-74545575db-g68gs" Dec 08 17:43:50 crc kubenswrapper[5116]: I1208 17:43:50.994199 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d63661bb-2db9-4a87-ae13-21a004bf32a9-config\") pod \"openshift-kube-scheduler-operator-54f497555d-44l68\" (UID: \"d63661bb-2db9-4a87-ae13-21a004bf32a9\") " 
pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-44l68" Dec 08 17:43:50 crc kubenswrapper[5116]: I1208 17:43:50.994329 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/83ea28a4-865d-4cee-aaa2-7adcccfba4a2-registry-tls\") pod \"image-registry-66587d64c8-kt94l\" (UID: \"83ea28a4-865d-4cee-aaa2-7adcccfba4a2\") " pod="openshift-image-registry/image-registry-66587d64c8-kt94l" Dec 08 17:43:50 crc kubenswrapper[5116]: I1208 17:43:50.994427 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b7623dbe-7780-43e2-8225-9cf0b9e83951-config\") pod \"kube-storage-version-migrator-operator-565b79b866-sqm4p\" (UID: \"b7623dbe-7780-43e2-8225-9cf0b9e83951\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-sqm4p" Dec 08 17:43:50 crc kubenswrapper[5116]: I1208 17:43:50.994498 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/d63661bb-2db9-4a87-ae13-21a004bf32a9-tmp\") pod \"openshift-kube-scheduler-operator-54f497555d-44l68\" (UID: \"d63661bb-2db9-4a87-ae13-21a004bf32a9\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-44l68" Dec 08 17:43:50 crc kubenswrapper[5116]: I1208 17:43:50.994578 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d63661bb-2db9-4a87-ae13-21a004bf32a9-serving-cert\") pod \"openshift-kube-scheduler-operator-54f497555d-44l68\" (UID: \"d63661bb-2db9-4a87-ae13-21a004bf32a9\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-44l68" Dec 08 17:43:50 crc kubenswrapper[5116]: I1208 17:43:50.994648 5116 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/64540db1-3951-4714-a14a-542d29e00e3c-config\") pod \"service-ca-operator-5b9c976747-vcg8j\" (UID: \"64540db1-3951-4714-a14a-542d29e00e3c\") " pod="openshift-service-ca-operator/service-ca-operator-5b9c976747-vcg8j" Dec 08 17:43:50 crc kubenswrapper[5116]: I1208 17:43:50.994734 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/83ea28a4-865d-4cee-aaa2-7adcccfba4a2-registry-certificates\") pod \"image-registry-66587d64c8-kt94l\" (UID: \"83ea28a4-865d-4cee-aaa2-7adcccfba4a2\") " pod="openshift-image-registry/image-registry-66587d64c8-kt94l" Dec 08 17:43:50 crc kubenswrapper[5116]: I1208 17:43:50.994850 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/cc5709c8-d943-4d2c-bd51-bdf689fe3714-tmpfs\") pod \"catalog-operator-75ff9f647d-6dz8n\" (UID: \"cc5709c8-d943-4d2c-bd51-bdf689fe3714\") " pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-6dz8n" Dec 08 17:43:50 crc kubenswrapper[5116]: I1208 17:43:50.994921 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/dc8a8f38-928e-445a-b2d0-56c91cff7483-secret-volume\") pod \"collect-profiles-29420250-rrwn6\" (UID: \"dc8a8f38-928e-445a-b2d0-56c91cff7483\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29420250-rrwn6" Dec 08 17:43:50 crc kubenswrapper[5116]: I1208 17:43:50.995164 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/ae32fe26-12f4-4893-b748-d39bf6908a5f-etcd-ca\") pod \"etcd-operator-69b85846b6-9xp42\" (UID: 
\"ae32fe26-12f4-4893-b748-d39bf6908a5f\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-9xp42" Dec 08 17:43:50 crc kubenswrapper[5116]: I1208 17:43:50.995186 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8tmf4\" (UniqueName: \"kubernetes.io/projected/64540db1-3951-4714-a14a-542d29e00e3c-kube-api-access-8tmf4\") pod \"service-ca-operator-5b9c976747-vcg8j\" (UID: \"64540db1-3951-4714-a14a-542d29e00e3c\") " pod="openshift-service-ca-operator/service-ca-operator-5b9c976747-vcg8j" Dec 08 17:43:50 crc kubenswrapper[5116]: I1208 17:43:50.995228 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ae32fe26-12f4-4893-b748-d39bf6908a5f-serving-cert\") pod \"etcd-operator-69b85846b6-9xp42\" (UID: \"ae32fe26-12f4-4893-b748-d39bf6908a5f\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-9xp42" Dec 08 17:43:50 crc kubenswrapper[5116]: I1208 17:43:50.995440 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fvc8q\" (UniqueName: \"kubernetes.io/projected/ae32fe26-12f4-4893-b748-d39bf6908a5f-kube-api-access-fvc8q\") pod \"etcd-operator-69b85846b6-9xp42\" (UID: \"ae32fe26-12f4-4893-b748-d39bf6908a5f\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-9xp42" Dec 08 17:43:50 crc kubenswrapper[5116]: I1208 17:43:50.995499 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/cc5709c8-d943-4d2c-bd51-bdf689fe3714-profile-collector-cert\") pod \"catalog-operator-75ff9f647d-6dz8n\" (UID: \"cc5709c8-d943-4d2c-bd51-bdf689fe3714\") " pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-6dz8n" Dec 08 17:43:51 crc kubenswrapper[5116]: I1208 17:43:51.009685 5116 kubelet.go:2544] "SyncLoop UPDATE" source="api" 
pods=["openshift-console-operator/console-operator-67c89758df-wbzsx"] Dec 08 17:43:51 crc kubenswrapper[5116]: I1208 17:43:51.112892 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 08 17:43:51 crc kubenswrapper[5116]: E1208 17:43:51.115268 5116 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 17:43:51.615219807 +0000 UTC m=+101.412343041 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 17:43:51 crc kubenswrapper[5116]: I1208 17:43:51.120018 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/fdb4d104-2982-4fda-904d-860b430ccc30-srv-cert\") pod \"olm-operator-5cdf44d969-w5bp9\" (UID: \"fdb4d104-2982-4fda-904d-860b430ccc30\") " pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-w5bp9" Dec 08 17:43:51 crc kubenswrapper[5116]: I1208 17:43:51.120224 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/0deef197-8a46-46ea-a786-7e9518318396-stats-auth\") pod 
\"router-default-68cf44c8b8-hv2nc\" (UID: \"0deef197-8a46-46ea-a786-7e9518318396\") " pod="openshift-ingress/router-default-68cf44c8b8-hv2nc" Dec 08 17:43:51 crc kubenswrapper[5116]: I1208 17:43:51.120392 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/2d2b0fc4-9619-4e70-92a9-06896ea298f4-metrics-tls\") pod \"dns-default-tlxxd\" (UID: \"2d2b0fc4-9619-4e70-92a9-06896ea298f4\") " pod="openshift-dns/dns-default-tlxxd" Dec 08 17:43:51 crc kubenswrapper[5116]: I1208 17:43:51.120525 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ae32fe26-12f4-4893-b748-d39bf6908a5f-config\") pod \"etcd-operator-69b85846b6-9xp42\" (UID: \"ae32fe26-12f4-4893-b748-d39bf6908a5f\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-9xp42" Dec 08 17:43:51 crc kubenswrapper[5116]: I1208 17:43:51.120665 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/ae32fe26-12f4-4893-b748-d39bf6908a5f-etcd-client\") pod \"etcd-operator-69b85846b6-9xp42\" (UID: \"ae32fe26-12f4-4893-b748-d39bf6908a5f\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-9xp42" Dec 08 17:43:51 crc kubenswrapper[5116]: I1208 17:43:51.120813 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/ae32fe26-12f4-4893-b748-d39bf6908a5f-tmp-dir\") pod \"etcd-operator-69b85846b6-9xp42\" (UID: \"ae32fe26-12f4-4893-b748-d39bf6908a5f\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-9xp42" Dec 08 17:43:51 crc kubenswrapper[5116]: I1208 17:43:51.120936 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-kclbv\" (UniqueName: \"kubernetes.io/projected/5c183f4a-f4d4-4584-916d-1055aa64de78-kube-api-access-kclbv\") pod 
\"migrator-866fcbc849-gk2ww\" (UID: \"5c183f4a-f4d4-4584-916d-1055aa64de78\") " pod="openshift-kube-storage-version-migrator/migrator-866fcbc849-gk2ww" Dec 08 17:43:51 crc kubenswrapper[5116]: I1208 17:43:51.121057 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-pxtvl\" (UniqueName: \"kubernetes.io/projected/b1d60def-1a02-4598-801e-fc4fdfaabcf4-kube-api-access-pxtvl\") pod \"package-server-manager-77f986bd66-hzpcx\" (UID: \"b1d60def-1a02-4598-801e-fc4fdfaabcf4\") " pod="openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-hzpcx" Dec 08 17:43:51 crc kubenswrapper[5116]: I1208 17:43:51.121174 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/88c9e622-e4c0-49b0-a481-fb6e32fc0505-webhook-certs\") pod \"multus-admission-controller-69db94689b-9bt88\" (UID: \"88c9e622-e4c0-49b0-a481-fb6e32fc0505\") " pod="openshift-multus/multus-admission-controller-69db94689b-9bt88" Dec 08 17:43:51 crc kubenswrapper[5116]: I1208 17:43:51.121602 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/5016d861-0431-4e4a-bbe3-c7032eb529c7-apiservice-cert\") pod \"packageserver-7d4fc7d867-6s8th\" (UID: \"5016d861-0431-4e4a-bbe3-c7032eb529c7\") " pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-6s8th" Dec 08 17:43:51 crc kubenswrapper[5116]: I1208 17:43:51.121741 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/a34224e8-837b-4261-8b6d-3e9273996375-cert\") pod \"ingress-canary-c642w\" (UID: \"a34224e8-837b-4261-8b6d-3e9273996375\") " pod="openshift-ingress-canary/ingress-canary-c642w" Dec 08 17:43:51 crc kubenswrapper[5116]: I1208 17:43:51.121879 5116 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kzssk\" (UniqueName: \"kubernetes.io/projected/a34224e8-837b-4261-8b6d-3e9273996375-kube-api-access-kzssk\") pod \"ingress-canary-c642w\" (UID: \"a34224e8-837b-4261-8b6d-3e9273996375\") " pod="openshift-ingress-canary/ingress-canary-c642w" Dec 08 17:43:51 crc kubenswrapper[5116]: I1208 17:43:51.122015 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-qxlkq\" (UniqueName: \"kubernetes.io/projected/2a5179cd-4e4d-401c-af27-a30fc32e5146-kube-api-access-qxlkq\") pod \"machine-config-operator-67c9d58cbb-lnhtb\" (UID: \"2a5179cd-4e4d-401c-af27-a30fc32e5146\") " pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-lnhtb" Dec 08 17:43:51 crc kubenswrapper[5116]: I1208 17:43:51.122138 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-rkt84\" (UniqueName: \"kubernetes.io/projected/cc5709c8-d943-4d2c-bd51-bdf689fe3714-kube-api-access-rkt84\") pod \"catalog-operator-75ff9f647d-6dz8n\" (UID: \"cc5709c8-d943-4d2c-bd51-bdf689fe3714\") " pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-6dz8n" Dec 08 17:43:51 crc kubenswrapper[5116]: I1208 17:43:51.122511 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cljhv\" (UniqueName: \"kubernetes.io/projected/2d2b0fc4-9619-4e70-92a9-06896ea298f4-kube-api-access-cljhv\") pod \"dns-default-tlxxd\" (UID: \"2d2b0fc4-9619-4e70-92a9-06896ea298f4\") " pod="openshift-dns/dns-default-tlxxd" Dec 08 17:43:51 crc kubenswrapper[5116]: I1208 17:43:51.122720 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/cc5709c8-d943-4d2c-bd51-bdf689fe3714-srv-cert\") pod \"catalog-operator-75ff9f647d-6dz8n\" (UID: \"cc5709c8-d943-4d2c-bd51-bdf689fe3714\") " 
pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-6dz8n" Dec 08 17:43:51 crc kubenswrapper[5116]: I1208 17:43:51.122845 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-62n9l\" (UniqueName: \"kubernetes.io/projected/56b4bc0e-8c13-439e-9293-f70b35418ce0-kube-api-access-62n9l\") pod \"csi-hostpathplugin-kdrb8\" (UID: \"56b4bc0e-8c13-439e-9293-f70b35418ce0\") " pod="hostpath-provisioner/csi-hostpathplugin-kdrb8" Dec 08 17:43:51 crc kubenswrapper[5116]: I1208 17:43:51.122976 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/0deef197-8a46-46ea-a786-7e9518318396-default-certificate\") pod \"router-default-68cf44c8b8-hv2nc\" (UID: \"0deef197-8a46-46ea-a786-7e9518318396\") " pod="openshift-ingress/router-default-68cf44c8b8-hv2nc" Dec 08 17:43:51 crc kubenswrapper[5116]: I1208 17:43:51.123085 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3ae931b0-256c-4067-8c1c-a56b5fe1f5f9-config\") pod \"kube-controller-manager-operator-69d5f845f8-n9kbg\" (UID: \"3ae931b0-256c-4067-8c1c-a56b5fe1f5f9\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-n9kbg" Dec 08 17:43:51 crc kubenswrapper[5116]: I1208 17:43:51.123186 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/5016d861-0431-4e4a-bbe3-c7032eb529c7-tmpfs\") pod \"packageserver-7d4fc7d867-6s8th\" (UID: \"5016d861-0431-4e4a-bbe3-c7032eb529c7\") " pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-6s8th" Dec 08 17:43:51 crc kubenswrapper[5116]: I1208 17:43:51.123334 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: 
\"kubernetes.io/projected/83ea28a4-865d-4cee-aaa2-7adcccfba4a2-bound-sa-token\") pod \"image-registry-66587d64c8-kt94l\" (UID: \"83ea28a4-865d-4cee-aaa2-7adcccfba4a2\") " pod="openshift-image-registry/image-registry-66587d64c8-kt94l" Dec 08 17:43:51 crc kubenswrapper[5116]: I1208 17:43:51.123473 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/56b4bc0e-8c13-439e-9293-f70b35418ce0-socket-dir\") pod \"csi-hostpathplugin-kdrb8\" (UID: \"56b4bc0e-8c13-439e-9293-f70b35418ce0\") " pod="hostpath-provisioner/csi-hostpathplugin-kdrb8" Dec 08 17:43:51 crc kubenswrapper[5116]: I1208 17:43:51.123550 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/ae32fe26-12f4-4893-b748-d39bf6908a5f-tmp-dir\") pod \"etcd-operator-69b85846b6-9xp42\" (UID: \"ae32fe26-12f4-4893-b748-d39bf6908a5f\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-9xp42" Dec 08 17:43:51 crc kubenswrapper[5116]: I1208 17:43:51.123665 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-kt94l\" (UID: \"83ea28a4-865d-4cee-aaa2-7adcccfba4a2\") " pod="openshift-image-registry/image-registry-66587d64c8-kt94l" Dec 08 17:43:51 crc kubenswrapper[5116]: I1208 17:43:51.123783 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/64540db1-3951-4714-a14a-542d29e00e3c-serving-cert\") pod \"service-ca-operator-5b9c976747-vcg8j\" (UID: \"64540db1-3951-4714-a14a-542d29e00e3c\") " pod="openshift-service-ca-operator/service-ca-operator-5b9c976747-vcg8j" Dec 08 17:43:51 crc kubenswrapper[5116]: I1208 17:43:51.123937 5116 reconciler_common.go:224] 
"operationExecutor.MountVolume started for volume \"kube-api-access-h4k28\" (UniqueName: \"kubernetes.io/projected/dc8a8f38-928e-445a-b2d0-56c91cff7483-kube-api-access-h4k28\") pod \"collect-profiles-29420250-rrwn6\" (UID: \"dc8a8f38-928e-445a-b2d0-56c91cff7483\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29420250-rrwn6" Dec 08 17:43:51 crc kubenswrapper[5116]: I1208 17:43:51.124047 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/2d2b0fc4-9619-4e70-92a9-06896ea298f4-tmp-dir\") pod \"dns-default-tlxxd\" (UID: \"2d2b0fc4-9619-4e70-92a9-06896ea298f4\") " pod="openshift-dns/dns-default-tlxxd" Dec 08 17:43:51 crc kubenswrapper[5116]: I1208 17:43:51.124151 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/3ae931b0-256c-4067-8c1c-a56b5fe1f5f9-kube-api-access\") pod \"kube-controller-manager-operator-69d5f845f8-n9kbg\" (UID: \"3ae931b0-256c-4067-8c1c-a56b5fe1f5f9\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-n9kbg" Dec 08 17:43:51 crc kubenswrapper[5116]: I1208 17:43:51.124272 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/3ae931b0-256c-4067-8c1c-a56b5fe1f5f9-tmp-dir\") pod \"kube-controller-manager-operator-69d5f845f8-n9kbg\" (UID: \"3ae931b0-256c-4067-8c1c-a56b5fe1f5f9\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-n9kbg" Dec 08 17:43:51 crc kubenswrapper[5116]: I1208 17:43:51.122619 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ae32fe26-12f4-4893-b748-d39bf6908a5f-config\") pod \"etcd-operator-69b85846b6-9xp42\" (UID: \"ae32fe26-12f4-4893-b748-d39bf6908a5f\") " 
pod="openshift-etcd-operator/etcd-operator-69b85846b6-9xp42" Dec 08 17:43:51 crc kubenswrapper[5116]: E1208 17:43:51.130103 5116 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 17:43:51.630081084 +0000 UTC m=+101.427204318 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-kt94l" (UID: "83ea28a4-865d-4cee-aaa2-7adcccfba4a2") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 17:43:51 crc kubenswrapper[5116]: I1208 17:43:51.131305 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/0deef197-8a46-46ea-a786-7e9518318396-metrics-certs\") pod \"router-default-68cf44c8b8-hv2nc\" (UID: \"0deef197-8a46-46ea-a786-7e9518318396\") " pod="openshift-ingress/router-default-68cf44c8b8-hv2nc" Dec 08 17:43:51 crc kubenswrapper[5116]: I1208 17:43:51.131366 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/83ea28a4-865d-4cee-aaa2-7adcccfba4a2-trusted-ca\") pod \"image-registry-66587d64c8-kt94l\" (UID: \"83ea28a4-865d-4cee-aaa2-7adcccfba4a2\") " pod="openshift-image-registry/image-registry-66587d64c8-kt94l" Dec 08 17:43:51 crc kubenswrapper[5116]: I1208 17:43:51.131595 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jslv6\" (UniqueName: \"kubernetes.io/projected/9663d487-af7a-4d15-beb7-13122df40ab4-kube-api-access-jslv6\") pod 
\"machine-config-server-wp5hf\" (UID: \"9663d487-af7a-4d15-beb7-13122df40ab4\") " pod="openshift-machine-config-operator/machine-config-server-wp5hf" Dec 08 17:43:51 crc kubenswrapper[5116]: I1208 17:43:51.131703 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2qn2c\" (UniqueName: \"kubernetes.io/projected/88c9e622-e4c0-49b0-a481-fb6e32fc0505-kube-api-access-2qn2c\") pod \"multus-admission-controller-69db94689b-9bt88\" (UID: \"88c9e622-e4c0-49b0-a481-fb6e32fc0505\") " pod="openshift-multus/multus-admission-controller-69db94689b-9bt88" Dec 08 17:43:51 crc kubenswrapper[5116]: I1208 17:43:51.131825 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/56b4bc0e-8c13-439e-9293-f70b35418ce0-mountpoint-dir\") pod \"csi-hostpathplugin-kdrb8\" (UID: \"56b4bc0e-8c13-439e-9293-f70b35418ce0\") " pod="hostpath-provisioner/csi-hostpathplugin-kdrb8" Dec 08 17:43:51 crc kubenswrapper[5116]: I1208 17:43:51.131954 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/56b4bc0e-8c13-439e-9293-f70b35418ce0-csi-data-dir\") pod \"csi-hostpathplugin-kdrb8\" (UID: \"56b4bc0e-8c13-439e-9293-f70b35418ce0\") " pod="hostpath-provisioner/csi-hostpathplugin-kdrb8" Dec 08 17:43:51 crc kubenswrapper[5116]: I1208 17:43:51.132030 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/b1d60def-1a02-4598-801e-fc4fdfaabcf4-package-server-manager-serving-cert\") pod \"package-server-manager-77f986bd66-hzpcx\" (UID: \"b1d60def-1a02-4598-801e-fc4fdfaabcf4\") " pod="openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-hzpcx" Dec 08 17:43:51 crc kubenswrapper[5116]: I1208 17:43:51.132105 5116 reconciler_common.go:224] 
"operationExecutor.MountVolume started for volume \"kube-api-access-sgwdl\" (UniqueName: \"kubernetes.io/projected/b7623dbe-7780-43e2-8225-9cf0b9e83951-kube-api-access-sgwdl\") pod \"kube-storage-version-migrator-operator-565b79b866-sqm4p\" (UID: \"b7623dbe-7780-43e2-8225-9cf0b9e83951\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-sqm4p" Dec 08 17:43:51 crc kubenswrapper[5116]: I1208 17:43:51.141128 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/ae32fe26-12f4-4893-b748-d39bf6908a5f-etcd-service-ca\") pod \"etcd-operator-69b85846b6-9xp42\" (UID: \"ae32fe26-12f4-4893-b748-d39bf6908a5f\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-9xp42" Dec 08 17:43:51 crc kubenswrapper[5116]: I1208 17:43:51.132629 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/ae32fe26-12f4-4893-b748-d39bf6908a5f-etcd-service-ca\") pod \"etcd-operator-69b85846b6-9xp42\" (UID: \"ae32fe26-12f4-4893-b748-d39bf6908a5f\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-9xp42" Dec 08 17:43:51 crc kubenswrapper[5116]: I1208 17:43:51.210541 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/83ea28a4-865d-4cee-aaa2-7adcccfba4a2-ca-trust-extracted\") pod \"image-registry-66587d64c8-kt94l\" (UID: \"83ea28a4-865d-4cee-aaa2-7adcccfba4a2\") " pod="openshift-image-registry/image-registry-66587d64c8-kt94l" Dec 08 17:43:51 crc kubenswrapper[5116]: I1208 17:43:51.211582 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/83ea28a4-865d-4cee-aaa2-7adcccfba4a2-ca-trust-extracted\") pod \"image-registry-66587d64c8-kt94l\" (UID: \"83ea28a4-865d-4cee-aaa2-7adcccfba4a2\") " 
pod="openshift-image-registry/image-registry-66587d64c8-kt94l" Dec 08 17:43:51 crc kubenswrapper[5116]: I1208 17:43:51.212498 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/3c5d3e24-1c23-4245-bf46-d6ba11dfbd51-signing-key\") pod \"service-ca-74545575db-g68gs\" (UID: \"3c5d3e24-1c23-4245-bf46-d6ba11dfbd51\") " pod="openshift-service-ca/service-ca-74545575db-g68gs" Dec 08 17:43:51 crc kubenswrapper[5116]: I1208 17:43:51.212565 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/9663d487-af7a-4d15-beb7-13122df40ab4-node-bootstrap-token\") pod \"machine-config-server-wp5hf\" (UID: \"9663d487-af7a-4d15-beb7-13122df40ab4\") " pod="openshift-machine-config-operator/machine-config-server-wp5hf" Dec 08 17:43:51 crc kubenswrapper[5116]: I1208 17:43:51.212629 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3ae931b0-256c-4067-8c1c-a56b5fe1f5f9-serving-cert\") pod \"kube-controller-manager-operator-69d5f845f8-n9kbg\" (UID: \"3ae931b0-256c-4067-8c1c-a56b5fe1f5f9\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-n9kbg" Dec 08 17:43:51 crc kubenswrapper[5116]: I1208 17:43:51.221967 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/83ea28a4-865d-4cee-aaa2-7adcccfba4a2-trusted-ca\") pod \"image-registry-66587d64c8-kt94l\" (UID: \"83ea28a4-865d-4cee-aaa2-7adcccfba4a2\") " pod="openshift-image-registry/image-registry-66587d64c8-kt94l" Dec 08 17:43:51 crc kubenswrapper[5116]: I1208 17:43:51.222192 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-dj7lq\" (UniqueName: 
\"kubernetes.io/projected/fdb4d104-2982-4fda-904d-860b430ccc30-kube-api-access-dj7lq\") pod \"olm-operator-5cdf44d969-w5bp9\" (UID: \"fdb4d104-2982-4fda-904d-860b430ccc30\") " pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-w5bp9" Dec 08 17:43:51 crc kubenswrapper[5116]: I1208 17:43:51.223730 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/2a5179cd-4e4d-401c-af27-a30fc32e5146-images\") pod \"machine-config-operator-67c9d58cbb-lnhtb\" (UID: \"2a5179cd-4e4d-401c-af27-a30fc32e5146\") " pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-lnhtb" Dec 08 17:43:51 crc kubenswrapper[5116]: I1208 17:43:51.223855 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-wfsdr\" (UniqueName: \"kubernetes.io/projected/83ea28a4-865d-4cee-aaa2-7adcccfba4a2-kube-api-access-wfsdr\") pod \"image-registry-66587d64c8-kt94l\" (UID: \"83ea28a4-865d-4cee-aaa2-7adcccfba4a2\") " pod="openshift-image-registry/image-registry-66587d64c8-kt94l" Dec 08 17:43:51 crc kubenswrapper[5116]: I1208 17:43:51.223899 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/5016d861-0431-4e4a-bbe3-c7032eb529c7-webhook-cert\") pod \"packageserver-7d4fc7d867-6s8th\" (UID: \"5016d861-0431-4e4a-bbe3-c7032eb529c7\") " pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-6s8th" Dec 08 17:43:51 crc kubenswrapper[5116]: I1208 17:43:51.223952 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/83ea28a4-865d-4cee-aaa2-7adcccfba4a2-installation-pull-secrets\") pod \"image-registry-66587d64c8-kt94l\" (UID: \"83ea28a4-865d-4cee-aaa2-7adcccfba4a2\") " pod="openshift-image-registry/image-registry-66587d64c8-kt94l" Dec 08 17:43:51 crc kubenswrapper[5116]: 
I1208 17:43:51.223996 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/b9141db3-856a-4cae-ad18-f0eb4a53c8f8-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-75ffdb6fcd-jcz8f\" (UID: \"b9141db3-856a-4cae-ad18-f0eb4a53c8f8\") " pod="openshift-machine-api/control-plane-machine-set-operator-75ffdb6fcd-jcz8f" Dec 08 17:43:51 crc kubenswrapper[5116]: I1208 17:43:51.224046 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/3c5d3e24-1c23-4245-bf46-d6ba11dfbd51-signing-cabundle\") pod \"service-ca-74545575db-g68gs\" (UID: \"3c5d3e24-1c23-4245-bf46-d6ba11dfbd51\") " pod="openshift-service-ca/service-ca-74545575db-g68gs" Dec 08 17:43:51 crc kubenswrapper[5116]: I1208 17:43:51.224101 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/fdb4d104-2982-4fda-904d-860b430ccc30-profile-collector-cert\") pod \"olm-operator-5cdf44d969-w5bp9\" (UID: \"fdb4d104-2982-4fda-904d-860b430ccc30\") " pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-w5bp9" Dec 08 17:43:51 crc kubenswrapper[5116]: I1208 17:43:51.224171 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/2a5179cd-4e4d-401c-af27-a30fc32e5146-auth-proxy-config\") pod \"machine-config-operator-67c9d58cbb-lnhtb\" (UID: \"2a5179cd-4e4d-401c-af27-a30fc32e5146\") " pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-lnhtb" Dec 08 17:43:51 crc kubenswrapper[5116]: I1208 17:43:51.224313 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b7623dbe-7780-43e2-8225-9cf0b9e83951-serving-cert\") pod 
\"kube-storage-version-migrator-operator-565b79b866-sqm4p\" (UID: \"b7623dbe-7780-43e2-8225-9cf0b9e83951\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-sqm4p" Dec 08 17:43:51 crc kubenswrapper[5116]: I1208 17:43:51.224388 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/2a5179cd-4e4d-401c-af27-a30fc32e5146-proxy-tls\") pod \"machine-config-operator-67c9d58cbb-lnhtb\" (UID: \"2a5179cd-4e4d-401c-af27-a30fc32e5146\") " pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-lnhtb" Dec 08 17:43:51 crc kubenswrapper[5116]: I1208 17:43:51.224425 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/fdb4d104-2982-4fda-904d-860b430ccc30-tmpfs\") pod \"olm-operator-5cdf44d969-w5bp9\" (UID: \"fdb4d104-2982-4fda-904d-860b430ccc30\") " pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-w5bp9" Dec 08 17:43:51 crc kubenswrapper[5116]: I1208 17:43:51.224460 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d63661bb-2db9-4a87-ae13-21a004bf32a9-kube-api-access\") pod \"openshift-kube-scheduler-operator-54f497555d-44l68\" (UID: \"d63661bb-2db9-4a87-ae13-21a004bf32a9\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-44l68" Dec 08 17:43:51 crc kubenswrapper[5116]: I1208 17:43:51.224512 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2d2b0fc4-9619-4e70-92a9-06896ea298f4-config-volume\") pod \"dns-default-tlxxd\" (UID: \"2d2b0fc4-9619-4e70-92a9-06896ea298f4\") " pod="openshift-dns/dns-default-tlxxd" Dec 08 17:43:51 crc kubenswrapper[5116]: I1208 17:43:51.224552 5116 reconciler_common.go:224] 
"operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/dc8a8f38-928e-445a-b2d0-56c91cff7483-config-volume\") pod \"collect-profiles-29420250-rrwn6\" (UID: \"dc8a8f38-928e-445a-b2d0-56c91cff7483\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29420250-rrwn6" Dec 08 17:43:51 crc kubenswrapper[5116]: I1208 17:43:51.224593 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-zrdjm\" (UniqueName: \"kubernetes.io/projected/3c5d3e24-1c23-4245-bf46-d6ba11dfbd51-kube-api-access-zrdjm\") pod \"service-ca-74545575db-g68gs\" (UID: \"3c5d3e24-1c23-4245-bf46-d6ba11dfbd51\") " pod="openshift-service-ca/service-ca-74545575db-g68gs" Dec 08 17:43:51 crc kubenswrapper[5116]: I1208 17:43:51.224627 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fxrr9\" (UniqueName: \"kubernetes.io/projected/5016d861-0431-4e4a-bbe3-c7032eb529c7-kube-api-access-fxrr9\") pod \"packageserver-7d4fc7d867-6s8th\" (UID: \"5016d861-0431-4e4a-bbe3-c7032eb529c7\") " pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-6s8th" Dec 08 17:43:51 crc kubenswrapper[5116]: I1208 17:43:51.224654 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d63661bb-2db9-4a87-ae13-21a004bf32a9-config\") pod \"openshift-kube-scheduler-operator-54f497555d-44l68\" (UID: \"d63661bb-2db9-4a87-ae13-21a004bf32a9\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-44l68" Dec 08 17:43:51 crc kubenswrapper[5116]: I1208 17:43:51.224717 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/83ea28a4-865d-4cee-aaa2-7adcccfba4a2-registry-tls\") pod \"image-registry-66587d64c8-kt94l\" (UID: \"83ea28a4-865d-4cee-aaa2-7adcccfba4a2\") " 
pod="openshift-image-registry/image-registry-66587d64c8-kt94l" Dec 08 17:43:51 crc kubenswrapper[5116]: I1208 17:43:51.224762 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b7623dbe-7780-43e2-8225-9cf0b9e83951-config\") pod \"kube-storage-version-migrator-operator-565b79b866-sqm4p\" (UID: \"b7623dbe-7780-43e2-8225-9cf0b9e83951\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-sqm4p" Dec 08 17:43:51 crc kubenswrapper[5116]: I1208 17:43:51.224801 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/d63661bb-2db9-4a87-ae13-21a004bf32a9-tmp\") pod \"openshift-kube-scheduler-operator-54f497555d-44l68\" (UID: \"d63661bb-2db9-4a87-ae13-21a004bf32a9\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-44l68" Dec 08 17:43:51 crc kubenswrapper[5116]: I1208 17:43:51.224828 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d63661bb-2db9-4a87-ae13-21a004bf32a9-serving-cert\") pod \"openshift-kube-scheduler-operator-54f497555d-44l68\" (UID: \"d63661bb-2db9-4a87-ae13-21a004bf32a9\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-44l68" Dec 08 17:43:51 crc kubenswrapper[5116]: I1208 17:43:51.224850 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/64540db1-3951-4714-a14a-542d29e00e3c-config\") pod \"service-ca-operator-5b9c976747-vcg8j\" (UID: \"64540db1-3951-4714-a14a-542d29e00e3c\") " pod="openshift-service-ca-operator/service-ca-operator-5b9c976747-vcg8j" Dec 08 17:43:51 crc kubenswrapper[5116]: I1208 17:43:51.224875 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"registry-certificates\" 
(UniqueName: \"kubernetes.io/configmap/83ea28a4-865d-4cee-aaa2-7adcccfba4a2-registry-certificates\") pod \"image-registry-66587d64c8-kt94l\" (UID: \"83ea28a4-865d-4cee-aaa2-7adcccfba4a2\") " pod="openshift-image-registry/image-registry-66587d64c8-kt94l" Dec 08 17:43:51 crc kubenswrapper[5116]: I1208 17:43:51.224930 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/cc5709c8-d943-4d2c-bd51-bdf689fe3714-tmpfs\") pod \"catalog-operator-75ff9f647d-6dz8n\" (UID: \"cc5709c8-d943-4d2c-bd51-bdf689fe3714\") " pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-6dz8n" Dec 08 17:43:51 crc kubenswrapper[5116]: I1208 17:43:51.224962 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p679p\" (UniqueName: \"kubernetes.io/projected/0deef197-8a46-46ea-a786-7e9518318396-kube-api-access-p679p\") pod \"router-default-68cf44c8b8-hv2nc\" (UID: \"0deef197-8a46-46ea-a786-7e9518318396\") " pod="openshift-ingress/router-default-68cf44c8b8-hv2nc" Dec 08 17:43:51 crc kubenswrapper[5116]: I1208 17:43:51.225001 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/dc8a8f38-928e-445a-b2d0-56c91cff7483-secret-volume\") pod \"collect-profiles-29420250-rrwn6\" (UID: \"dc8a8f38-928e-445a-b2d0-56c91cff7483\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29420250-rrwn6" Dec 08 17:43:51 crc kubenswrapper[5116]: I1208 17:43:51.225029 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0deef197-8a46-46ea-a786-7e9518318396-service-ca-bundle\") pod \"router-default-68cf44c8b8-hv2nc\" (UID: \"0deef197-8a46-46ea-a786-7e9518318396\") " pod="openshift-ingress/router-default-68cf44c8b8-hv2nc" Dec 08 17:43:51 crc kubenswrapper[5116]: I1208 
17:43:51.225085 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/ae32fe26-12f4-4893-b748-d39bf6908a5f-etcd-ca\") pod \"etcd-operator-69b85846b6-9xp42\" (UID: \"ae32fe26-12f4-4893-b748-d39bf6908a5f\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-9xp42" Dec 08 17:43:51 crc kubenswrapper[5116]: I1208 17:43:51.227969 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/dc8a8f38-928e-445a-b2d0-56c91cff7483-config-volume\") pod \"collect-profiles-29420250-rrwn6\" (UID: \"dc8a8f38-928e-445a-b2d0-56c91cff7483\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29420250-rrwn6" Dec 08 17:43:51 crc kubenswrapper[5116]: I1208 17:43:51.228581 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/64540db1-3951-4714-a14a-542d29e00e3c-serving-cert\") pod \"service-ca-operator-5b9c976747-vcg8j\" (UID: \"64540db1-3951-4714-a14a-542d29e00e3c\") " pod="openshift-service-ca-operator/service-ca-operator-5b9c976747-vcg8j" Dec 08 17:43:51 crc kubenswrapper[5116]: I1208 17:43:51.228722 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/2a5179cd-4e4d-401c-af27-a30fc32e5146-images\") pod \"machine-config-operator-67c9d58cbb-lnhtb\" (UID: \"2a5179cd-4e4d-401c-af27-a30fc32e5146\") " pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-lnhtb" Dec 08 17:43:51 crc kubenswrapper[5116]: I1208 17:43:51.229258 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/ae32fe26-12f4-4893-b748-d39bf6908a5f-etcd-client\") pod \"etcd-operator-69b85846b6-9xp42\" (UID: \"ae32fe26-12f4-4893-b748-d39bf6908a5f\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-9xp42" Dec 08 17:43:51 crc 
kubenswrapper[5116]: I1208 17:43:51.230832 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/3c5d3e24-1c23-4245-bf46-d6ba11dfbd51-signing-key\") pod \"service-ca-74545575db-g68gs\" (UID: \"3c5d3e24-1c23-4245-bf46-d6ba11dfbd51\") " pod="openshift-service-ca/service-ca-74545575db-g68gs" Dec 08 17:43:51 crc kubenswrapper[5116]: I1208 17:43:51.231377 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-8tmf4\" (UniqueName: \"kubernetes.io/projected/64540db1-3951-4714-a14a-542d29e00e3c-kube-api-access-8tmf4\") pod \"service-ca-operator-5b9c976747-vcg8j\" (UID: \"64540db1-3951-4714-a14a-542d29e00e3c\") " pod="openshift-service-ca-operator/service-ca-operator-5b9c976747-vcg8j" Dec 08 17:43:51 crc kubenswrapper[5116]: I1208 17:43:51.231615 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-rkt84\" (UniqueName: \"kubernetes.io/projected/cc5709c8-d943-4d2c-bd51-bdf689fe3714-kube-api-access-rkt84\") pod \"catalog-operator-75ff9f647d-6dz8n\" (UID: \"cc5709c8-d943-4d2c-bd51-bdf689fe3714\") " pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-6dz8n" Dec 08 17:43:51 crc kubenswrapper[5116]: I1208 17:43:51.231982 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-kclbv\" (UniqueName: \"kubernetes.io/projected/5c183f4a-f4d4-4584-916d-1055aa64de78-kube-api-access-kclbv\") pod \"migrator-866fcbc849-gk2ww\" (UID: \"5c183f4a-f4d4-4584-916d-1055aa64de78\") " pod="openshift-kube-storage-version-migrator/migrator-866fcbc849-gk2ww" Dec 08 17:43:51 crc kubenswrapper[5116]: I1208 17:43:51.232474 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/cc5709c8-d943-4d2c-bd51-bdf689fe3714-tmpfs\") pod \"catalog-operator-75ff9f647d-6dz8n\" (UID: \"cc5709c8-d943-4d2c-bd51-bdf689fe3714\") " 
pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-6dz8n" Dec 08 17:43:51 crc kubenswrapper[5116]: I1208 17:43:51.233325 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/2a5179cd-4e4d-401c-af27-a30fc32e5146-auth-proxy-config\") pod \"machine-config-operator-67c9d58cbb-lnhtb\" (UID: \"2a5179cd-4e4d-401c-af27-a30fc32e5146\") " pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-lnhtb" Dec 08 17:43:51 crc kubenswrapper[5116]: I1208 17:43:51.234549 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/83ea28a4-865d-4cee-aaa2-7adcccfba4a2-registry-certificates\") pod \"image-registry-66587d64c8-kt94l\" (UID: \"83ea28a4-865d-4cee-aaa2-7adcccfba4a2\") " pod="openshift-image-registry/image-registry-66587d64c8-kt94l" Dec 08 17:43:51 crc kubenswrapper[5116]: I1208 17:43:51.235527 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/64540db1-3951-4714-a14a-542d29e00e3c-config\") pod \"service-ca-operator-5b9c976747-vcg8j\" (UID: \"64540db1-3951-4714-a14a-542d29e00e3c\") " pod="openshift-service-ca-operator/service-ca-operator-5b9c976747-vcg8j" Dec 08 17:43:51 crc kubenswrapper[5116]: I1208 17:43:51.236047 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b7623dbe-7780-43e2-8225-9cf0b9e83951-config\") pod \"kube-storage-version-migrator-operator-565b79b866-sqm4p\" (UID: \"b7623dbe-7780-43e2-8225-9cf0b9e83951\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-sqm4p" Dec 08 17:43:51 crc kubenswrapper[5116]: I1208 17:43:51.238908 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-qxlkq\" (UniqueName: 
\"kubernetes.io/projected/2a5179cd-4e4d-401c-af27-a30fc32e5146-kube-api-access-qxlkq\") pod \"machine-config-operator-67c9d58cbb-lnhtb\" (UID: \"2a5179cd-4e4d-401c-af27-a30fc32e5146\") " pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-lnhtb" Dec 08 17:43:51 crc kubenswrapper[5116]: I1208 17:43:51.239338 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ae32fe26-12f4-4893-b748-d39bf6908a5f-serving-cert\") pod \"etcd-operator-69b85846b6-9xp42\" (UID: \"ae32fe26-12f4-4893-b748-d39bf6908a5f\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-9xp42" Dec 08 17:43:51 crc kubenswrapper[5116]: I1208 17:43:51.239460 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/d63661bb-2db9-4a87-ae13-21a004bf32a9-tmp\") pod \"openshift-kube-scheduler-operator-54f497555d-44l68\" (UID: \"d63661bb-2db9-4a87-ae13-21a004bf32a9\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-44l68" Dec 08 17:43:51 crc kubenswrapper[5116]: I1208 17:43:51.239542 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wv8ll\" (UniqueName: \"kubernetes.io/projected/b9141db3-856a-4cae-ad18-f0eb4a53c8f8-kube-api-access-wv8ll\") pod \"control-plane-machine-set-operator-75ffdb6fcd-jcz8f\" (UID: \"b9141db3-856a-4cae-ad18-f0eb4a53c8f8\") " pod="openshift-machine-api/control-plane-machine-set-operator-75ffdb6fcd-jcz8f" Dec 08 17:43:51 crc kubenswrapper[5116]: I1208 17:43:51.239594 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/3c5d3e24-1c23-4245-bf46-d6ba11dfbd51-signing-cabundle\") pod \"service-ca-74545575db-g68gs\" (UID: \"3c5d3e24-1c23-4245-bf46-d6ba11dfbd51\") " pod="openshift-service-ca/service-ca-74545575db-g68gs" Dec 08 17:43:51 crc 
kubenswrapper[5116]: I1208 17:43:51.239704 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/ae32fe26-12f4-4893-b748-d39bf6908a5f-etcd-ca\") pod \"etcd-operator-69b85846b6-9xp42\" (UID: \"ae32fe26-12f4-4893-b748-d39bf6908a5f\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-9xp42" Dec 08 17:43:51 crc kubenswrapper[5116]: I1208 17:43:51.240141 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d63661bb-2db9-4a87-ae13-21a004bf32a9-config\") pod \"openshift-kube-scheduler-operator-54f497555d-44l68\" (UID: \"d63661bb-2db9-4a87-ae13-21a004bf32a9\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-44l68" Dec 08 17:43:51 crc kubenswrapper[5116]: I1208 17:43:51.240324 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/fdb4d104-2982-4fda-904d-860b430ccc30-tmpfs\") pod \"olm-operator-5cdf44d969-w5bp9\" (UID: \"fdb4d104-2982-4fda-904d-860b430ccc30\") " pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-w5bp9" Dec 08 17:43:51 crc kubenswrapper[5116]: I1208 17:43:51.241268 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-fvc8q\" (UniqueName: \"kubernetes.io/projected/ae32fe26-12f4-4893-b748-d39bf6908a5f-kube-api-access-fvc8q\") pod \"etcd-operator-69b85846b6-9xp42\" (UID: \"ae32fe26-12f4-4893-b748-d39bf6908a5f\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-9xp42" Dec 08 17:43:51 crc kubenswrapper[5116]: I1208 17:43:51.241338 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/9663d487-af7a-4d15-beb7-13122df40ab4-certs\") pod \"machine-config-server-wp5hf\" (UID: \"9663d487-af7a-4d15-beb7-13122df40ab4\") " 
pod="openshift-machine-config-operator/machine-config-server-wp5hf" Dec 08 17:43:51 crc kubenswrapper[5116]: I1208 17:43:51.242003 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/cc5709c8-d943-4d2c-bd51-bdf689fe3714-profile-collector-cert\") pod \"catalog-operator-75ff9f647d-6dz8n\" (UID: \"cc5709c8-d943-4d2c-bd51-bdf689fe3714\") " pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-6dz8n" Dec 08 17:43:51 crc kubenswrapper[5116]: I1208 17:43:51.242373 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/56b4bc0e-8c13-439e-9293-f70b35418ce0-registration-dir\") pod \"csi-hostpathplugin-kdrb8\" (UID: \"56b4bc0e-8c13-439e-9293-f70b35418ce0\") " pod="hostpath-provisioner/csi-hostpathplugin-kdrb8" Dec 08 17:43:51 crc kubenswrapper[5116]: I1208 17:43:51.242468 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/56b4bc0e-8c13-439e-9293-f70b35418ce0-plugins-dir\") pod \"csi-hostpathplugin-kdrb8\" (UID: \"56b4bc0e-8c13-439e-9293-f70b35418ce0\") " pod="hostpath-provisioner/csi-hostpathplugin-kdrb8" Dec 08 17:43:51 crc kubenswrapper[5116]: I1208 17:43:51.251160 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/fdb4d104-2982-4fda-904d-860b430ccc30-srv-cert\") pod \"olm-operator-5cdf44d969-w5bp9\" (UID: \"fdb4d104-2982-4fda-904d-860b430ccc30\") " pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-w5bp9" Dec 08 17:43:51 crc kubenswrapper[5116]: I1208 17:43:51.251754 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/dc8a8f38-928e-445a-b2d0-56c91cff7483-secret-volume\") pod 
\"collect-profiles-29420250-rrwn6\" (UID: \"dc8a8f38-928e-445a-b2d0-56c91cff7483\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29420250-rrwn6" Dec 08 17:43:51 crc kubenswrapper[5116]: I1208 17:43:51.254062 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/b1d60def-1a02-4598-801e-fc4fdfaabcf4-package-server-manager-serving-cert\") pod \"package-server-manager-77f986bd66-hzpcx\" (UID: \"b1d60def-1a02-4598-801e-fc4fdfaabcf4\") " pod="openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-hzpcx" Dec 08 17:43:51 crc kubenswrapper[5116]: I1208 17:43:51.254792 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/cc5709c8-d943-4d2c-bd51-bdf689fe3714-srv-cert\") pod \"catalog-operator-75ff9f647d-6dz8n\" (UID: \"cc5709c8-d943-4d2c-bd51-bdf689fe3714\") " pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-6dz8n" Dec 08 17:43:51 crc kubenswrapper[5116]: I1208 17:43:51.262654 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-pxtvl\" (UniqueName: \"kubernetes.io/projected/b1d60def-1a02-4598-801e-fc4fdfaabcf4-kube-api-access-pxtvl\") pod \"package-server-manager-77f986bd66-hzpcx\" (UID: \"b1d60def-1a02-4598-801e-fc4fdfaabcf4\") " pod="openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-hzpcx" Dec 08 17:43:51 crc kubenswrapper[5116]: I1208 17:43:51.292097 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b7623dbe-7780-43e2-8225-9cf0b9e83951-serving-cert\") pod \"kube-storage-version-migrator-operator-565b79b866-sqm4p\" (UID: \"b7623dbe-7780-43e2-8225-9cf0b9e83951\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-sqm4p" Dec 08 17:43:51 crc kubenswrapper[5116]: I1208 
17:43:51.292282 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/83ea28a4-865d-4cee-aaa2-7adcccfba4a2-bound-sa-token\") pod \"image-registry-66587d64c8-kt94l\" (UID: \"83ea28a4-865d-4cee-aaa2-7adcccfba4a2\") " pod="openshift-image-registry/image-registry-66587d64c8-kt94l" Dec 08 17:43:51 crc kubenswrapper[5116]: I1208 17:43:51.292378 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-sgwdl\" (UniqueName: \"kubernetes.io/projected/b7623dbe-7780-43e2-8225-9cf0b9e83951-kube-api-access-sgwdl\") pod \"kube-storage-version-migrator-operator-565b79b866-sqm4p\" (UID: \"b7623dbe-7780-43e2-8225-9cf0b9e83951\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-sqm4p" Dec 08 17:43:51 crc kubenswrapper[5116]: I1208 17:43:51.294036 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d63661bb-2db9-4a87-ae13-21a004bf32a9-serving-cert\") pod \"openshift-kube-scheduler-operator-54f497555d-44l68\" (UID: \"d63661bb-2db9-4a87-ae13-21a004bf32a9\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-44l68" Dec 08 17:43:51 crc kubenswrapper[5116]: I1208 17:43:51.294628 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-h4k28\" (UniqueName: \"kubernetes.io/projected/dc8a8f38-928e-445a-b2d0-56c91cff7483-kube-api-access-h4k28\") pod \"collect-profiles-29420250-rrwn6\" (UID: \"dc8a8f38-928e-445a-b2d0-56c91cff7483\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29420250-rrwn6" Dec 08 17:43:51 crc kubenswrapper[5116]: I1208 17:43:51.295181 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/83ea28a4-865d-4cee-aaa2-7adcccfba4a2-installation-pull-secrets\") pod 
\"image-registry-66587d64c8-kt94l\" (UID: \"83ea28a4-865d-4cee-aaa2-7adcccfba4a2\") " pod="openshift-image-registry/image-registry-66587d64c8-kt94l" Dec 08 17:43:51 crc kubenswrapper[5116]: I1208 17:43:51.299889 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/cc5709c8-d943-4d2c-bd51-bdf689fe3714-profile-collector-cert\") pod \"catalog-operator-75ff9f647d-6dz8n\" (UID: \"cc5709c8-d943-4d2c-bd51-bdf689fe3714\") " pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-6dz8n" Dec 08 17:43:51 crc kubenswrapper[5116]: I1208 17:43:51.300392 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ae32fe26-12f4-4893-b748-d39bf6908a5f-serving-cert\") pod \"etcd-operator-69b85846b6-9xp42\" (UID: \"ae32fe26-12f4-4893-b748-d39bf6908a5f\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-9xp42" Dec 08 17:43:51 crc kubenswrapper[5116]: I1208 17:43:51.300546 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/fdb4d104-2982-4fda-904d-860b430ccc30-profile-collector-cert\") pod \"olm-operator-5cdf44d969-w5bp9\" (UID: \"fdb4d104-2982-4fda-904d-860b430ccc30\") " pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-w5bp9" Dec 08 17:43:51 crc kubenswrapper[5116]: I1208 17:43:51.300881 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/83ea28a4-865d-4cee-aaa2-7adcccfba4a2-registry-tls\") pod \"image-registry-66587d64c8-kt94l\" (UID: \"83ea28a4-865d-4cee-aaa2-7adcccfba4a2\") " pod="openshift-image-registry/image-registry-66587d64c8-kt94l" Dec 08 17:43:51 crc kubenswrapper[5116]: I1208 17:43:51.301450 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-dj7lq\" (UniqueName: 
\"kubernetes.io/projected/fdb4d104-2982-4fda-904d-860b430ccc30-kube-api-access-dj7lq\") pod \"olm-operator-5cdf44d969-w5bp9\" (UID: \"fdb4d104-2982-4fda-904d-860b430ccc30\") " pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-w5bp9" Dec 08 17:43:51 crc kubenswrapper[5116]: I1208 17:43:51.301777 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/2a5179cd-4e4d-401c-af27-a30fc32e5146-proxy-tls\") pod \"machine-config-operator-67c9d58cbb-lnhtb\" (UID: \"2a5179cd-4e4d-401c-af27-a30fc32e5146\") " pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-lnhtb" Dec 08 17:43:51 crc kubenswrapper[5116]: I1208 17:43:51.315521 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-wfsdr\" (UniqueName: \"kubernetes.io/projected/83ea28a4-865d-4cee-aaa2-7adcccfba4a2-kube-api-access-wfsdr\") pod \"image-registry-66587d64c8-kt94l\" (UID: \"83ea28a4-865d-4cee-aaa2-7adcccfba4a2\") " pod="openshift-image-registry/image-registry-66587d64c8-kt94l" Dec 08 17:43:51 crc kubenswrapper[5116]: I1208 17:43:51.331018 5116 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-6dz8n" Dec 08 17:43:51 crc kubenswrapper[5116]: I1208 17:43:51.340160 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-zrdjm\" (UniqueName: \"kubernetes.io/projected/3c5d3e24-1c23-4245-bf46-d6ba11dfbd51-kube-api-access-zrdjm\") pod \"service-ca-74545575db-g68gs\" (UID: \"3c5d3e24-1c23-4245-bf46-d6ba11dfbd51\") " pod="openshift-service-ca/service-ca-74545575db-g68gs" Dec 08 17:43:51 crc kubenswrapper[5116]: I1208 17:43:51.340589 5116 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-lnhtb" Dec 08 17:43:51 crc kubenswrapper[5116]: I1208 17:43:51.344398 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 08 17:43:51 crc kubenswrapper[5116]: I1208 17:43:51.344619 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-p679p\" (UniqueName: \"kubernetes.io/projected/0deef197-8a46-46ea-a786-7e9518318396-kube-api-access-p679p\") pod \"router-default-68cf44c8b8-hv2nc\" (UID: \"0deef197-8a46-46ea-a786-7e9518318396\") " pod="openshift-ingress/router-default-68cf44c8b8-hv2nc" Dec 08 17:43:51 crc kubenswrapper[5116]: I1208 17:43:51.344681 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0deef197-8a46-46ea-a786-7e9518318396-service-ca-bundle\") pod \"router-default-68cf44c8b8-hv2nc\" (UID: \"0deef197-8a46-46ea-a786-7e9518318396\") " pod="openshift-ingress/router-default-68cf44c8b8-hv2nc" Dec 08 17:43:51 crc kubenswrapper[5116]: I1208 17:43:51.344759 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-wv8ll\" (UniqueName: \"kubernetes.io/projected/b9141db3-856a-4cae-ad18-f0eb4a53c8f8-kube-api-access-wv8ll\") pod \"control-plane-machine-set-operator-75ffdb6fcd-jcz8f\" (UID: \"b9141db3-856a-4cae-ad18-f0eb4a53c8f8\") " pod="openshift-machine-api/control-plane-machine-set-operator-75ffdb6fcd-jcz8f" Dec 08 17:43:51 crc kubenswrapper[5116]: I1208 17:43:51.344792 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: 
\"kubernetes.io/secret/9663d487-af7a-4d15-beb7-13122df40ab4-certs\") pod \"machine-config-server-wp5hf\" (UID: \"9663d487-af7a-4d15-beb7-13122df40ab4\") " pod="openshift-machine-config-operator/machine-config-server-wp5hf" Dec 08 17:43:51 crc kubenswrapper[5116]: I1208 17:43:51.344845 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/56b4bc0e-8c13-439e-9293-f70b35418ce0-registration-dir\") pod \"csi-hostpathplugin-kdrb8\" (UID: \"56b4bc0e-8c13-439e-9293-f70b35418ce0\") " pod="hostpath-provisioner/csi-hostpathplugin-kdrb8" Dec 08 17:43:51 crc kubenswrapper[5116]: I1208 17:43:51.344867 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/56b4bc0e-8c13-439e-9293-f70b35418ce0-plugins-dir\") pod \"csi-hostpathplugin-kdrb8\" (UID: \"56b4bc0e-8c13-439e-9293-f70b35418ce0\") " pod="hostpath-provisioner/csi-hostpathplugin-kdrb8" Dec 08 17:43:51 crc kubenswrapper[5116]: I1208 17:43:51.344908 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/0deef197-8a46-46ea-a786-7e9518318396-stats-auth\") pod \"router-default-68cf44c8b8-hv2nc\" (UID: \"0deef197-8a46-46ea-a786-7e9518318396\") " pod="openshift-ingress/router-default-68cf44c8b8-hv2nc" Dec 08 17:43:51 crc kubenswrapper[5116]: I1208 17:43:51.344932 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/2d2b0fc4-9619-4e70-92a9-06896ea298f4-metrics-tls\") pod \"dns-default-tlxxd\" (UID: \"2d2b0fc4-9619-4e70-92a9-06896ea298f4\") " pod="openshift-dns/dns-default-tlxxd" Dec 08 17:43:51 crc kubenswrapper[5116]: I1208 17:43:51.344988 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: 
\"kubernetes.io/secret/88c9e622-e4c0-49b0-a481-fb6e32fc0505-webhook-certs\") pod \"multus-admission-controller-69db94689b-9bt88\" (UID: \"88c9e622-e4c0-49b0-a481-fb6e32fc0505\") " pod="openshift-multus/multus-admission-controller-69db94689b-9bt88" Dec 08 17:43:51 crc kubenswrapper[5116]: I1208 17:43:51.345017 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/5016d861-0431-4e4a-bbe3-c7032eb529c7-apiservice-cert\") pod \"packageserver-7d4fc7d867-6s8th\" (UID: \"5016d861-0431-4e4a-bbe3-c7032eb529c7\") " pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-6s8th" Dec 08 17:43:51 crc kubenswrapper[5116]: I1208 17:43:51.345039 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/a34224e8-837b-4261-8b6d-3e9273996375-cert\") pod \"ingress-canary-c642w\" (UID: \"a34224e8-837b-4261-8b6d-3e9273996375\") " pod="openshift-ingress-canary/ingress-canary-c642w" Dec 08 17:43:51 crc kubenswrapper[5116]: I1208 17:43:51.345085 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-kzssk\" (UniqueName: \"kubernetes.io/projected/a34224e8-837b-4261-8b6d-3e9273996375-kube-api-access-kzssk\") pod \"ingress-canary-c642w\" (UID: \"a34224e8-837b-4261-8b6d-3e9273996375\") " pod="openshift-ingress-canary/ingress-canary-c642w" Dec 08 17:43:51 crc kubenswrapper[5116]: I1208 17:43:51.345123 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-cljhv\" (UniqueName: \"kubernetes.io/projected/2d2b0fc4-9619-4e70-92a9-06896ea298f4-kube-api-access-cljhv\") pod \"dns-default-tlxxd\" (UID: \"2d2b0fc4-9619-4e70-92a9-06896ea298f4\") " pod="openshift-dns/dns-default-tlxxd" Dec 08 17:43:51 crc kubenswrapper[5116]: E1208 17:43:51.345166 5116 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 17:43:51.845135189 +0000 UTC m=+101.642258613 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 17:43:51 crc kubenswrapper[5116]: I1208 17:43:51.345230 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-62n9l\" (UniqueName: \"kubernetes.io/projected/56b4bc0e-8c13-439e-9293-f70b35418ce0-kube-api-access-62n9l\") pod \"csi-hostpathplugin-kdrb8\" (UID: \"56b4bc0e-8c13-439e-9293-f70b35418ce0\") " pod="hostpath-provisioner/csi-hostpathplugin-kdrb8" Dec 08 17:43:51 crc kubenswrapper[5116]: I1208 17:43:51.345739 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/0deef197-8a46-46ea-a786-7e9518318396-default-certificate\") pod \"router-default-68cf44c8b8-hv2nc\" (UID: \"0deef197-8a46-46ea-a786-7e9518318396\") " pod="openshift-ingress/router-default-68cf44c8b8-hv2nc" Dec 08 17:43:51 crc kubenswrapper[5116]: I1208 17:43:51.345762 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3ae931b0-256c-4067-8c1c-a56b5fe1f5f9-config\") pod \"kube-controller-manager-operator-69d5f845f8-n9kbg\" (UID: \"3ae931b0-256c-4067-8c1c-a56b5fe1f5f9\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-n9kbg" Dec 
08 17:43:51 crc kubenswrapper[5116]: I1208 17:43:51.345785 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/5016d861-0431-4e4a-bbe3-c7032eb529c7-tmpfs\") pod \"packageserver-7d4fc7d867-6s8th\" (UID: \"5016d861-0431-4e4a-bbe3-c7032eb529c7\") " pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-6s8th" Dec 08 17:43:51 crc kubenswrapper[5116]: I1208 17:43:51.345811 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/56b4bc0e-8c13-439e-9293-f70b35418ce0-socket-dir\") pod \"csi-hostpathplugin-kdrb8\" (UID: \"56b4bc0e-8c13-439e-9293-f70b35418ce0\") " pod="hostpath-provisioner/csi-hostpathplugin-kdrb8" Dec 08 17:43:51 crc kubenswrapper[5116]: I1208 17:43:51.345839 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-kt94l\" (UID: \"83ea28a4-865d-4cee-aaa2-7adcccfba4a2\") " pod="openshift-image-registry/image-registry-66587d64c8-kt94l" Dec 08 17:43:51 crc kubenswrapper[5116]: I1208 17:43:51.345863 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/2d2b0fc4-9619-4e70-92a9-06896ea298f4-tmp-dir\") pod \"dns-default-tlxxd\" (UID: \"2d2b0fc4-9619-4e70-92a9-06896ea298f4\") " pod="openshift-dns/dns-default-tlxxd" Dec 08 17:43:51 crc kubenswrapper[5116]: I1208 17:43:51.345882 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/3ae931b0-256c-4067-8c1c-a56b5fe1f5f9-kube-api-access\") pod \"kube-controller-manager-operator-69d5f845f8-n9kbg\" (UID: \"3ae931b0-256c-4067-8c1c-a56b5fe1f5f9\") " 
pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-n9kbg" Dec 08 17:43:51 crc kubenswrapper[5116]: I1208 17:43:51.345914 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/3ae931b0-256c-4067-8c1c-a56b5fe1f5f9-tmp-dir\") pod \"kube-controller-manager-operator-69d5f845f8-n9kbg\" (UID: \"3ae931b0-256c-4067-8c1c-a56b5fe1f5f9\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-n9kbg" Dec 08 17:43:51 crc kubenswrapper[5116]: I1208 17:43:51.345947 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/0deef197-8a46-46ea-a786-7e9518318396-metrics-certs\") pod \"router-default-68cf44c8b8-hv2nc\" (UID: \"0deef197-8a46-46ea-a786-7e9518318396\") " pod="openshift-ingress/router-default-68cf44c8b8-hv2nc" Dec 08 17:43:51 crc kubenswrapper[5116]: I1208 17:43:51.345980 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-jslv6\" (UniqueName: \"kubernetes.io/projected/9663d487-af7a-4d15-beb7-13122df40ab4-kube-api-access-jslv6\") pod \"machine-config-server-wp5hf\" (UID: \"9663d487-af7a-4d15-beb7-13122df40ab4\") " pod="openshift-machine-config-operator/machine-config-server-wp5hf" Dec 08 17:43:51 crc kubenswrapper[5116]: I1208 17:43:51.346002 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-2qn2c\" (UniqueName: \"kubernetes.io/projected/88c9e622-e4c0-49b0-a481-fb6e32fc0505-kube-api-access-2qn2c\") pod \"multus-admission-controller-69db94689b-9bt88\" (UID: \"88c9e622-e4c0-49b0-a481-fb6e32fc0505\") " pod="openshift-multus/multus-admission-controller-69db94689b-9bt88" Dec 08 17:43:51 crc kubenswrapper[5116]: I1208 17:43:51.346037 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"mountpoint-dir\" (UniqueName: 
\"kubernetes.io/host-path/56b4bc0e-8c13-439e-9293-f70b35418ce0-mountpoint-dir\") pod \"csi-hostpathplugin-kdrb8\" (UID: \"56b4bc0e-8c13-439e-9293-f70b35418ce0\") " pod="hostpath-provisioner/csi-hostpathplugin-kdrb8" Dec 08 17:43:51 crc kubenswrapper[5116]: I1208 17:43:51.346062 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/56b4bc0e-8c13-439e-9293-f70b35418ce0-csi-data-dir\") pod \"csi-hostpathplugin-kdrb8\" (UID: \"56b4bc0e-8c13-439e-9293-f70b35418ce0\") " pod="hostpath-provisioner/csi-hostpathplugin-kdrb8" Dec 08 17:43:51 crc kubenswrapper[5116]: I1208 17:43:51.346107 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/9663d487-af7a-4d15-beb7-13122df40ab4-node-bootstrap-token\") pod \"machine-config-server-wp5hf\" (UID: \"9663d487-af7a-4d15-beb7-13122df40ab4\") " pod="openshift-machine-config-operator/machine-config-server-wp5hf" Dec 08 17:43:51 crc kubenswrapper[5116]: I1208 17:43:51.346127 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3ae931b0-256c-4067-8c1c-a56b5fe1f5f9-serving-cert\") pod \"kube-controller-manager-operator-69d5f845f8-n9kbg\" (UID: \"3ae931b0-256c-4067-8c1c-a56b5fe1f5f9\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-n9kbg" Dec 08 17:43:51 crc kubenswrapper[5116]: I1208 17:43:51.346161 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/5016d861-0431-4e4a-bbe3-c7032eb529c7-webhook-cert\") pod \"packageserver-7d4fc7d867-6s8th\" (UID: \"5016d861-0431-4e4a-bbe3-c7032eb529c7\") " pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-6s8th" Dec 08 17:43:51 crc kubenswrapper[5116]: I1208 17:43:51.346184 5116 reconciler_common.go:224] 
"operationExecutor.MountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/b9141db3-856a-4cae-ad18-f0eb4a53c8f8-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-75ffdb6fcd-jcz8f\" (UID: \"b9141db3-856a-4cae-ad18-f0eb4a53c8f8\") " pod="openshift-machine-api/control-plane-machine-set-operator-75ffdb6fcd-jcz8f" Dec 08 17:43:51 crc kubenswrapper[5116]: I1208 17:43:51.346295 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2d2b0fc4-9619-4e70-92a9-06896ea298f4-config-volume\") pod \"dns-default-tlxxd\" (UID: \"2d2b0fc4-9619-4e70-92a9-06896ea298f4\") " pod="openshift-dns/dns-default-tlxxd" Dec 08 17:43:51 crc kubenswrapper[5116]: I1208 17:43:51.346332 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-fxrr9\" (UniqueName: \"kubernetes.io/projected/5016d861-0431-4e4a-bbe3-c7032eb529c7-kube-api-access-fxrr9\") pod \"packageserver-7d4fc7d867-6s8th\" (UID: \"5016d861-0431-4e4a-bbe3-c7032eb529c7\") " pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-6s8th" Dec 08 17:43:51 crc kubenswrapper[5116]: I1208 17:43:51.347211 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0deef197-8a46-46ea-a786-7e9518318396-service-ca-bundle\") pod \"router-default-68cf44c8b8-hv2nc\" (UID: \"0deef197-8a46-46ea-a786-7e9518318396\") " pod="openshift-ingress/router-default-68cf44c8b8-hv2nc" Dec 08 17:43:51 crc kubenswrapper[5116]: I1208 17:43:51.348831 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-8tmf4\" (UniqueName: \"kubernetes.io/projected/64540db1-3951-4714-a14a-542d29e00e3c-kube-api-access-8tmf4\") pod \"service-ca-operator-5b9c976747-vcg8j\" (UID: \"64540db1-3951-4714-a14a-542d29e00e3c\") " 
pod="openshift-service-ca-operator/service-ca-operator-5b9c976747-vcg8j" Dec 08 17:43:51 crc kubenswrapper[5116]: I1208 17:43:51.349606 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3ae931b0-256c-4067-8c1c-a56b5fe1f5f9-config\") pod \"kube-controller-manager-operator-69d5f845f8-n9kbg\" (UID: \"3ae931b0-256c-4067-8c1c-a56b5fe1f5f9\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-n9kbg" Dec 08 17:43:51 crc kubenswrapper[5116]: I1208 17:43:51.350095 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/5016d861-0431-4e4a-bbe3-c7032eb529c7-tmpfs\") pod \"packageserver-7d4fc7d867-6s8th\" (UID: \"5016d861-0431-4e4a-bbe3-c7032eb529c7\") " pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-6s8th" Dec 08 17:43:51 crc kubenswrapper[5116]: I1208 17:43:51.350397 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/56b4bc0e-8c13-439e-9293-f70b35418ce0-socket-dir\") pod \"csi-hostpathplugin-kdrb8\" (UID: \"56b4bc0e-8c13-439e-9293-f70b35418ce0\") " pod="hostpath-provisioner/csi-hostpathplugin-kdrb8" Dec 08 17:43:51 crc kubenswrapper[5116]: E1208 17:43:51.350663 5116 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 17:43:51.850647112 +0000 UTC m=+101.647770346 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-kt94l" (UID: "83ea28a4-865d-4cee-aaa2-7adcccfba4a2") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 17:43:51 crc kubenswrapper[5116]: I1208 17:43:51.351278 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/2d2b0fc4-9619-4e70-92a9-06896ea298f4-tmp-dir\") pod \"dns-default-tlxxd\" (UID: \"2d2b0fc4-9619-4e70-92a9-06896ea298f4\") " pod="openshift-dns/dns-default-tlxxd" Dec 08 17:43:51 crc kubenswrapper[5116]: I1208 17:43:51.351595 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/3ae931b0-256c-4067-8c1c-a56b5fe1f5f9-tmp-dir\") pod \"kube-controller-manager-operator-69d5f845f8-n9kbg\" (UID: \"3ae931b0-256c-4067-8c1c-a56b5fe1f5f9\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-n9kbg" Dec 08 17:43:51 crc kubenswrapper[5116]: I1208 17:43:51.355940 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/56b4bc0e-8c13-439e-9293-f70b35418ce0-mountpoint-dir\") pod \"csi-hostpathplugin-kdrb8\" (UID: \"56b4bc0e-8c13-439e-9293-f70b35418ce0\") " pod="hostpath-provisioner/csi-hostpathplugin-kdrb8" Dec 08 17:43:51 crc kubenswrapper[5116]: I1208 17:43:51.356297 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/56b4bc0e-8c13-439e-9293-f70b35418ce0-plugins-dir\") pod \"csi-hostpathplugin-kdrb8\" (UID: \"56b4bc0e-8c13-439e-9293-f70b35418ce0\") " 
pod="hostpath-provisioner/csi-hostpathplugin-kdrb8" Dec 08 17:43:51 crc kubenswrapper[5116]: I1208 17:43:51.356386 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/56b4bc0e-8c13-439e-9293-f70b35418ce0-registration-dir\") pod \"csi-hostpathplugin-kdrb8\" (UID: \"56b4bc0e-8c13-439e-9293-f70b35418ce0\") " pod="hostpath-provisioner/csi-hostpathplugin-kdrb8" Dec 08 17:43:51 crc kubenswrapper[5116]: I1208 17:43:51.358503 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/56b4bc0e-8c13-439e-9293-f70b35418ce0-csi-data-dir\") pod \"csi-hostpathplugin-kdrb8\" (UID: \"56b4bc0e-8c13-439e-9293-f70b35418ce0\") " pod="hostpath-provisioner/csi-hostpathplugin-kdrb8" Dec 08 17:43:51 crc kubenswrapper[5116]: I1208 17:43:51.359481 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/9663d487-af7a-4d15-beb7-13122df40ab4-node-bootstrap-token\") pod \"machine-config-server-wp5hf\" (UID: \"9663d487-af7a-4d15-beb7-13122df40ab4\") " pod="openshift-machine-config-operator/machine-config-server-wp5hf" Dec 08 17:43:51 crc kubenswrapper[5116]: I1208 17:43:51.378221 5116 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-866fcbc849-gk2ww" Dec 08 17:43:51 crc kubenswrapper[5116]: I1208 17:43:51.378451 5116 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-hzpcx" Dec 08 17:43:51 crc kubenswrapper[5116]: I1208 17:43:51.378972 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2d2b0fc4-9619-4e70-92a9-06896ea298f4-config-volume\") pod \"dns-default-tlxxd\" (UID: \"2d2b0fc4-9619-4e70-92a9-06896ea298f4\") " pod="openshift-dns/dns-default-tlxxd" Dec 08 17:43:51 crc kubenswrapper[5116]: I1208 17:43:51.379297 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/0deef197-8a46-46ea-a786-7e9518318396-metrics-certs\") pod \"router-default-68cf44c8b8-hv2nc\" (UID: \"0deef197-8a46-46ea-a786-7e9518318396\") " pod="openshift-ingress/router-default-68cf44c8b8-hv2nc" Dec 08 17:43:51 crc kubenswrapper[5116]: I1208 17:43:51.385307 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3ae931b0-256c-4067-8c1c-a56b5fe1f5f9-serving-cert\") pod \"kube-controller-manager-operator-69d5f845f8-n9kbg\" (UID: \"3ae931b0-256c-4067-8c1c-a56b5fe1f5f9\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-n9kbg" Dec 08 17:43:51 crc kubenswrapper[5116]: I1208 17:43:51.385557 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/0deef197-8a46-46ea-a786-7e9518318396-default-certificate\") pod \"router-default-68cf44c8b8-hv2nc\" (UID: \"0deef197-8a46-46ea-a786-7e9518318396\") " pod="openshift-ingress/router-default-68cf44c8b8-hv2nc" Dec 08 17:43:51 crc kubenswrapper[5116]: I1208 17:43:51.386015 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/2d2b0fc4-9619-4e70-92a9-06896ea298f4-metrics-tls\") pod \"dns-default-tlxxd\" (UID: 
\"2d2b0fc4-9619-4e70-92a9-06896ea298f4\") " pod="openshift-dns/dns-default-tlxxd" Dec 08 17:43:51 crc kubenswrapper[5116]: I1208 17:43:51.390515 5116 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-w5bp9" Dec 08 17:43:51 crc kubenswrapper[5116]: I1208 17:43:51.400571 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/0deef197-8a46-46ea-a786-7e9518318396-stats-auth\") pod \"router-default-68cf44c8b8-hv2nc\" (UID: \"0deef197-8a46-46ea-a786-7e9518318396\") " pod="openshift-ingress/router-default-68cf44c8b8-hv2nc" Dec 08 17:43:51 crc kubenswrapper[5116]: I1208 17:43:51.400946 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/5016d861-0431-4e4a-bbe3-c7032eb529c7-apiservice-cert\") pod \"packageserver-7d4fc7d867-6s8th\" (UID: \"5016d861-0431-4e4a-bbe3-c7032eb529c7\") " pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-6s8th" Dec 08 17:43:51 crc kubenswrapper[5116]: I1208 17:43:51.402672 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-fvc8q\" (UniqueName: \"kubernetes.io/projected/ae32fe26-12f4-4893-b748-d39bf6908a5f-kube-api-access-fvc8q\") pod \"etcd-operator-69b85846b6-9xp42\" (UID: \"ae32fe26-12f4-4893-b748-d39bf6908a5f\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-9xp42" Dec 08 17:43:51 crc kubenswrapper[5116]: I1208 17:43:51.402694 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/5016d861-0431-4e4a-bbe3-c7032eb529c7-webhook-cert\") pod \"packageserver-7d4fc7d867-6s8th\" (UID: \"5016d861-0431-4e4a-bbe3-c7032eb529c7\") " pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-6s8th" Dec 08 17:43:51 crc kubenswrapper[5116]: I1208 17:43:51.402973 5116 
operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d63661bb-2db9-4a87-ae13-21a004bf32a9-kube-api-access\") pod \"openshift-kube-scheduler-operator-54f497555d-44l68\" (UID: \"d63661bb-2db9-4a87-ae13-21a004bf32a9\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-44l68" Dec 08 17:43:51 crc kubenswrapper[5116]: I1208 17:43:51.403588 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"certs\" (UniqueName: \"kubernetes.io/secret/9663d487-af7a-4d15-beb7-13122df40ab4-certs\") pod \"machine-config-server-wp5hf\" (UID: \"9663d487-af7a-4d15-beb7-13122df40ab4\") " pod="openshift-machine-config-operator/machine-config-server-wp5hf" Dec 08 17:43:51 crc kubenswrapper[5116]: I1208 17:43:51.406650 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/b9141db3-856a-4cae-ad18-f0eb4a53c8f8-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-75ffdb6fcd-jcz8f\" (UID: \"b9141db3-856a-4cae-ad18-f0eb4a53c8f8\") " pod="openshift-machine-api/control-plane-machine-set-operator-75ffdb6fcd-jcz8f" Dec 08 17:43:51 crc kubenswrapper[5116]: I1208 17:43:51.407690 5116 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-sqm4p" Dec 08 17:43:51 crc kubenswrapper[5116]: I1208 17:43:51.414714 5116 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-5b9c976747-vcg8j" Dec 08 17:43:51 crc kubenswrapper[5116]: I1208 17:43:51.419279 5116 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-44l68" Dec 08 17:43:51 crc kubenswrapper[5116]: I1208 17:43:51.436361 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-6rbxq" event={"ID":"79b65775-2e2c-4bad-bf4b-b8c4893e6463","Type":"ContainerStarted","Data":"1ed8011d8649d28a5938695d14efe2dde2161caac0b102eacd5d6c3fce7a4f62"} Dec 08 17:43:51 crc kubenswrapper[5116]: I1208 17:43:51.436765 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-6rbxq" event={"ID":"79b65775-2e2c-4bad-bf4b-b8c4893e6463","Type":"ContainerStarted","Data":"55224adb580746997d61cbe25800234ec03385f79396ce68f778c675af712bf1"} Dec 08 17:43:51 crc kubenswrapper[5116]: I1208 17:43:51.438446 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-54c688565-8h884" event={"ID":"e71f0431-13f2-46f2-8673-f26a5c9d0cf6","Type":"ContainerStarted","Data":"5e85add8cc4b41ca4d32547f9fd09f58f6a21dadfd608954e2bfa53218e7bbc3"} Dec 08 17:43:51 crc kubenswrapper[5116]: I1208 17:43:51.439111 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-9ddfb9f55-b2n2w" event={"ID":"4f7ef3d6-0bc3-4566-8735-c4a2389d4c84","Type":"ContainerStarted","Data":"95007f2a7e4c909a064350e0dd007c439e93e93541a9820115a01d10349b20f3"} Dec 08 17:43:51 crc kubenswrapper[5116]: I1208 17:43:51.439723 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-66458b6674-qnwj9" event={"ID":"b8dbe374-c944-4fd4-bb80-8dc26c3e5d24","Type":"ContainerStarted","Data":"38830c802e5168f342dea310cd88c936084147f269257e40cd97bc3be840aa81"} Dec 08 17:43:51 crc kubenswrapper[5116]: I1208 17:43:51.441133 5116 generic.go:358] "Generic (PLEG): container finished" podID="71d475ea-b97a-489a-8c80-1a30614dccb5" 
containerID="3b1997ffb5d1a13c1e3154da41def0a3c96d8c2bc0f86803fab0b4f57a075737" exitCode=0 Dec 08 17:43:51 crc kubenswrapper[5116]: I1208 17:43:51.441201 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-5777786469-86sn8" event={"ID":"71d475ea-b97a-489a-8c80-1a30614dccb5","Type":"ContainerDied","Data":"3b1997ffb5d1a13c1e3154da41def0a3c96d8c2bc0f86803fab0b4f57a075737"} Dec 08 17:43:51 crc kubenswrapper[5116]: I1208 17:43:51.442476 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-5777786469-86sn8" event={"ID":"71d475ea-b97a-489a-8c80-1a30614dccb5","Type":"ContainerStarted","Data":"b83f0d4005f4874d93ec6d3aea11a3b51e9f3115f442fac3b4e7132d96ab1f15"} Dec 08 17:43:51 crc kubenswrapper[5116]: I1208 17:43:51.442488 5116 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29420250-rrwn6" Dec 08 17:43:51 crc kubenswrapper[5116]: I1208 17:43:51.443587 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-65b6cccf98-99grc" event={"ID":"4fd57cd5-0d45-4b70-87a2-0c73d9c9b61e","Type":"ContainerStarted","Data":"461e3685cebf09ffd50ea640ce50fd172f8f5b1f055cdad856f359cb7f4bb7e2"} Dec 08 17:43:51 crc kubenswrapper[5116]: I1208 17:43:51.444581 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-755bb95488-dx5gf" event={"ID":"3a30015c-60d9-4474-8417-731fd67ea187","Type":"ContainerStarted","Data":"5798ee3b0b42d7a67e659cb5fe15e04aabd9ca780b24df672df63a6cb08e5703"} Dec 08 17:43:51 crc kubenswrapper[5116]: I1208 17:43:51.444610 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-755bb95488-dx5gf" 
event={"ID":"3a30015c-60d9-4474-8417-731fd67ea187","Type":"ContainerStarted","Data":"588f56cdab85bf527ee0dd56f22e487ec098bce1faf5b7b64095ed0cbb58fbe0"} Dec 08 17:43:51 crc kubenswrapper[5116]: I1208 17:43:51.445164 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/cni-sysctl-allowlist-ds-qsgps" event={"ID":"d73c8661-d51c-4d6e-a981-e186a3fc1964","Type":"ContainerStarted","Data":"253ca5314a7879fb108fb86e20db89f6820fff3ec17fb312830683327ef57eec"} Dec 08 17:43:51 crc kubenswrapper[5116]: I1208 17:43:51.445756 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-67c89758df-wbzsx" event={"ID":"d4f231ba-ace1-4242-a8cb-04e2904f95e9","Type":"ContainerStarted","Data":"c5c6231c661342818b0becaf1d49aecf94cd7d411c2ee24f40d1e19f8d0a52a2"} Dec 08 17:43:51 crc kubenswrapper[5116]: I1208 17:43:51.446395 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-64d44f6ddf-l4b2c" event={"ID":"eaf2ae84-8492-41c0-b678-ab302371258a","Type":"ContainerStarted","Data":"242e57b2711229aec46e4c373dc011b4a69f6e1284d364f1f3132fb4254d2402"} Dec 08 17:43:51 crc kubenswrapper[5116]: I1208 17:43:51.446847 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 08 17:43:51 crc kubenswrapper[5116]: E1208 17:43:51.447723 5116 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 17:43:51.947704132 +0000 UTC m=+101.744827366 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 17:43:51 crc kubenswrapper[5116]: I1208 17:43:51.448206 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-7f5c659b84-x825q" event={"ID":"a1c39b6e-d778-4a0e-aa34-5a4765f9c4fc","Type":"ContainerStarted","Data":"8eb75417bdf517b0b6a2d01ea2c83340057a6034f0f65cc2904b3ab9838d802a"} Dec 08 17:43:51 crc kubenswrapper[5116]: I1208 17:43:51.448232 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-7f5c659b84-x825q" event={"ID":"a1c39b6e-d778-4a0e-aa34-5a4765f9c4fc","Type":"ContainerStarted","Data":"168404adeab5f591496b9c3892ccaa6722d66aa51907157fc67ca096b0a532af"} Dec 08 17:43:51 crc kubenswrapper[5116]: I1208 17:43:51.449003 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-8596bd845d-nlzmf" event={"ID":"0267ec2e-8f60-4739-aae7-2a133c6f2809","Type":"ContainerStarted","Data":"4456034d8c19b20522d6509a31fedb348c272d2058f6617320716a4520448941"} Dec 08 17:43:51 crc kubenswrapper[5116]: I1208 17:43:51.450185 5116 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-service-ca/service-ca-74545575db-g68gs" Dec 08 17:43:51 crc kubenswrapper[5116]: I1208 17:43:51.503296 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/a34224e8-837b-4261-8b6d-3e9273996375-cert\") pod \"ingress-canary-c642w\" (UID: \"a34224e8-837b-4261-8b6d-3e9273996375\") " pod="openshift-ingress-canary/ingress-canary-c642w" Dec 08 17:43:51 crc kubenswrapper[5116]: I1208 17:43:51.504466 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-cljhv\" (UniqueName: \"kubernetes.io/projected/2d2b0fc4-9619-4e70-92a9-06896ea298f4-kube-api-access-cljhv\") pod \"dns-default-tlxxd\" (UID: \"2d2b0fc4-9619-4e70-92a9-06896ea298f4\") " pod="openshift-dns/dns-default-tlxxd" Dec 08 17:43:51 crc kubenswrapper[5116]: I1208 17:43:51.504498 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/88c9e622-e4c0-49b0-a481-fb6e32fc0505-webhook-certs\") pod \"multus-admission-controller-69db94689b-9bt88\" (UID: \"88c9e622-e4c0-49b0-a481-fb6e32fc0505\") " pod="openshift-multus/multus-admission-controller-69db94689b-9bt88" Dec 08 17:43:51 crc kubenswrapper[5116]: I1208 17:43:51.506846 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-62n9l\" (UniqueName: \"kubernetes.io/projected/56b4bc0e-8c13-439e-9293-f70b35418ce0-kube-api-access-62n9l\") pod \"csi-hostpathplugin-kdrb8\" (UID: \"56b4bc0e-8c13-439e-9293-f70b35418ce0\") " pod="hostpath-provisioner/csi-hostpathplugin-kdrb8" Dec 08 17:43:51 crc kubenswrapper[5116]: I1208 17:43:51.506894 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-wv8ll\" (UniqueName: \"kubernetes.io/projected/b9141db3-856a-4cae-ad18-f0eb4a53c8f8-kube-api-access-wv8ll\") pod \"control-plane-machine-set-operator-75ffdb6fcd-jcz8f\" (UID: \"b9141db3-856a-4cae-ad18-f0eb4a53c8f8\") " 
pod="openshift-machine-api/control-plane-machine-set-operator-75ffdb6fcd-jcz8f" Dec 08 17:43:51 crc kubenswrapper[5116]: I1208 17:43:51.514654 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-p679p\" (UniqueName: \"kubernetes.io/projected/0deef197-8a46-46ea-a786-7e9518318396-kube-api-access-p679p\") pod \"router-default-68cf44c8b8-hv2nc\" (UID: \"0deef197-8a46-46ea-a786-7e9518318396\") " pod="openshift-ingress/router-default-68cf44c8b8-hv2nc" Dec 08 17:43:51 crc kubenswrapper[5116]: I1208 17:43:51.518050 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-jslv6\" (UniqueName: \"kubernetes.io/projected/9663d487-af7a-4d15-beb7-13122df40ab4-kube-api-access-jslv6\") pod \"machine-config-server-wp5hf\" (UID: \"9663d487-af7a-4d15-beb7-13122df40ab4\") " pod="openshift-machine-config-operator/machine-config-server-wp5hf" Dec 08 17:43:51 crc kubenswrapper[5116]: I1208 17:43:51.518070 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-fxrr9\" (UniqueName: \"kubernetes.io/projected/5016d861-0431-4e4a-bbe3-c7032eb529c7-kube-api-access-fxrr9\") pod \"packageserver-7d4fc7d867-6s8th\" (UID: \"5016d861-0431-4e4a-bbe3-c7032eb529c7\") " pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-6s8th" Dec 08 17:43:51 crc kubenswrapper[5116]: I1208 17:43:51.527924 5116 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd-operator/etcd-operator-69b85846b6-9xp42" Dec 08 17:43:51 crc kubenswrapper[5116]: I1208 17:43:51.531047 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/3ae931b0-256c-4067-8c1c-a56b5fe1f5f9-kube-api-access\") pod \"kube-controller-manager-operator-69d5f845f8-n9kbg\" (UID: \"3ae931b0-256c-4067-8c1c-a56b5fe1f5f9\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-n9kbg" Dec 08 17:43:51 crc kubenswrapper[5116]: I1208 17:43:51.548591 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-2qn2c\" (UniqueName: \"kubernetes.io/projected/88c9e622-e4c0-49b0-a481-fb6e32fc0505-kube-api-access-2qn2c\") pod \"multus-admission-controller-69db94689b-9bt88\" (UID: \"88c9e622-e4c0-49b0-a481-fb6e32fc0505\") " pod="openshift-multus/multus-admission-controller-69db94689b-9bt88" Dec 08 17:43:51 crc kubenswrapper[5116]: I1208 17:43:51.554900 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-kt94l\" (UID: \"83ea28a4-865d-4cee-aaa2-7adcccfba4a2\") " pod="openshift-image-registry/image-registry-66587d64c8-kt94l" Dec 08 17:43:51 crc kubenswrapper[5116]: E1208 17:43:51.555648 5116 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 17:43:52.055617834 +0000 UTC m=+101.852741228 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-kt94l" (UID: "83ea28a4-865d-4cee-aaa2-7adcccfba4a2") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 17:43:51 crc kubenswrapper[5116]: I1208 17:43:51.621494 5116 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-68cf44c8b8-hv2nc" Dec 08 17:43:51 crc kubenswrapper[5116]: I1208 17:43:51.654919 5116 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-69db94689b-9bt88" Dec 08 17:43:51 crc kubenswrapper[5116]: I1208 17:43:51.656857 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 08 17:43:51 crc kubenswrapper[5116]: E1208 17:43:51.657609 5116 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 17:43:52.157586681 +0000 UTC m=+101.954709915 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 17:43:51 crc kubenswrapper[5116]: I1208 17:43:51.672959 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-kzssk\" (UniqueName: \"kubernetes.io/projected/a34224e8-837b-4261-8b6d-3e9273996375-kube-api-access-kzssk\") pod \"ingress-canary-c642w\" (UID: \"a34224e8-837b-4261-8b6d-3e9273996375\") " pod="openshift-ingress-canary/ingress-canary-c642w" Dec 08 17:43:51 crc kubenswrapper[5116]: I1208 17:43:51.672980 5116 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-n9kbg" Dec 08 17:43:51 crc kubenswrapper[5116]: I1208 17:43:51.765632 5116 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-75ffdb6fcd-jcz8f" Dec 08 17:43:51 crc kubenswrapper[5116]: I1208 17:43:51.766282 5116 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-wp5hf" Dec 08 17:43:51 crc kubenswrapper[5116]: I1208 17:43:51.766321 5116 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-c642w" Dec 08 17:43:51 crc kubenswrapper[5116]: I1208 17:43:51.766510 5116 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-6s8th" Dec 08 17:43:51 crc kubenswrapper[5116]: I1208 17:43:51.769835 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-kt94l\" (UID: \"83ea28a4-865d-4cee-aaa2-7adcccfba4a2\") " pod="openshift-image-registry/image-registry-66587d64c8-kt94l" Dec 08 17:43:51 crc kubenswrapper[5116]: E1208 17:43:51.770224 5116 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 17:43:52.270205896 +0000 UTC m=+102.067329130 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-kt94l" (UID: "83ea28a4-865d-4cee-aaa2-7adcccfba4a2") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 17:43:51 crc kubenswrapper[5116]: I1208 17:43:51.847801 5116 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-kdrb8" Dec 08 17:43:51 crc kubenswrapper[5116]: I1208 17:43:51.852501 5116 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/dns-default-tlxxd" Dec 08 17:43:51 crc kubenswrapper[5116]: I1208 17:43:51.873211 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 08 17:43:51 crc kubenswrapper[5116]: E1208 17:43:51.873875 5116 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 17:43:52.373845366 +0000 UTC m=+102.170968600 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 17:43:51 crc kubenswrapper[5116]: I1208 17:43:51.975583 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-kt94l\" (UID: \"83ea28a4-865d-4cee-aaa2-7adcccfba4a2\") " pod="openshift-image-registry/image-registry-66587d64c8-kt94l" Dec 08 17:43:51 crc kubenswrapper[5116]: E1208 17:43:51.976527 5116 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" 
failed. No retries permitted until 2025-12-08 17:43:52.476497392 +0000 UTC m=+102.273620636 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-kt94l" (UID: "83ea28a4-865d-4cee-aaa2-7adcccfba4a2") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 17:43:52 crc kubenswrapper[5116]: I1208 17:43:52.085061 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 08 17:43:52 crc kubenswrapper[5116]: E1208 17:43:52.085424 5116 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 17:43:52.58540918 +0000 UTC m=+102.382532404 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 17:43:52 crc kubenswrapper[5116]: I1208 17:43:52.120835 5116 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-799b87ffcd-wf75r"] Dec 08 17:43:52 crc kubenswrapper[5116]: I1208 17:43:52.167867 5116 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-547dbd544d-w2582"] Dec 08 17:43:52 crc kubenswrapper[5116]: I1208 17:43:52.188870 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-kt94l\" (UID: \"83ea28a4-865d-4cee-aaa2-7adcccfba4a2\") " pod="openshift-image-registry/image-registry-66587d64c8-kt94l" Dec 08 17:43:52 crc kubenswrapper[5116]: E1208 17:43:52.189439 5116 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 17:43:52.689425791 +0000 UTC m=+102.486549025 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-kt94l" (UID: "83ea28a4-865d-4cee-aaa2-7adcccfba4a2") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 17:43:52 crc kubenswrapper[5116]: I1208 17:43:52.424708 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 08 17:43:52 crc kubenswrapper[5116]: E1208 17:43:52.430060 5116 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 17:43:52.926205551 +0000 UTC m=+102.723328785 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 17:43:52 crc kubenswrapper[5116]: I1208 17:43:52.430380 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-kt94l\" (UID: \"83ea28a4-865d-4cee-aaa2-7adcccfba4a2\") " pod="openshift-image-registry/image-registry-66587d64c8-kt94l" Dec 08 17:43:52 crc kubenswrapper[5116]: E1208 17:43:52.430810 5116 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 17:43:52.930795041 +0000 UTC m=+102.727918275 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-kt94l" (UID: "83ea28a4-865d-4cee-aaa2-7adcccfba4a2") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 17:43:52 crc kubenswrapper[5116]: I1208 17:43:52.472821 5116 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-86c45576b9-llz2b"] Dec 08 17:43:52 crc kubenswrapper[5116]: I1208 17:43:52.533265 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 08 17:43:52 crc kubenswrapper[5116]: E1208 17:43:52.534424 5116 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 17:43:53.0343665 +0000 UTC m=+102.831489734 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 08 17:43:52 crc kubenswrapper[5116]: I1208 17:43:52.605500 5116 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-zm79w"]
Dec 08 17:43:52 crc kubenswrapper[5116]: I1208 17:43:52.626307 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-kt94l\" (UID: \"83ea28a4-865d-4cee-aaa2-7adcccfba4a2\") " pod="openshift-image-registry/image-registry-66587d64c8-kt94l"
Dec 08 17:43:52 crc kubenswrapper[5116]: E1208 17:43:52.627528 5116 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 17:43:53.127510758 +0000 UTC m=+102.924633992 (durationBeforeRetry 500ms).
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-kt94l" (UID: "83ea28a4-865d-4cee-aaa2-7adcccfba4a2") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 08 17:43:52 crc kubenswrapper[5116]: I1208 17:43:52.728817 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Dec 08 17:43:52 crc kubenswrapper[5116]: E1208 17:43:52.729225 5116 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 17:43:53.229207707 +0000 UTC m=+103.026330941 (durationBeforeRetry 500ms).
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 08 17:43:52 crc kubenswrapper[5116]: I1208 17:43:52.831537 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-kt94l\" (UID: \"83ea28a4-865d-4cee-aaa2-7adcccfba4a2\") " pod="openshift-image-registry/image-registry-66587d64c8-kt94l"
Dec 08 17:43:52 crc kubenswrapper[5116]: E1208 17:43:52.832008 5116 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 17:43:53.331988486 +0000 UTC m=+103.129111710 (durationBeforeRetry 500ms).
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-kt94l" (UID: "83ea28a4-865d-4cee-aaa2-7adcccfba4a2") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 08 17:43:52 crc kubenswrapper[5116]: I1208 17:43:52.947082 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Dec 08 17:43:52 crc kubenswrapper[5116]: E1208 17:43:52.947595 5116 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 17:43:53.447555877 +0000 UTC m=+103.244679111 (durationBeforeRetry 500ms).
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 08 17:43:53 crc kubenswrapper[5116]: I1208 17:43:53.061912 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-kt94l\" (UID: \"83ea28a4-865d-4cee-aaa2-7adcccfba4a2\") " pod="openshift-image-registry/image-registry-66587d64c8-kt94l"
Dec 08 17:43:53 crc kubenswrapper[5116]: E1208 17:43:53.062422 5116 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 17:43:53.5623964 +0000 UTC m=+103.359519644 (durationBeforeRetry 500ms).
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-kt94l" (UID: "83ea28a4-865d-4cee-aaa2-7adcccfba4a2") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 08 17:43:53 crc kubenswrapper[5116]: I1208 17:43:53.121421 5116 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication-operator/authentication-operator-7f5c659b84-x825q" podStartSLOduration=82.121403098 podStartE2EDuration="1m22.121403098s" podCreationTimestamp="2025-12-08 17:42:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-08 17:43:53.113928953 +0000 UTC m=+102.911052187" watchObservedRunningTime="2025-12-08 17:43:53.121403098 +0000 UTC m=+102.918526332"
Dec 08 17:43:53 crc kubenswrapper[5116]: I1208 17:43:53.163704 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Dec 08 17:43:53 crc kubenswrapper[5116]: E1208 17:43:53.164154 5116 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 17:43:53.664099121 +0000 UTC m=+103.461222365 (durationBeforeRetry 500ms).
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 08 17:43:53 crc kubenswrapper[5116]: I1208 17:43:53.164595 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-kt94l\" (UID: \"83ea28a4-865d-4cee-aaa2-7adcccfba4a2\") " pod="openshift-image-registry/image-registry-66587d64c8-kt94l"
Dec 08 17:43:53 crc kubenswrapper[5116]: E1208 17:43:53.165295 5116 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 17:43:53.665281832 +0000 UTC m=+103.462405066 (durationBeforeRetry 500ms).
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-kt94l" (UID: "83ea28a4-865d-4cee-aaa2-7adcccfba4a2") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 08 17:43:53 crc kubenswrapper[5116]: I1208 17:43:53.267167 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Dec 08 17:43:53 crc kubenswrapper[5116]: E1208 17:43:53.267732 5116 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 17:43:53.767648169 +0000 UTC m=+103.564771423 (durationBeforeRetry 500ms).
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 08 17:43:53 crc kubenswrapper[5116]: I1208 17:43:53.267921 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-kt94l\" (UID: \"83ea28a4-865d-4cee-aaa2-7adcccfba4a2\") " pod="openshift-image-registry/image-registry-66587d64c8-kt94l"
Dec 08 17:43:53 crc kubenswrapper[5116]: E1208 17:43:53.268761 5116 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 17:43:53.768730308 +0000 UTC m=+103.565853542 (durationBeforeRetry 500ms).
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-kt94l" (UID: "83ea28a4-865d-4cee-aaa2-7adcccfba4a2") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 08 17:43:53 crc kubenswrapper[5116]: I1208 17:43:53.375267 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Dec 08 17:43:53 crc kubenswrapper[5116]: E1208 17:43:53.376193 5116 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 17:43:53.876166327 +0000 UTC m=+103.673289561 (durationBeforeRetry 500ms).
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 08 17:43:53 crc kubenswrapper[5116]: I1208 17:43:53.481377 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-kt94l\" (UID: \"83ea28a4-865d-4cee-aaa2-7adcccfba4a2\") " pod="openshift-image-registry/image-registry-66587d64c8-kt94l"
Dec 08 17:43:53 crc kubenswrapper[5116]: E1208 17:43:53.481874 5116 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 17:43:53.981848542 +0000 UTC m=+103.778971786 (durationBeforeRetry 500ms).
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-kt94l" (UID: "83ea28a4-865d-4cee-aaa2-7adcccfba4a2") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 08 17:43:53 crc kubenswrapper[5116]: I1208 17:43:53.539779 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-8596bd845d-nlzmf" event={"ID":"0267ec2e-8f60-4739-aae7-2a133c6f2809","Type":"ContainerDied","Data":"bcc3b6c0d2ce32b50f814285b59d16bda3c8ba39b48be092267a6af70332d6ee"}
Dec 08 17:43:53 crc kubenswrapper[5116]: I1208 17:43:53.540016 5116 generic.go:358] "Generic (PLEG): container finished" podID="0267ec2e-8f60-4739-aae7-2a133c6f2809" containerID="bcc3b6c0d2ce32b50f814285b59d16bda3c8ba39b48be092267a6af70332d6ee" exitCode=0
Dec 08 17:43:53 crc kubenswrapper[5116]: I1208 17:43:53.542380 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-547dbd544d-w2582" event={"ID":"5471dfd3-e36e-405a-a517-2c1e2bc10e62","Type":"ContainerStarted","Data":"1cd920464aa0e3e0728f4c877ce4bd49e8b47c7c078685c108d1235ba2f1c301"}
Dec 08 17:43:53 crc kubenswrapper[5116]: I1208 17:43:53.549463 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-65b6cccf98-99grc" event={"ID":"4fd57cd5-0d45-4b70-87a2-0c73d9c9b61e","Type":"ContainerStarted","Data":"c65524662fef7c7462a7d3c1a59f7c5258906d845739d0cf58822c812f3cdd99"}
Dec 08 17:43:53 crc kubenswrapper[5116]: I1208 17:43:53.560125 5116 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-controller-manager/controller-manager-65b6cccf98-99grc"
Dec 08 17:43:53 crc kubenswrapper[5116]: I1208 17:43:53.571538 5116 patch_prober.go:28] interesting pod/controller-manager-65b6cccf98-99grc container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.5:8443/healthz\": dial tcp 10.217.0.5:8443: connect: connection refused" start-of-body=
Dec 08 17:43:53 crc kubenswrapper[5116]: I1208 17:43:53.571617 5116 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-65b6cccf98-99grc" podUID="4fd57cd5-0d45-4b70-87a2-0c73d9c9b61e" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.5:8443/healthz\": dial tcp 10.217.0.5:8443: connect: connection refused"
Dec 08 17:43:53 crc kubenswrapper[5116]: I1208 17:43:53.573299 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-755bb95488-dx5gf" event={"ID":"3a30015c-60d9-4474-8417-731fd67ea187","Type":"ContainerStarted","Data":"3dbf440c9d257523d743c63787c5d8cf54da20c1f106d50bf84de1c1cbc86894"}
Dec 08 17:43:53 crc kubenswrapper[5116]: I1208 17:43:53.579626 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-zm79w" event={"ID":"776da53b-d740-421c-a867-43239bb9ebc6","Type":"ContainerStarted","Data":"f31cdb5aee5d7cf5cc2602ef4b92f993fa6e59737c0b6a46717cc69ffefcac29"}
Dec 08 17:43:53 crc kubenswrapper[5116]: I1208 17:43:53.582323 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Dec 08 17:43:53 crc kubenswrapper[5116]: E1208 17:43:53.582543 5116 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 17:43:54.082516995 +0000 UTC m=+103.879640229 (durationBeforeRetry 500ms).
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 08 17:43:53 crc kubenswrapper[5116]: I1208 17:43:53.582718 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-kt94l\" (UID: \"83ea28a4-865d-4cee-aaa2-7adcccfba4a2\") " pod="openshift-image-registry/image-registry-66587d64c8-kt94l"
Dec 08 17:43:53 crc kubenswrapper[5116]: I1208 17:43:53.582986 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-64d44f6ddf-l4b2c" event={"ID":"eaf2ae84-8492-41c0-b678-ab302371258a","Type":"ContainerStarted","Data":"922f4eb0d698748b9d66b913660390bc9ab49e7448cbcbebba9a44d5b26e6f6b"}
Dec 08 17:43:53 crc kubenswrapper[5116]: E1208 17:43:53.583133 5116 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 17:43:54.083122561 +0000 UTC m=+103.880245795 (durationBeforeRetry 500ms).
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-kt94l" (UID: "83ea28a4-865d-4cee-aaa2-7adcccfba4a2") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 08 17:43:53 crc kubenswrapper[5116]: I1208 17:43:53.586601 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-68cf44c8b8-hv2nc" event={"ID":"0deef197-8a46-46ea-a786-7e9518318396","Type":"ContainerStarted","Data":"b645f5c3e9c78ae51323ce0f702318626c824d1762b4c1d766d89c45d80bcc65"}
Dec 08 17:43:53 crc kubenswrapper[5116]: I1208 17:43:53.603030 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-wp5hf" event={"ID":"9663d487-af7a-4d15-beb7-13122df40ab4","Type":"ContainerStarted","Data":"15ff0341e75bd51897f8700c08abbf1c485cde7a00cf22f9b4a750f9af564b3c"}
Dec 08 17:43:53 crc kubenswrapper[5116]: I1208 17:43:53.603761 5116 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-65b6cccf98-99grc" podStartSLOduration=82.603732478 podStartE2EDuration="1m22.603732478s" podCreationTimestamp="2025-12-08 17:42:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-08 17:43:53.603628636 +0000 UTC m=+103.400751870" watchObservedRunningTime="2025-12-08 17:43:53.603732478 +0000 UTC m=+103.400855712"
Dec 08 17:43:53 crc kubenswrapper[5116]: I1208 17:43:53.611066 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-799b87ffcd-wf75r" event={"ID":"94ce1afb-999a-45b5-847a-b3a71aa87c89","Type":"ContainerStarted","Data":"a05d23a719f7e8c538ef9891f9c4e734961bfac9d4808ca35c9e19a5ae01e512"}
Dec 08 17:43:53 crc kubenswrapper[5116]: I1208 17:43:53.613416 5116 generic.go:358] "Generic (PLEG): container finished" podID="4f7ef3d6-0bc3-4566-8735-c4a2389d4c84" containerID="1dbd474b31446e535ab5293868d6c1f6f26b119eae6482506126cd7109dd56fc" exitCode=0
Dec 08 17:43:53 crc kubenswrapper[5116]: I1208 17:43:53.613545 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-9ddfb9f55-b2n2w" event={"ID":"4f7ef3d6-0bc3-4566-8735-c4a2389d4c84","Type":"ContainerDied","Data":"1dbd474b31446e535ab5293868d6c1f6f26b119eae6482506126cd7109dd56fc"}
Dec 08 17:43:53 crc kubenswrapper[5116]: I1208 17:43:53.625185 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-llz2b" event={"ID":"5b69f20b-5284-4c2e-b147-abcc5441c977","Type":"ContainerStarted","Data":"a62d6ce64ee4f6be5579999127367ad6d01089a911fbc634357a13b81aeda88c"}
Dec 08 17:43:53 crc kubenswrapper[5116]: I1208 17:43:53.634082 5116 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-machine-approver/machine-approver-54c688565-8h884" podStartSLOduration=82.634063889 podStartE2EDuration="1m22.634063889s" podCreationTimestamp="2025-12-08 17:42:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-08 17:43:53.632391065 +0000 UTC m=+103.429514299" watchObservedRunningTime="2025-12-08 17:43:53.634063889 +0000 UTC m=+103.431187123"
Dec 08 17:43:53 crc kubenswrapper[5116]: I1208 17:43:53.660655 5116 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/machine-api-operator-755bb95488-dx5gf" podStartSLOduration=81.66062908 podStartE2EDuration="1m21.66062908s" podCreationTimestamp="2025-12-08 17:42:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-08 17:43:53.655489207 +0000 UTC m=+103.452612461" watchObservedRunningTime="2025-12-08 17:43:53.66062908 +0000 UTC m=+103.457752314"
Dec 08 17:43:53 crc kubenswrapper[5116]: I1208 17:43:53.685088 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Dec 08 17:43:53 crc kubenswrapper[5116]: E1208 17:43:53.688101 5116 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 17:43:54.188065076 +0000 UTC m=+103.985188310 (durationBeforeRetry 500ms).
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 08 17:43:53 crc kubenswrapper[5116]: I1208 17:43:53.780159 5116 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-64d44f6ddf-l4b2c" podStartSLOduration=82.780133895 podStartE2EDuration="1m22.780133895s" podCreationTimestamp="2025-12-08 17:42:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-08 17:43:53.699890714 +0000 UTC m=+103.497013948" watchObservedRunningTime="2025-12-08 17:43:53.780133895 +0000 UTC m=+103.577257129"
Dec 08 17:43:53 crc kubenswrapper[5116]: I1208 17:43:53.787867 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-kt94l\" (UID: \"83ea28a4-865d-4cee-aaa2-7adcccfba4a2\") " pod="openshift-image-registry/image-registry-66587d64c8-kt94l"
Dec 08 17:43:53 crc kubenswrapper[5116]: E1208 17:43:53.788272 5116 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 17:43:54.288257007 +0000 UTC m=+104.085380231 (durationBeforeRetry 500ms).
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-kt94l" (UID: "83ea28a4-865d-4cee-aaa2-7adcccfba4a2") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 08 17:43:53 crc kubenswrapper[5116]: I1208 17:43:53.889326 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Dec 08 17:43:53 crc kubenswrapper[5116]: E1208 17:43:53.889915 5116 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 17:43:54.389894225 +0000 UTC m=+104.187017459 (durationBeforeRetry 500ms).
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 08 17:43:54 crc kubenswrapper[5116]: I1208 17:43:54.050461 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-kt94l\" (UID: \"83ea28a4-865d-4cee-aaa2-7adcccfba4a2\") " pod="openshift-image-registry/image-registry-66587d64c8-kt94l"
Dec 08 17:43:54 crc kubenswrapper[5116]: E1208 17:43:54.051381 5116 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 17:43:54.551363724 +0000 UTC m=+104.348486958 (durationBeforeRetry 500ms).
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-kt94l" (UID: "83ea28a4-865d-4cee-aaa2-7adcccfba4a2") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 08 17:43:54 crc kubenswrapper[5116]: I1208 17:43:54.151192 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Dec 08 17:43:54 crc kubenswrapper[5116]: E1208 17:43:54.152157 5116 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 17:43:54.65213276 +0000 UTC m=+104.449256004 (durationBeforeRetry 500ms).
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 08 17:43:54 crc kubenswrapper[5116]: I1208 17:43:54.253554 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-kt94l\" (UID: \"83ea28a4-865d-4cee-aaa2-7adcccfba4a2\") " pod="openshift-image-registry/image-registry-66587d64c8-kt94l"
Dec 08 17:43:54 crc kubenswrapper[5116]: E1208 17:43:54.254061 5116 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 17:43:54.754045345 +0000 UTC m=+104.551168579 (durationBeforeRetry 500ms).
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-kt94l" (UID: "83ea28a4-865d-4cee-aaa2-7adcccfba4a2") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 08 17:43:54 crc kubenswrapper[5116]: I1208 17:43:54.356945 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Dec 08 17:43:54 crc kubenswrapper[5116]: E1208 17:43:54.357145 5116 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 17:43:54.857120121 +0000 UTC m=+104.654243355 (durationBeforeRetry 500ms).
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 08 17:43:54 crc kubenswrapper[5116]: I1208 17:43:54.357563 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-kt94l\" (UID: \"83ea28a4-865d-4cee-aaa2-7adcccfba4a2\") " pod="openshift-image-registry/image-registry-66587d64c8-kt94l"
Dec 08 17:43:54 crc kubenswrapper[5116]: E1208 17:43:54.358109 5116 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 17:43:54.858101517 +0000 UTC m=+104.655224741 (durationBeforeRetry 500ms).
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-kt94l" (UID: "83ea28a4-865d-4cee-aaa2-7adcccfba4a2") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 08 17:43:54 crc kubenswrapper[5116]: I1208 17:43:54.463946 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Dec 08 17:43:54 crc kubenswrapper[5116]: E1208 17:43:54.464568 5116 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 17:43:54.964538861 +0000 UTC m=+104.761662095 (durationBeforeRetry 500ms).
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 08 17:43:54 crc kubenswrapper[5116]: I1208 17:43:54.464815 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-kt94l\" (UID: \"83ea28a4-865d-4cee-aaa2-7adcccfba4a2\") " pod="openshift-image-registry/image-registry-66587d64c8-kt94l"
Dec 08 17:43:54 crc kubenswrapper[5116]: E1208 17:43:54.465328 5116 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 17:43:54.965320271 +0000 UTC m=+104.762443505 (durationBeforeRetry 500ms).
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-kt94l" (UID: "83ea28a4-865d-4cee-aaa2-7adcccfba4a2") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 17:43:54 crc kubenswrapper[5116]: I1208 17:43:54.566169 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 08 17:43:54 crc kubenswrapper[5116]: E1208 17:43:54.566791 5116 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 17:43:55.066770885 +0000 UTC m=+104.863894119 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 17:43:54 crc kubenswrapper[5116]: I1208 17:43:54.667074 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-547dbd544d-w2582" event={"ID":"5471dfd3-e36e-405a-a517-2c1e2bc10e62","Type":"ContainerStarted","Data":"0d7964f58f360bfe2dc6d0c956acb14a6d893157d00764fca012749e5c5dd7ba"} Dec 08 17:43:54 crc kubenswrapper[5116]: I1208 17:43:54.668034 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-kt94l\" (UID: \"83ea28a4-865d-4cee-aaa2-7adcccfba4a2\") " pod="openshift-image-registry/image-registry-66587d64c8-kt94l" Dec 08 17:43:54 crc kubenswrapper[5116]: E1208 17:43:54.668412 5116 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 17:43:55.168399593 +0000 UTC m=+104.965522827 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-kt94l" (UID: "83ea28a4-865d-4cee-aaa2-7adcccfba4a2") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 17:43:54 crc kubenswrapper[5116]: I1208 17:43:54.693986 5116 patch_prober.go:28] interesting pod/marketplace-operator-547dbd544d-w2582 container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.20:8080/healthz\": dial tcp 10.217.0.20:8080: connect: connection refused" start-of-body= Dec 08 17:43:54 crc kubenswrapper[5116]: I1208 17:43:54.694054 5116 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-547dbd544d-w2582" podUID="5471dfd3-e36e-405a-a517-2c1e2bc10e62" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.20:8080/healthz\": dial tcp 10.217.0.20:8080: connect: connection refused" Dec 08 17:43:54 crc kubenswrapper[5116]: I1208 17:43:54.725984 5116 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-547dbd544d-w2582" podStartSLOduration=82.725969263 podStartE2EDuration="1m22.725969263s" podCreationTimestamp="2025-12-08 17:42:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-08 17:43:54.724424243 +0000 UTC m=+104.521547487" watchObservedRunningTime="2025-12-08 17:43:54.725969263 +0000 UTC m=+104.523092487" Dec 08 17:43:54 crc kubenswrapper[5116]: I1208 17:43:54.768947 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" 
(UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 08 17:43:54 crc kubenswrapper[5116]: E1208 17:43:54.774565 5116 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 17:43:55.274539619 +0000 UTC m=+105.071662853 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 17:43:54 crc kubenswrapper[5116]: I1208 17:43:54.798791 5116 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-zm79w" podStartSLOduration=83.79876148 podStartE2EDuration="1m23.79876148s" podCreationTimestamp="2025-12-08 17:42:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-08 17:43:54.762497705 +0000 UTC m=+104.559620939" watchObservedRunningTime="2025-12-08 17:43:54.79876148 +0000 UTC m=+104.595884714" Dec 08 17:43:54 crc kubenswrapper[5116]: I1208 17:43:54.821949 5116 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-6rbxq" podStartSLOduration=83.821930764 podStartE2EDuration="1m23.821930764s" podCreationTimestamp="2025-12-08 
17:42:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-08 17:43:54.820670391 +0000 UTC m=+104.617793625" watchObservedRunningTime="2025-12-08 17:43:54.821930764 +0000 UTC m=+104.619053998" Dec 08 17:43:54 crc kubenswrapper[5116]: I1208 17:43:54.857774 5116 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/cni-sysctl-allowlist-ds-qsgps" podStartSLOduration=6.857749847 podStartE2EDuration="6.857749847s" podCreationTimestamp="2025-12-08 17:43:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-08 17:43:54.85554521 +0000 UTC m=+104.652668444" watchObservedRunningTime="2025-12-08 17:43:54.857749847 +0000 UTC m=+104.654873081" Dec 08 17:43:54 crc kubenswrapper[5116]: I1208 17:43:54.864768 5116 patch_prober.go:28] interesting pod/oauth-openshift-66458b6674-qnwj9 container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.27:6443/healthz\": dial tcp 10.217.0.27:6443: connect: connection refused" start-of-body= Dec 08 17:43:54 crc kubenswrapper[5116]: I1208 17:43:54.864829 5116 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-66458b6674-qnwj9" podUID="b8dbe374-c944-4fd4-bb80-8dc26c3e5d24" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.27:6443/healthz\": dial tcp 10.217.0.27:6443: connect: connection refused" Dec 08 17:43:54 crc kubenswrapper[5116]: I1208 17:43:54.879164 5116 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-server-wp5hf" podStartSLOduration=6.879149535 podStartE2EDuration="6.879149535s" podCreationTimestamp="2025-12-08 17:43:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" 
lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-08 17:43:54.878865438 +0000 UTC m=+104.675988682" watchObservedRunningTime="2025-12-08 17:43:54.879149535 +0000 UTC m=+104.676272769" Dec 08 17:43:54 crc kubenswrapper[5116]: I1208 17:43:54.879884 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-kt94l\" (UID: \"83ea28a4-865d-4cee-aaa2-7adcccfba4a2\") " pod="openshift-image-registry/image-registry-66587d64c8-kt94l" Dec 08 17:43:54 crc kubenswrapper[5116]: E1208 17:43:54.882805 5116 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 17:43:55.38279027 +0000 UTC m=+105.179913504 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-kt94l" (UID: "83ea28a4-865d-4cee-aaa2-7adcccfba4a2") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 17:43:54 crc kubenswrapper[5116]: I1208 17:43:54.886362 5116 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/marketplace-operator-547dbd544d-w2582" Dec 08 17:43:54 crc kubenswrapper[5116]: I1208 17:43:54.886439 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-54c688565-8h884" event={"ID":"e71f0431-13f2-46f2-8673-f26a5c9d0cf6","Type":"ContainerStarted","Data":"9ab7ea87a93c74928f5268d6d4740d008f65619a6abfdb4a3015e7acf67bd64e"} Dec 08 17:43:54 crc kubenswrapper[5116]: I1208 17:43:54.886508 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-zm79w" event={"ID":"776da53b-d740-421c-a867-43239bb9ebc6","Type":"ContainerStarted","Data":"7b8967773794b94a91d1d153fae35e487592bb461391c4e0fb66aa7f13701983"} Dec 08 17:43:54 crc kubenswrapper[5116]: I1208 17:43:54.886532 5116 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-multus/cni-sysctl-allowlist-ds-qsgps" Dec 08 17:43:54 crc kubenswrapper[5116]: I1208 17:43:54.886548 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-6rbxq" event={"ID":"79b65775-2e2c-4bad-bf4b-b8c4893e6463","Type":"ContainerStarted","Data":"068eefba4223ca21a869dbea0a71735058496ff11ba980169747ea326ec37133"} Dec 08 17:43:54 crc kubenswrapper[5116]: I1208 17:43:54.886590 5116 
kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/cni-sysctl-allowlist-ds-qsgps" event={"ID":"d73c8661-d51c-4d6e-a981-e186a3fc1964","Type":"ContainerStarted","Data":"7478cafb951735df5624cf4c67a2f9fe7cecc5f7456b3ba6f18b26ee5f91e985"} Dec 08 17:43:54 crc kubenswrapper[5116]: I1208 17:43:54.886605 5116 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-authentication/oauth-openshift-66458b6674-qnwj9" Dec 08 17:43:54 crc kubenswrapper[5116]: I1208 17:43:54.886615 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-68cf44c8b8-hv2nc" event={"ID":"0deef197-8a46-46ea-a786-7e9518318396","Type":"ContainerStarted","Data":"8194e2e302f5f4bf0712bdcc14bf77cf868d0115e0a9505e2b4c8a8eca375cda"} Dec 08 17:43:54 crc kubenswrapper[5116]: I1208 17:43:54.886628 5116 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-config-operator/openshift-config-operator-5777786469-86sn8" Dec 08 17:43:54 crc kubenswrapper[5116]: I1208 17:43:54.886640 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-wp5hf" event={"ID":"9663d487-af7a-4d15-beb7-13122df40ab4","Type":"ContainerStarted","Data":"531156ff23dbab425c1789014c50f33271f05d1663db008f77a6c529c371125d"} Dec 08 17:43:54 crc kubenswrapper[5116]: I1208 17:43:54.886668 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-66458b6674-qnwj9" event={"ID":"b8dbe374-c944-4fd4-bb80-8dc26c3e5d24","Type":"ContainerStarted","Data":"504ef900859cd7f7455e38a4b20a8ba233582c5aae6e377748fe22c4bfb29e90"} Dec 08 17:43:54 crc kubenswrapper[5116]: I1208 17:43:54.886679 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-799b87ffcd-wf75r" 
event={"ID":"94ce1afb-999a-45b5-847a-b3a71aa87c89","Type":"ContainerStarted","Data":"050b054267f8f6338f73f6013926e5f014cb3c69903f4808ba92a5f742ee2b6a"} Dec 08 17:43:54 crc kubenswrapper[5116]: I1208 17:43:54.886690 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-9ddfb9f55-b2n2w" event={"ID":"4f7ef3d6-0bc3-4566-8735-c4a2389d4c84","Type":"ContainerStarted","Data":"8d7d29ff8b73ac5eab3a0a7a253e16a83fdb94cc2075a942b3fee912340a47be"} Dec 08 17:43:54 crc kubenswrapper[5116]: I1208 17:43:54.886702 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-5777786469-86sn8" event={"ID":"71d475ea-b97a-489a-8c80-1a30614dccb5","Type":"ContainerStarted","Data":"9773543462ad1a5ce3fa24a5813166ae3cc2ae9b75a3297ce8a64deed2655797"} Dec 08 17:43:54 crc kubenswrapper[5116]: I1208 17:43:54.917872 5116 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress/router-default-68cf44c8b8-hv2nc" podStartSLOduration=82.917844613 podStartE2EDuration="1m22.917844613s" podCreationTimestamp="2025-12-08 17:42:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-08 17:43:54.915681927 +0000 UTC m=+104.712805181" watchObservedRunningTime="2025-12-08 17:43:54.917844613 +0000 UTC m=+104.714967837" Dec 08 17:43:54 crc kubenswrapper[5116]: I1208 17:43:54.930743 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-llz2b" event={"ID":"5b69f20b-5284-4c2e-b147-abcc5441c977","Type":"ContainerStarted","Data":"a76989fdf63f8cfab0f55e978b0a555306d147cd0e6093207cda6ee2aa1678a3"} Dec 08 17:43:54 crc kubenswrapper[5116]: I1208 17:43:54.959951 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-67c89758df-wbzsx" 
event={"ID":"d4f231ba-ace1-4242-a8cb-04e2904f95e9","Type":"ContainerStarted","Data":"a6f77ff8f09db8d081d289467c11d421b7fdbfda7d34143125ff73b2ef38d827"} Dec 08 17:43:54 crc kubenswrapper[5116]: I1208 17:43:54.961380 5116 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-console-operator/console-operator-67c89758df-wbzsx" Dec 08 17:43:54 crc kubenswrapper[5116]: I1208 17:43:54.966441 5116 patch_prober.go:28] interesting pod/console-operator-67c89758df-wbzsx container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.12:8443/readyz\": dial tcp 10.217.0.12:8443: connect: connection refused" start-of-body= Dec 08 17:43:54 crc kubenswrapper[5116]: I1208 17:43:54.966550 5116 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-67c89758df-wbzsx" podUID="d4f231ba-ace1-4242-a8cb-04e2904f95e9" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.12:8443/readyz\": dial tcp 10.217.0.12:8443: connect: connection refused" Dec 08 17:43:54 crc kubenswrapper[5116]: I1208 17:43:54.986584 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 08 17:43:54 crc kubenswrapper[5116]: E1208 17:43:54.988566 5116 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 17:43:55.488527296 +0000 UTC m=+105.285650530 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 17:43:54 crc kubenswrapper[5116]: I1208 17:43:54.993616 5116 patch_prober.go:28] interesting pod/controller-manager-65b6cccf98-99grc container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.5:8443/healthz\": dial tcp 10.217.0.5:8443: connect: connection refused" start-of-body= Dec 08 17:43:54 crc kubenswrapper[5116]: I1208 17:43:54.993684 5116 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-65b6cccf98-99grc" podUID="4fd57cd5-0d45-4b70-87a2-0c73d9c9b61e" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.5:8443/healthz\": dial tcp 10.217.0.5:8443: connect: connection refused" Dec 08 17:43:55 crc kubenswrapper[5116]: I1208 17:43:55.003643 5116 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-config-operator/openshift-config-operator-5777786469-86sn8" podStartSLOduration=84.003622079 podStartE2EDuration="1m24.003622079s" podCreationTimestamp="2025-12-08 17:42:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-08 17:43:54.969766316 +0000 UTC m=+104.766889560" watchObservedRunningTime="2025-12-08 17:43:55.003622079 +0000 UTC m=+104.800745313" Dec 08 17:43:55 crc kubenswrapper[5116]: I1208 17:43:55.043039 5116 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-console-operator/console-operator-67c89758df-wbzsx" podStartSLOduration=84.043019036 podStartE2EDuration="1m24.043019036s" podCreationTimestamp="2025-12-08 17:42:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-08 17:43:55.042221745 +0000 UTC m=+104.839344989" watchObservedRunningTime="2025-12-08 17:43:55.043019036 +0000 UTC m=+104.840142270" Dec 08 17:43:55 crc kubenswrapper[5116]: I1208 17:43:55.043838 5116 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-66458b6674-qnwj9" podStartSLOduration=84.043826396 podStartE2EDuration="1m24.043826396s" podCreationTimestamp="2025-12-08 17:42:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-08 17:43:55.002409608 +0000 UTC m=+104.799532862" watchObservedRunningTime="2025-12-08 17:43:55.043826396 +0000 UTC m=+104.840949630" Dec 08 17:43:55 crc kubenswrapper[5116]: I1208 17:43:55.064059 5116 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-llz2b" podStartSLOduration=84.064045584 podStartE2EDuration="1m24.064045584s" podCreationTimestamp="2025-12-08 17:42:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-08 17:43:55.062032311 +0000 UTC m=+104.859155545" watchObservedRunningTime="2025-12-08 17:43:55.064045584 +0000 UTC m=+104.861168818" Dec 08 17:43:55 crc kubenswrapper[5116]: I1208 17:43:55.089552 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-kt94l\" 
(UID: \"83ea28a4-865d-4cee-aaa2-7adcccfba4a2\") " pod="openshift-image-registry/image-registry-66587d64c8-kt94l" Dec 08 17:43:55 crc kubenswrapper[5116]: E1208 17:43:55.090441 5116 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 17:43:55.590413521 +0000 UTC m=+105.387536755 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-kt94l" (UID: "83ea28a4-865d-4cee-aaa2-7adcccfba4a2") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 17:43:55 crc kubenswrapper[5116]: I1208 17:43:55.096625 5116 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-oauth-apiserver/apiserver-8596bd845d-nlzmf" podStartSLOduration=83.096604042 podStartE2EDuration="1m23.096604042s" podCreationTimestamp="2025-12-08 17:42:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-08 17:43:55.089188069 +0000 UTC m=+104.886311303" watchObservedRunningTime="2025-12-08 17:43:55.096604042 +0000 UTC m=+104.893727276" Dec 08 17:43:55 crc kubenswrapper[5116]: I1208 17:43:55.203381 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 08 17:43:55 crc kubenswrapper[5116]: E1208 17:43:55.205339 5116 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 17:43:55.705292135 +0000 UTC m=+105.502415369 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 17:43:55 crc kubenswrapper[5116]: I1208 17:43:55.313536 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-kt94l\" (UID: \"83ea28a4-865d-4cee-aaa2-7adcccfba4a2\") " pod="openshift-image-registry/image-registry-66587d64c8-kt94l" Dec 08 17:43:55 crc kubenswrapper[5116]: E1208 17:43:55.314077 5116 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 17:43:55.814055829 +0000 UTC m=+105.611179063 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-kt94l" (UID: "83ea28a4-865d-4cee-aaa2-7adcccfba4a2") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 17:43:55 crc kubenswrapper[5116]: I1208 17:43:55.472816 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 08 17:43:55 crc kubenswrapper[5116]: E1208 17:43:55.473032 5116 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 17:43:55.972988421 +0000 UTC m=+105.770111805 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 08 17:43:55 crc kubenswrapper[5116]: I1208 17:43:55.473674 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-kt94l\" (UID: \"83ea28a4-865d-4cee-aaa2-7adcccfba4a2\") " pod="openshift-image-registry/image-registry-66587d64c8-kt94l"
Dec 08 17:43:55 crc kubenswrapper[5116]: E1208 17:43:55.474173 5116 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 17:43:55.974161592 +0000 UTC m=+105.771284826 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-kt94l" (UID: "83ea28a4-865d-4cee-aaa2-7adcccfba4a2") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 08 17:43:55 crc kubenswrapper[5116]: I1208 17:43:55.657123 5116 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-68cf44c8b8-hv2nc"
Dec 08 17:43:55 crc kubenswrapper[5116]: I1208 17:43:55.658527 5116 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-hv2nc container/router namespace/openshift-ingress: Startup probe status=failure output="Get \"http://localhost:1936/healthz/ready\": dial tcp [::1]:1936: connect: connection refused" start-of-body=
Dec 08 17:43:55 crc kubenswrapper[5116]: I1208 17:43:55.658606 5116 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-hv2nc" podUID="0deef197-8a46-46ea-a786-7e9518318396" containerName="router" probeResult="failure" output="Get \"http://localhost:1936/healthz/ready\": dial tcp [::1]:1936: connect: connection refused"
Dec 08 17:43:55 crc kubenswrapper[5116]: I1208 17:43:55.659531 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Dec 08 17:43:55 crc kubenswrapper[5116]: E1208 17:43:55.660004 5116 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 17:43:56.159983443 +0000 UTC m=+105.957106677 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 08 17:43:55 crc kubenswrapper[5116]: I1208 17:43:55.787587 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-kt94l\" (UID: \"83ea28a4-865d-4cee-aaa2-7adcccfba4a2\") " pod="openshift-image-registry/image-registry-66587d64c8-kt94l"
Dec 08 17:43:55 crc kubenswrapper[5116]: E1208 17:43:55.787970 5116 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 17:43:56.287957839 +0000 UTC m=+106.085081073 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-kt94l" (UID: "83ea28a4-865d-4cee-aaa2-7adcccfba4a2") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 08 17:43:56 crc kubenswrapper[5116]: I1208 17:43:55.889761 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Dec 08 17:43:56 crc kubenswrapper[5116]: E1208 17:43:55.890626 5116 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 17:43:56.390594873 +0000 UTC m=+106.187718107 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 08 17:43:56 crc kubenswrapper[5116]: I1208 17:43:55.996103 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-kt94l\" (UID: \"83ea28a4-865d-4cee-aaa2-7adcccfba4a2\") " pod="openshift-image-registry/image-registry-66587d64c8-kt94l"
Dec 08 17:43:56 crc kubenswrapper[5116]: E1208 17:43:55.996424 5116 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 17:43:56.496412011 +0000 UTC m=+106.293535235 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-kt94l" (UID: "83ea28a4-865d-4cee-aaa2-7adcccfba4a2") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 08 17:43:56 crc kubenswrapper[5116]: I1208 17:43:56.020277 5116 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-hzpcx"]
Dec 08 17:43:56 crc kubenswrapper[5116]: I1208 17:43:56.037904 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-8596bd845d-nlzmf" event={"ID":"0267ec2e-8f60-4739-aae7-2a133c6f2809","Type":"ContainerStarted","Data":"388ee51572600c33ca5ecc69f723f668e2773c29609fed03584c63f1ea3ca35e"}
Dec 08 17:43:56 crc kubenswrapper[5116]: W1208 17:43:56.043807 5116 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb1d60def_1a02_4598_801e_fc4fdfaabcf4.slice/crio-6b2937605e20a4d4e5066bd773cd723f6d04c0a4e6e8ab19130e6a2967d78fdd WatchSource:0}: Error finding container 6b2937605e20a4d4e5066bd773cd723f6d04c0a4e6e8ab19130e6a2967d78fdd: Status 404 returned error can't find the container with id 6b2937605e20a4d4e5066bd773cd723f6d04c0a4e6e8ab19130e6a2967d78fdd
Dec 08 17:43:56 crc kubenswrapper[5116]: I1208 17:43:56.059755 5116 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-6b9cb4dbcf-zn9l4"]
Dec 08 17:43:56 crc kubenswrapper[5116]: I1208 17:43:56.073778 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-9ddfb9f55-b2n2w" event={"ID":"4f7ef3d6-0bc3-4566-8735-c4a2389d4c84","Type":"ContainerStarted","Data":"cdcb86182a73f3758d67882953a586a9eddca1292c7415392c53b2a269cedc72"}
Dec 08 17:43:56 crc kubenswrapper[5116]: I1208 17:43:56.122162 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Dec 08 17:43:56 crc kubenswrapper[5116]: E1208 17:43:56.122401 5116 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 17:43:56.622356053 +0000 UTC m=+106.419479287 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 08 17:43:56 crc kubenswrapper[5116]: I1208 17:43:56.123002 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-kt94l\" (UID: \"83ea28a4-865d-4cee-aaa2-7adcccfba4a2\") " pod="openshift-image-registry/image-registry-66587d64c8-kt94l"
Dec 08 17:43:56 crc kubenswrapper[5116]: E1208 17:43:56.124303 5116 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 17:43:56.624221142 +0000 UTC m=+106.421344376 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-kt94l" (UID: "83ea28a4-865d-4cee-aaa2-7adcccfba4a2") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 08 17:43:56 crc kubenswrapper[5116]: I1208 17:43:56.185911 5116 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-67c9d58cbb-lnhtb"]
Dec 08 17:43:56 crc kubenswrapper[5116]: I1208 17:43:56.188672 5116 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-f9cdd68f7-pp68b"]
Dec 08 17:43:56 crc kubenswrapper[5116]: W1208 17:43:56.194216 5116 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2a5179cd_4e4d_401c_af27_a30fc32e5146.slice/crio-1ef96e7b0a50816d09a4f16ea251718217e53291ebb3ba83777838a145cd19ae WatchSource:0}: Error finding container 1ef96e7b0a50816d09a4f16ea251718217e53291ebb3ba83777838a145cd19ae: Status 404 returned error can't find the container with id 1ef96e7b0a50816d09a4f16ea251718217e53291ebb3ba83777838a145cd19ae
Dec 08 17:43:56 crc kubenswrapper[5116]: I1208 17:43:56.224123 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Dec 08 17:43:56 crc kubenswrapper[5116]: E1208 17:43:56.225962 5116 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 17:43:56.725897062 +0000 UTC m=+106.523020296 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 08 17:43:56 crc kubenswrapper[5116]: I1208 17:43:56.328919 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-kt94l\" (UID: \"83ea28a4-865d-4cee-aaa2-7adcccfba4a2\") " pod="openshift-image-registry/image-registry-66587d64c8-kt94l"
Dec 08 17:43:56 crc kubenswrapper[5116]: E1208 17:43:56.329550 5116 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 17:43:56.829534322 +0000 UTC m=+106.626657556 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-kt94l" (UID: "83ea28a4-865d-4cee-aaa2-7adcccfba4a2") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 08 17:43:56 crc kubenswrapper[5116]: I1208 17:43:56.430107 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Dec 08 17:43:56 crc kubenswrapper[5116]: E1208 17:43:56.430338 5116 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 17:43:56.930303648 +0000 UTC m=+106.727426922 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 08 17:43:56 crc kubenswrapper[5116]: I1208 17:43:56.430750 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-kt94l\" (UID: \"83ea28a4-865d-4cee-aaa2-7adcccfba4a2\") " pod="openshift-image-registry/image-registry-66587d64c8-kt94l"
Dec 08 17:43:56 crc kubenswrapper[5116]: E1208 17:43:56.431134 5116 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 17:43:56.931126489 +0000 UTC m=+106.728249733 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-kt94l" (UID: "83ea28a4-865d-4cee-aaa2-7adcccfba4a2") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 08 17:43:56 crc kubenswrapper[5116]: I1208 17:43:56.535954 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Dec 08 17:43:56 crc kubenswrapper[5116]: E1208 17:43:56.536337 5116 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 17:43:57.03629861 +0000 UTC m=+106.833421894 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 08 17:43:56 crc kubenswrapper[5116]: I1208 17:43:56.536837 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-kt94l\" (UID: \"83ea28a4-865d-4cee-aaa2-7adcccfba4a2\") " pod="openshift-image-registry/image-registry-66587d64c8-kt94l"
Dec 08 17:43:56 crc kubenswrapper[5116]: E1208 17:43:56.537365 5116 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 17:43:57.037343297 +0000 UTC m=+106.834466571 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-kt94l" (UID: "83ea28a4-865d-4cee-aaa2-7adcccfba4a2") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 08 17:43:56 crc kubenswrapper[5116]: I1208 17:43:56.637866 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Dec 08 17:43:56 crc kubenswrapper[5116]: E1208 17:43:56.638538 5116 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 17:43:57.138516904 +0000 UTC m=+106.935640158 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 08 17:43:56 crc kubenswrapper[5116]: W1208 17:43:56.663306 5116 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd3dc86ab_217d_4d86_8381_16465ee204c8.slice/crio-7b265fc48e5bffd954e8b9a21bed834defd4d0ddfdb683fcdfe838edba649e87 WatchSource:0}: Error finding container 7b265fc48e5bffd954e8b9a21bed834defd4d0ddfdb683fcdfe838edba649e87: Status 404 returned error can't find the container with id 7b265fc48e5bffd954e8b9a21bed834defd4d0ddfdb683fcdfe838edba649e87
Dec 08 17:43:56 crc kubenswrapper[5116]: I1208 17:43:56.663432 5116 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-hv2nc container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Dec 08 17:43:56 crc kubenswrapper[5116]: [-]has-synced failed: reason withheld
Dec 08 17:43:56 crc kubenswrapper[5116]: [+]process-running ok
Dec 08 17:43:56 crc kubenswrapper[5116]: healthz check failed
Dec 08 17:43:56 crc kubenswrapper[5116]: I1208 17:43:56.663477 5116 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-hv2nc" podUID="0deef197-8a46-46ea-a786-7e9518318396" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Dec 08 17:43:56 crc kubenswrapper[5116]: I1208 17:43:56.666869 5116 patch_prober.go:28] interesting pod/console-operator-67c89758df-wbzsx container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.12:8443/readyz\": dial tcp 10.217.0.12:8443: connect: connection refused" start-of-body=
Dec 08 17:43:56 crc kubenswrapper[5116]: I1208 17:43:56.666959 5116 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-67c89758df-wbzsx" podUID="d4f231ba-ace1-4242-a8cb-04e2904f95e9" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.12:8443/readyz\": dial tcp 10.217.0.12:8443: connect: connection refused"
Dec 08 17:43:56 crc kubenswrapper[5116]: I1208 17:43:56.667930 5116 patch_prober.go:28] interesting pod/marketplace-operator-547dbd544d-w2582 container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.20:8080/healthz\": dial tcp 10.217.0.20:8080: connect: connection refused" start-of-body=
Dec 08 17:43:56 crc kubenswrapper[5116]: I1208 17:43:56.667965 5116 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-547dbd544d-w2582" podUID="5471dfd3-e36e-405a-a517-2c1e2bc10e62" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.20:8080/healthz\": dial tcp 10.217.0.20:8080: connect: connection refused"
Dec 08 17:43:56 crc kubenswrapper[5116]: I1208 17:43:56.685263 5116 scope.go:117] "RemoveContainer" containerID="0f4e3801c41a7985f11a820bd7e71ba696a24f77bb1468bbcd579c5b5d8a1ba3"
Dec 08 17:43:56 crc kubenswrapper[5116]: E1208 17:43:56.685529 5116 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944"
Dec 08 17:43:56 crc kubenswrapper[5116]: I1208 17:43:56.714566 5116 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-66458b6674-qnwj9"
Dec 08 17:43:56 crc kubenswrapper[5116]: I1208 17:43:56.714670 5116 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-multus/cni-sysctl-allowlist-ds-qsgps"
Dec 08 17:43:56 crc kubenswrapper[5116]: I1208 17:43:56.716718 5116 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29420250-rrwn6"]
Dec 08 17:43:56 crc kubenswrapper[5116]: I1208 17:43:56.723313 5116 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-6dz8n"]
Dec 08 17:43:56 crc kubenswrapper[5116]: I1208 17:43:56.726755 5116 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver/apiserver-9ddfb9f55-b2n2w" podStartSLOduration=85.726705022 podStartE2EDuration="1m25.726705022s" podCreationTimestamp="2025-12-08 17:42:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-08 17:43:56.715816049 +0000 UTC m=+106.512939283" watchObservedRunningTime="2025-12-08 17:43:56.726705022 +0000 UTC m=+106.523828256"
Dec 08 17:43:56 crc kubenswrapper[5116]: I1208 17:43:56.744813 5116 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-866fcbc849-gk2ww"]
Dec 08 17:43:56 crc kubenswrapper[5116]: I1208 17:43:56.756865 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-kt94l\" (UID: \"83ea28a4-865d-4cee-aaa2-7adcccfba4a2\") " pod="openshift-image-registry/image-registry-66587d64c8-kt94l"
Dec 08 17:43:56 crc kubenswrapper[5116]: E1208 17:43:56.758510 5116 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 17:43:57.258484721 +0000 UTC m=+107.055607955 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-kt94l" (UID: "83ea28a4-865d-4cee-aaa2-7adcccfba4a2") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 08 17:43:56 crc kubenswrapper[5116]: I1208 17:43:56.762542 5116 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-w5bp9"]
Dec 08 17:43:56 crc kubenswrapper[5116]: I1208 17:43:56.791302 5116 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-747b44746d-4msk8"]
Dec 08 17:43:56 crc kubenswrapper[5116]: I1208 17:43:56.791374 5116 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-776cdc94d6-mv9qd"]
Dec 08 17:43:56 crc kubenswrapper[5116]: I1208 17:43:56.801052 5116 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-69b85846b6-9xp42"]
Dec 08 17:43:56 crc kubenswrapper[5116]: I1208 17:43:56.831658 5116 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-sqm4p"]
Dec 08 17:43:56 crc kubenswrapper[5116]: I1208 17:43:56.836332 5116 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-tlxxd"]
Dec 08 17:43:56 crc kubenswrapper[5116]: I1208 17:43:56.850360 5116 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-75ffdb6fcd-jcz8f"]
Dec 08 17:43:56 crc kubenswrapper[5116]: I1208 17:43:56.858048 5116 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-74545575db-g68gs"]
Dec 08 17:43:56 crc kubenswrapper[5116]: I1208 17:43:56.863490 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Dec 08 17:43:56 crc kubenswrapper[5116]: E1208 17:43:56.864426 5116 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 17:43:57.364408731 +0000 UTC m=+107.161531965 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 08 17:43:56 crc kubenswrapper[5116]: I1208 17:43:56.866710 5116 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-57tcr"]
Dec 08 17:43:56 crc kubenswrapper[5116]: W1208 17:43:56.868036 5116 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda4183c4d_f709_4d5b_a9a4_180284f37cc8.slice/crio-fdff9e942f11df2937a23ca0423cb47a0ca2bba7fb62d681e6193f708bd3fb64 WatchSource:0}: Error finding container fdff9e942f11df2937a23ca0423cb47a0ca2bba7fb62d681e6193f708bd3fb64: Status 404 returned error can't find the container with id fdff9e942f11df2937a23ca0423cb47a0ca2bba7fb62d681e6193f708bd3fb64
Dec 08 17:43:56 crc kubenswrapper[5116]: W1208 17:43:56.881665 5116 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb7623dbe_7780_43e2_8225_9cf0b9e83951.slice/crio-5e9134e0a6516f95988ab9c8e9b844ad05a95cba6bc8715e362b0e2addb80695 WatchSource:0}: Error finding container 5e9134e0a6516f95988ab9c8e9b844ad05a95cba6bc8715e362b0e2addb80695: Status 404 returned error can't find the container with id 5e9134e0a6516f95988ab9c8e9b844ad05a95cba6bc8715e362b0e2addb80695
Dec 08 17:43:56 crc kubenswrapper[5116]: W1208 17:43:56.887113 5116 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod15fa3f1d_6230_4602_a46a_1f9b84a147fa.slice/crio-8fa9f90154185fb2b06c6171ce73ce34db5add725016d184d16509cbfdb745f8 WatchSource:0}: Error finding container 8fa9f90154185fb2b06c6171ce73ce34db5add725016d184d16509cbfdb745f8: Status 404 returned error can't find the container with id 8fa9f90154185fb2b06c6171ce73ce34db5add725016d184d16509cbfdb745f8
Dec 08 17:43:56 crc kubenswrapper[5116]: I1208 17:43:56.888308 5116 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-5b9c976747-vcg8j"]
Dec 08 17:43:56 crc kubenswrapper[5116]: W1208 17:43:56.889623 5116 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podfdb4d104_2982_4fda_904d_860b430ccc30.slice/crio-5b97f0243ede28048b3c65af2e873033a40ff606ad8e7f66d8f4f49f78cfd526 WatchSource:0}: Error finding container 5b97f0243ede28048b3c65af2e873033a40ff606ad8e7f66d8f4f49f78cfd526: Status 404 returned error can't find the container with id 5b97f0243ede28048b3c65af2e873033a40ff606ad8e7f66d8f4f49f78cfd526
Dec 08 17:43:56 crc kubenswrapper[5116]: I1208 17:43:56.894188 5116 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-hjdgl"]
Dec 08 17:43:56 crc kubenswrapper[5116]: I1208 17:43:56.901852 5116 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-n9kbg"]
Dec 08 17:43:56 crc kubenswrapper[5116]: I1208 17:43:56.905916 5116 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-69db94689b-9bt88"]
Dec 08 17:43:56 crc kubenswrapper[5116]: I1208 17:43:56.911520 5116 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-6s8th"]
Dec 08 17:43:56 crc kubenswrapper[5116]: I1208 17:43:56.923010 5116 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-kdrb8"]
Dec 08 17:43:56 crc kubenswrapper[5116]: I1208 17:43:56.923090 5116 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-c642w"]
Dec 08 17:43:56 crc kubenswrapper[5116]: I1208 17:43:56.927116 5116 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-44l68"]
Dec 08 17:43:56 crc kubenswrapper[5116]: I1208 17:43:56.987546 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-kt94l\" (UID: \"83ea28a4-865d-4cee-aaa2-7adcccfba4a2\") " pod="openshift-image-registry/image-registry-66587d64c8-kt94l"
Dec 08 17:43:56 crc kubenswrapper[5116]: E1208 17:43:56.987906 5116 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 17:43:57.487889649 +0000 UTC m=+107.285012883 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-kt94l" (UID: "83ea28a4-865d-4cee-aaa2-7adcccfba4a2") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 08 17:43:57 crc kubenswrapper[5116]: I1208 17:43:57.089134 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Dec 08 17:43:57 crc kubenswrapper[5116]: E1208 17:43:57.089817 5116 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 17:43:57.589792445 +0000 UTC m=+107.386915679 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 08 17:43:57 crc kubenswrapper[5116]: I1208 17:43:57.130486 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-c642w" event={"ID":"a34224e8-837b-4261-8b6d-3e9273996375","Type":"ContainerStarted","Data":"19dea6c616f009776ff9caa9c023cd9cdf609d74de53399ecedc21609c7a875d"}
Dec 08 17:43:57 crc kubenswrapper[5116]: I1208 17:43:57.141181 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29420250-rrwn6" event={"ID":"dc8a8f38-928e-445a-b2d0-56c91cff7483","Type":"ContainerStarted","Data":"5fabfdddb682043d6778d1f166312842d0ac1c778e75c90c0f0b3466f9c1ea43"}
Dec 08 17:43:57 crc kubenswrapper[5116]: I1208 17:43:57.177177 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-w5bp9" event={"ID":"fdb4d104-2982-4fda-904d-860b430ccc30","Type":"ContainerStarted","Data":"5b97f0243ede28048b3c65af2e873033a40ff606ad8e7f66d8f4f49f78cfd526"}
Dec 08 17:43:57 crc kubenswrapper[5116]: I1208 17:43:57.192487 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-kt94l\" (UID: \"83ea28a4-865d-4cee-aaa2-7adcccfba4a2\") " pod="openshift-image-registry/image-registry-66587d64c8-kt94l"
Dec 08 17:43:57 crc kubenswrapper[5116]: E1208 17:43:57.193375 5116
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 17:43:57.693353953 +0000 UTC m=+107.490477187 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-kt94l" (UID: "83ea28a4-865d-4cee-aaa2-7adcccfba4a2") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 17:43:57 crc kubenswrapper[5116]: I1208 17:43:57.234991 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-69db94689b-9bt88" event={"ID":"88c9e622-e4c0-49b0-a481-fb6e32fc0505","Type":"ContainerStarted","Data":"4f3fd242206119ccebf450955d4c5eab5691cec30d5fd3c358cef9a5018f4435"} Dec 08 17:43:57 crc kubenswrapper[5116]: I1208 17:43:57.244788 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-pp68b" event={"ID":"d3dc86ab-217d-4d86-8381-16465ee204c8","Type":"ContainerStarted","Data":"7b265fc48e5bffd954e8b9a21bed834defd4d0ddfdb683fcdfe838edba649e87"} Dec 08 17:43:57 crc kubenswrapper[5116]: I1208 17:43:57.262596 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-747b44746d-4msk8" event={"ID":"e71c8014-5266-4483-8037-e8d9e7995c1b","Type":"ContainerStarted","Data":"91cc675e18c87381d89df0ab3058703a35c0d69a4b795fb844b3fdc567d80207"} Dec 08 17:43:57 crc kubenswrapper[5116]: I1208 17:43:57.278427 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-hjdgl" 
event={"ID":"c60dd3fb-226c-4117-a898-4efde2c99ca8","Type":"ContainerStarted","Data":"86b3ad6f45cda75a80cbd97e31d1cf78541f93bed5a48b4a1ab30c9a76cae9e6"} Dec 08 17:43:57 crc kubenswrapper[5116]: I1208 17:43:57.296082 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 08 17:43:57 crc kubenswrapper[5116]: E1208 17:43:57.297662 5116 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 17:43:57.79761392 +0000 UTC m=+107.594737164 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 17:43:57 crc kubenswrapper[5116]: I1208 17:43:57.302352 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-lnhtb" event={"ID":"2a5179cd-4e4d-401c-af27-a30fc32e5146","Type":"ContainerStarted","Data":"1896f64a8dd2f956c3fdd0082a120d19081a4d873902fa40c983fbee9ad6191d"} Dec 08 17:43:57 crc kubenswrapper[5116]: I1208 17:43:57.302407 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-lnhtb" 
event={"ID":"2a5179cd-4e4d-401c-af27-a30fc32e5146","Type":"ContainerStarted","Data":"1ef96e7b0a50816d09a4f16ea251718217e53291ebb3ba83777838a145cd19ae"} Dec 08 17:43:57 crc kubenswrapper[5116]: I1208 17:43:57.305315 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-74545575db-g68gs" event={"ID":"3c5d3e24-1c23-4245-bf46-d6ba11dfbd51","Type":"ContainerStarted","Data":"3b9965fac95f20d412af5a94e6529cf4ccf9a320c755f2362a82bb84f3ed12b4"} Dec 08 17:43:57 crc kubenswrapper[5116]: I1208 17:43:57.307643 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-sqm4p" event={"ID":"b7623dbe-7780-43e2-8225-9cf0b9e83951","Type":"ContainerStarted","Data":"5e9134e0a6516f95988ab9c8e9b844ad05a95cba6bc8715e362b0e2addb80695"} Dec 08 17:43:57 crc kubenswrapper[5116]: I1208 17:43:57.316767 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-866fcbc849-gk2ww" event={"ID":"5c183f4a-f4d4-4584-916d-1055aa64de78","Type":"ContainerStarted","Data":"63e5b826381891a3453da72981c635039dfd58b3c280730f19f40f2267eb2f17"} Dec 08 17:43:57 crc kubenswrapper[5116]: I1208 17:43:57.337712 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-zn9l4" event={"ID":"b6973f5d-d174-4643-814d-e929acd898ba","Type":"ContainerStarted","Data":"8adcf783d94616521d07341f15dff6508628f715b7a7253ee0a712fc71971a2e"} Dec 08 17:43:57 crc kubenswrapper[5116]: I1208 17:43:57.379775 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-799b87ffcd-wf75r" event={"ID":"94ce1afb-999a-45b5-847a-b3a71aa87c89","Type":"ContainerStarted","Data":"6e236f8c2b1ba181e66e9d16153c484cd86c448c2028799539e6cd3d0d65e602"} Dec 08 17:43:57 crc kubenswrapper[5116]: I1208 17:43:57.393961 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" 
pod="openshift-service-ca-operator/service-ca-operator-5b9c976747-vcg8j" event={"ID":"64540db1-3951-4714-a14a-542d29e00e3c","Type":"ContainerStarted","Data":"fe8f653ef018072731e0af305b555f65567d207d412d799bcb57d7c1d4a92471"} Dec 08 17:43:57 crc kubenswrapper[5116]: I1208 17:43:57.400129 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-kt94l\" (UID: \"83ea28a4-865d-4cee-aaa2-7adcccfba4a2\") " pod="openshift-image-registry/image-registry-66587d64c8-kt94l" Dec 08 17:43:57 crc kubenswrapper[5116]: E1208 17:43:57.400538 5116 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 17:43:57.900521152 +0000 UTC m=+107.697644386 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-kt94l" (UID: "83ea28a4-865d-4cee-aaa2-7adcccfba4a2") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 17:43:57 crc kubenswrapper[5116]: I1208 17:43:57.406191 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-mv9qd" event={"ID":"a4183c4d-f709-4d5b-a9a4-180284f37cc8","Type":"ContainerStarted","Data":"fdff9e942f11df2937a23ca0423cb47a0ca2bba7fb62d681e6193f708bd3fb64"} Dec 08 17:43:57 crc kubenswrapper[5116]: I1208 17:43:57.410981 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-tlxxd" event={"ID":"2d2b0fc4-9619-4e70-92a9-06896ea298f4","Type":"ContainerStarted","Data":"6a94b6066f390e07164947f7e80510e85ba761e43e970dc219af37e0f3f98d74"} Dec 08 17:43:57 crc kubenswrapper[5116]: I1208 17:43:57.417718 5116 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns-operator/dns-operator-799b87ffcd-wf75r" podStartSLOduration=86.417692629 podStartE2EDuration="1m26.417692629s" podCreationTimestamp="2025-12-08 17:42:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-08 17:43:57.417189446 +0000 UTC m=+107.214312680" watchObservedRunningTime="2025-12-08 17:43:57.417692629 +0000 UTC m=+107.214815863" Dec 08 17:43:57 crc kubenswrapper[5116]: I1208 17:43:57.440631 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-kdrb8" 
event={"ID":"56b4bc0e-8c13-439e-9293-f70b35418ce0","Type":"ContainerStarted","Data":"4b0c796e6bd60091a2ffb8e34b0acc8986951f8b693548e5f41a1360eea65f8e"} Dec 08 17:43:57 crc kubenswrapper[5116]: I1208 17:43:57.489104 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-n9kbg" event={"ID":"3ae931b0-256c-4067-8c1c-a56b5fe1f5f9","Type":"ContainerStarted","Data":"98ab69b25627d4c6bcd5b42ecec095da40749c94f8f0652dd1e3193699ed5edb"} Dec 08 17:43:57 crc kubenswrapper[5116]: I1208 17:43:57.491518 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-75ffdb6fcd-jcz8f" event={"ID":"b9141db3-856a-4cae-ad18-f0eb4a53c8f8","Type":"ContainerStarted","Data":"162a1ddeb896774d975a7a861c55cd000d68bedeb25e7f4c1e846975985581ca"} Dec 08 17:43:57 crc kubenswrapper[5116]: I1208 17:43:57.501535 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-44l68" event={"ID":"d63661bb-2db9-4a87-ae13-21a004bf32a9","Type":"ContainerStarted","Data":"c633b6f5aea9447c568a6d6cc3691e993d08a967326024e4968a27495d5f11f9"} Dec 08 17:43:57 crc kubenswrapper[5116]: I1208 17:43:57.501631 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 08 17:43:57 crc kubenswrapper[5116]: E1208 17:43:57.501787 5116 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. 
No retries permitted until 2025-12-08 17:43:58.00176342 +0000 UTC m=+107.798886654 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 17:43:57 crc kubenswrapper[5116]: I1208 17:43:57.502677 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-kt94l\" (UID: \"83ea28a4-865d-4cee-aaa2-7adcccfba4a2\") " pod="openshift-image-registry/image-registry-66587d64c8-kt94l" Dec 08 17:43:57 crc kubenswrapper[5116]: E1208 17:43:57.503161 5116 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 17:43:58.003153226 +0000 UTC m=+107.800276460 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-kt94l" (UID: "83ea28a4-865d-4cee-aaa2-7adcccfba4a2") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 17:43:57 crc kubenswrapper[5116]: I1208 17:43:57.505397 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-69b85846b6-9xp42" event={"ID":"ae32fe26-12f4-4893-b748-d39bf6908a5f","Type":"ContainerStarted","Data":"bf6e176fae244478c0878e80606963139e12d67fd017e02be8424fc212e34346"} Dec 08 17:43:57 crc kubenswrapper[5116]: I1208 17:43:57.506909 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-hzpcx" event={"ID":"b1d60def-1a02-4598-801e-fc4fdfaabcf4","Type":"ContainerStarted","Data":"6b2937605e20a4d4e5066bd773cd723f6d04c0a4e6e8ab19130e6a2967d78fdd"} Dec 08 17:43:57 crc kubenswrapper[5116]: I1208 17:43:57.510967 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-57tcr" event={"ID":"15fa3f1d-6230-4602-a46a-1f9b84a147fa","Type":"ContainerStarted","Data":"8fa9f90154185fb2b06c6171ce73ce34db5add725016d184d16509cbfdb745f8"} Dec 08 17:43:57 crc kubenswrapper[5116]: I1208 17:43:57.536989 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-6s8th" event={"ID":"5016d861-0431-4e4a-bbe3-c7032eb529c7","Type":"ContainerStarted","Data":"2873387db3975c88d4552fd296bc26beb3ac069722b7ce35c953489488fd8d92"} Dec 08 17:43:57 crc kubenswrapper[5116]: I1208 17:43:57.553708 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" 
pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-6dz8n" event={"ID":"cc5709c8-d943-4d2c-bd51-bdf689fe3714","Type":"ContainerStarted","Data":"3e7e33f68c9906d1ffddf5642d45d63a8430f57f53fe5d5ec714ddca96b39834"} Dec 08 17:43:57 crc kubenswrapper[5116]: I1208 17:43:57.598621 5116 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-547dbd544d-w2582" Dec 08 17:43:57 crc kubenswrapper[5116]: I1208 17:43:57.603904 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 08 17:43:57 crc kubenswrapper[5116]: E1208 17:43:57.604174 5116 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 17:43:58.104128228 +0000 UTC m=+107.901251462 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 17:43:57 crc kubenswrapper[5116]: I1208 17:43:57.604572 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-kt94l\" (UID: \"83ea28a4-865d-4cee-aaa2-7adcccfba4a2\") " pod="openshift-image-registry/image-registry-66587d64c8-kt94l" Dec 08 17:43:57 crc kubenswrapper[5116]: E1208 17:43:57.605406 5116 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 17:43:58.105394241 +0000 UTC m=+107.902517475 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-kt94l" (UID: "83ea28a4-865d-4cee-aaa2-7adcccfba4a2") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 17:43:57 crc kubenswrapper[5116]: I1208 17:43:57.626779 5116 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-hv2nc container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 08 17:43:57 crc kubenswrapper[5116]: [-]has-synced failed: reason withheld Dec 08 17:43:57 crc kubenswrapper[5116]: [+]process-running ok Dec 08 17:43:57 crc kubenswrapper[5116]: healthz check failed Dec 08 17:43:57 crc kubenswrapper[5116]: I1208 17:43:57.627065 5116 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-hv2nc" podUID="0deef197-8a46-46ea-a786-7e9518318396" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 08 17:43:57 crc kubenswrapper[5116]: I1208 17:43:57.719018 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 08 17:43:57 crc kubenswrapper[5116]: E1208 17:43:57.723869 5116 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. 
No retries permitted until 2025-12-08 17:43:58.223844568 +0000 UTC m=+108.020967802 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 17:43:57 crc kubenswrapper[5116]: I1208 17:43:57.729531 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-kt94l\" (UID: \"83ea28a4-865d-4cee-aaa2-7adcccfba4a2\") " pod="openshift-image-registry/image-registry-66587d64c8-kt94l" Dec 08 17:43:57 crc kubenswrapper[5116]: E1208 17:43:57.734965 5116 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 17:43:58.234946466 +0000 UTC m=+108.032069690 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-kt94l" (UID: "83ea28a4-865d-4cee-aaa2-7adcccfba4a2") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 17:43:57 crc kubenswrapper[5116]: I1208 17:43:57.832229 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 08 17:43:57 crc kubenswrapper[5116]: E1208 17:43:57.832692 5116 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 17:43:58.332619032 +0000 UTC m=+108.129742276 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 17:43:57 crc kubenswrapper[5116]: I1208 17:43:57.862781 5116 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-6dz8n" podStartSLOduration=85.862744378 podStartE2EDuration="1m25.862744378s" podCreationTimestamp="2025-12-08 17:42:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-08 17:43:57.860744555 +0000 UTC m=+107.657867779" watchObservedRunningTime="2025-12-08 17:43:57.862744378 +0000 UTC m=+107.659867612" Dec 08 17:43:57 crc kubenswrapper[5116]: I1208 17:43:57.888549 5116 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-multus/cni-sysctl-allowlist-ds-qsgps"] Dec 08 17:43:57 crc kubenswrapper[5116]: I1208 17:43:57.938103 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-kt94l\" (UID: \"83ea28a4-865d-4cee-aaa2-7adcccfba4a2\") " pod="openshift-image-registry/image-registry-66587d64c8-kt94l" Dec 08 17:43:57 crc kubenswrapper[5116]: E1208 17:43:57.938593 5116 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. 
No retries permitted until 2025-12-08 17:43:58.438575774 +0000 UTC m=+108.235699008 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-kt94l" (UID: "83ea28a4-865d-4cee-aaa2-7adcccfba4a2") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 17:43:58 crc kubenswrapper[5116]: I1208 17:43:58.039222 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 08 17:43:58 crc kubenswrapper[5116]: E1208 17:43:58.039636 5116 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 17:43:58.539584366 +0000 UTC m=+108.336707600 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 17:43:58 crc kubenswrapper[5116]: I1208 17:43:58.040489 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-kt94l\" (UID: \"83ea28a4-865d-4cee-aaa2-7adcccfba4a2\") " pod="openshift-image-registry/image-registry-66587d64c8-kt94l" Dec 08 17:43:58 crc kubenswrapper[5116]: E1208 17:43:58.040919 5116 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 17:43:58.54090059 +0000 UTC m=+108.338023824 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-kt94l" (UID: "83ea28a4-865d-4cee-aaa2-7adcccfba4a2") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 17:43:58 crc kubenswrapper[5116]: I1208 17:43:58.142525 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 08 17:43:58 crc kubenswrapper[5116]: E1208 17:43:58.142964 5116 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 17:43:58.642933999 +0000 UTC m=+108.440057233 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 08 17:43:58 crc kubenswrapper[5116]: I1208 17:43:58.262969 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-kt94l\" (UID: \"83ea28a4-865d-4cee-aaa2-7adcccfba4a2\") " pod="openshift-image-registry/image-registry-66587d64c8-kt94l"
Dec 08 17:43:58 crc kubenswrapper[5116]: E1208 17:43:58.263742 5116 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 17:43:58.763714596 +0000 UTC m=+108.560837830 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-kt94l" (UID: "83ea28a4-865d-4cee-aaa2-7adcccfba4a2") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 08 17:43:58 crc kubenswrapper[5116]: I1208 17:43:58.364233 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Dec 08 17:43:58 crc kubenswrapper[5116]: E1208 17:43:58.364847 5116 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 17:43:58.864830401 +0000 UTC m=+108.661953635 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 08 17:43:58 crc kubenswrapper[5116]: I1208 17:43:58.466386 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-kt94l\" (UID: \"83ea28a4-865d-4cee-aaa2-7adcccfba4a2\") " pod="openshift-image-registry/image-registry-66587d64c8-kt94l"
Dec 08 17:43:58 crc kubenswrapper[5116]: E1208 17:43:58.466805 5116 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 17:43:58.966784308 +0000 UTC m=+108.763907542 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-kt94l" (UID: "83ea28a4-865d-4cee-aaa2-7adcccfba4a2") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 08 17:43:58 crc kubenswrapper[5116]: E1208 17:43:58.568676 5116 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 17:43:59.068655704 +0000 UTC m=+108.865778938 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 08 17:43:58 crc kubenswrapper[5116]: I1208 17:43:58.568580 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Dec 08 17:43:58 crc kubenswrapper[5116]: I1208 17:43:58.569687 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-kt94l\" (UID: \"83ea28a4-865d-4cee-aaa2-7adcccfba4a2\") " pod="openshift-image-registry/image-registry-66587d64c8-kt94l"
Dec 08 17:43:58 crc kubenswrapper[5116]: E1208 17:43:58.570100 5116 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 17:43:59.070081541 +0000 UTC m=+108.867204775 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-kt94l" (UID: "83ea28a4-865d-4cee-aaa2-7adcccfba4a2") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 08 17:43:58 crc kubenswrapper[5116]: I1208 17:43:58.625781 5116 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-hv2nc container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Dec 08 17:43:58 crc kubenswrapper[5116]: [-]has-synced failed: reason withheld
Dec 08 17:43:58 crc kubenswrapper[5116]: [+]process-running ok
Dec 08 17:43:58 crc kubenswrapper[5116]: healthz check failed
Dec 08 17:43:58 crc kubenswrapper[5116]: I1208 17:43:58.625862 5116 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-hv2nc" podUID="0deef197-8a46-46ea-a786-7e9518318396" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Dec 08 17:43:58 crc kubenswrapper[5116]: I1208 17:43:58.670801 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Dec 08 17:43:58 crc kubenswrapper[5116]: E1208 17:43:58.671400 5116 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 17:43:59.17138051 +0000 UTC m=+108.968503744 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 08 17:43:58 crc kubenswrapper[5116]: I1208 17:43:58.772476 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-kt94l\" (UID: \"83ea28a4-865d-4cee-aaa2-7adcccfba4a2\") " pod="openshift-image-registry/image-registry-66587d64c8-kt94l"
Dec 08 17:43:58 crc kubenswrapper[5116]: E1208 17:43:58.773026 5116 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 17:43:59.273001649 +0000 UTC m=+109.070124883 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-kt94l" (UID: "83ea28a4-865d-4cee-aaa2-7adcccfba4a2") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 08 17:43:58 crc kubenswrapper[5116]: I1208 17:43:58.798741 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-hzpcx" event={"ID":"b1d60def-1a02-4598-801e-fc4fdfaabcf4","Type":"ContainerStarted","Data":"7d28338178fef0e82cb04a2b146170364454bac9dd67c9a22cf977596733989b"}
Dec 08 17:43:58 crc kubenswrapper[5116]: I1208 17:43:58.817340 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-6dz8n" event={"ID":"cc5709c8-d943-4d2c-bd51-bdf689fe3714","Type":"ContainerStarted","Data":"2c189234e8e84ba2d811c2919c808f69c22a3eb146495d7027dde4ddbb7e3155"}
Dec 08 17:43:58 crc kubenswrapper[5116]: I1208 17:43:58.817753 5116 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-6dz8n"
Dec 08 17:43:58 crc kubenswrapper[5116]: I1208 17:43:58.820101 5116 patch_prober.go:28] interesting pod/catalog-operator-75ff9f647d-6dz8n container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.21:8443/healthz\": dial tcp 10.217.0.21:8443: connect: connection refused" start-of-body=
Dec 08 17:43:58 crc kubenswrapper[5116]: I1208 17:43:58.820217 5116 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-6dz8n" podUID="cc5709c8-d943-4d2c-bd51-bdf689fe3714" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.21:8443/healthz\": dial tcp 10.217.0.21:8443: connect: connection refused"
Dec 08 17:43:58 crc kubenswrapper[5116]: I1208 17:43:58.827057 5116 ???:1] "http: TLS handshake error from 192.168.126.11:35844: no serving certificate available for the kubelet"
Dec 08 17:43:58 crc kubenswrapper[5116]: I1208 17:43:58.850503 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-c642w" event={"ID":"a34224e8-837b-4261-8b6d-3e9273996375","Type":"ContainerStarted","Data":"c75143d49cfe581dded9aa413d162b4774c4a93e8146ae1d3856d3d08832a88e"}
Dec 08 17:43:58 crc kubenswrapper[5116]: I1208 17:43:58.877357 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Dec 08 17:43:58 crc kubenswrapper[5116]: E1208 17:43:58.879017 5116 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 17:43:59.378987081 +0000 UTC m=+109.176110315 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 08 17:43:58 crc kubenswrapper[5116]: I1208 17:43:58.889034 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29420250-rrwn6" event={"ID":"dc8a8f38-928e-445a-b2d0-56c91cff7483","Type":"ContainerStarted","Data":"2a320559c55c3849d27ba5a5fcc52056d70098deb2e8cc9a6ddf82aa571aefee"}
Dec 08 17:43:58 crc kubenswrapper[5116]: I1208 17:43:58.889303 5116 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-canary/ingress-canary-c642w" podStartSLOduration=10.889285549 podStartE2EDuration="10.889285549s" podCreationTimestamp="2025-12-08 17:43:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-08 17:43:58.887486853 +0000 UTC m=+108.684610087" watchObservedRunningTime="2025-12-08 17:43:58.889285549 +0000 UTC m=+108.686408783"
Dec 08 17:43:58 crc kubenswrapper[5116]: I1208 17:43:58.901862 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-w5bp9" event={"ID":"fdb4d104-2982-4fda-904d-860b430ccc30","Type":"ContainerStarted","Data":"280cd3d1c37a743957f5b8a425e2aabd5d1349dee45713eff2a8e6c5bdf16934"}
Dec 08 17:43:58 crc kubenswrapper[5116]: I1208 17:43:58.902647 5116 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-w5bp9"
Dec 08 17:43:58 crc kubenswrapper[5116]: I1208 17:43:58.906416 5116 patch_prober.go:28] interesting pod/olm-operator-5cdf44d969-w5bp9 container/olm-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.31:8443/healthz\": dial tcp 10.217.0.31:8443: connect: connection refused" start-of-body=
Dec 08 17:43:58 crc kubenswrapper[5116]: I1208 17:43:58.906505 5116 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-w5bp9" podUID="fdb4d104-2982-4fda-904d-860b430ccc30" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.31:8443/healthz\": dial tcp 10.217.0.31:8443: connect: connection refused"
Dec 08 17:43:58 crc kubenswrapper[5116]: I1208 17:43:58.926391 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-pp68b" event={"ID":"d3dc86ab-217d-4d86-8381-16465ee204c8","Type":"ContainerStarted","Data":"22e045eab1a59f2725fab140e2c484c2c761038f2b9fdc6ae6490d17281e6fb5"}
Dec 08 17:43:58 crc kubenswrapper[5116]: I1208 17:43:58.929181 5116 ???:1] "http: TLS handshake error from 192.168.126.11:35858: no serving certificate available for the kubelet"
Dec 08 17:43:58 crc kubenswrapper[5116]: I1208 17:43:58.937973 5116 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29420250-rrwn6" podStartSLOduration=87.937953988 podStartE2EDuration="1m27.937953988s" podCreationTimestamp="2025-12-08 17:42:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-08 17:43:58.937745671 +0000 UTC m=+108.734868905" watchObservedRunningTime="2025-12-08 17:43:58.937953988 +0000 UTC m=+108.735077222"
Dec 08 17:43:58 crc kubenswrapper[5116]: I1208 17:43:58.981739 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-kt94l\" (UID: \"83ea28a4-865d-4cee-aaa2-7adcccfba4a2\") " pod="openshift-image-registry/image-registry-66587d64c8-kt94l"
Dec 08 17:43:58 crc kubenswrapper[5116]: E1208 17:43:58.982750 5116 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 17:43:59.482737375 +0000 UTC m=+109.279860609 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-kt94l" (UID: "83ea28a4-865d-4cee-aaa2-7adcccfba4a2") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 08 17:43:59 crc kubenswrapper[5116]: I1208 17:43:59.012852 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-747b44746d-4msk8" event={"ID":"e71c8014-5266-4483-8037-e8d9e7995c1b","Type":"ContainerStarted","Data":"41d5d16f8c49b62e4ed97c27dad7f98ffd3b757b2e08de3d43ba27ba87f20c52"}
Dec 08 17:43:59 crc kubenswrapper[5116]: I1208 17:43:59.013133 5116 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-console/downloads-747b44746d-4msk8"
Dec 08 17:43:59 crc kubenswrapper[5116]: I1208 17:43:59.022720 5116 patch_prober.go:28] interesting pod/downloads-747b44746d-4msk8 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.15:8080/\": dial tcp 10.217.0.15:8080: connect: connection refused" start-of-body=
Dec 08 17:43:59 crc kubenswrapper[5116]: I1208 17:43:59.022831 5116 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-747b44746d-4msk8" podUID="e71c8014-5266-4483-8037-e8d9e7995c1b" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.15:8080/\": dial tcp 10.217.0.15:8080: connect: connection refused"
Dec 08 17:43:59 crc kubenswrapper[5116]: I1208 17:43:59.023420 5116 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-w5bp9" podStartSLOduration=87.023396634 podStartE2EDuration="1m27.023396634s" podCreationTimestamp="2025-12-08 17:42:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-08 17:43:59.021185906 +0000 UTC m=+108.818309140" watchObservedRunningTime="2025-12-08 17:43:59.023396634 +0000 UTC m=+108.820519868"
Dec 08 17:43:59 crc kubenswrapper[5116]: I1208 17:43:59.025094 5116 ???:1] "http: TLS handshake error from 192.168.126.11:35868: no serving certificate available for the kubelet"
Dec 08 17:43:59 crc kubenswrapper[5116]: I1208 17:43:59.121292 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-hjdgl" event={"ID":"c60dd3fb-226c-4117-a898-4efde2c99ca8","Type":"ContainerStarted","Data":"560c5a2e5fe2de8d95bdfa939944d79ccde2c6e307d6ebb28516d4f73b37269e"}
Dec 08 17:43:59 crc kubenswrapper[5116]: I1208 17:43:59.147332 5116 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/downloads-747b44746d-4msk8" podStartSLOduration=88.147305303 podStartE2EDuration="1m28.147305303s" podCreationTimestamp="2025-12-08 17:42:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-08 17:43:59.070880132 +0000 UTC m=+108.868003376" watchObservedRunningTime="2025-12-08 17:43:59.147305303 +0000 UTC m=+108.944428537"
Dec 08 17:43:59 crc kubenswrapper[5116]: I1208 17:43:59.147694 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Dec 08 17:43:59 crc kubenswrapper[5116]: E1208 17:43:59.149681 5116 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 17:43:59.649652184 +0000 UTC m=+109.446775418 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 08 17:43:59 crc kubenswrapper[5116]: I1208 17:43:59.152893 5116 ???:1] "http: TLS handshake error from 192.168.126.11:35878: no serving certificate available for the kubelet"
Dec 08 17:43:59 crc kubenswrapper[5116]: I1208 17:43:59.243296 5116 ???:1] "http: TLS handshake error from 192.168.126.11:35882: no serving certificate available for the kubelet"
Dec 08 17:43:59 crc kubenswrapper[5116]: I1208 17:43:59.249366 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-kt94l\" (UID: \"83ea28a4-865d-4cee-aaa2-7adcccfba4a2\") " pod="openshift-image-registry/image-registry-66587d64c8-kt94l"
Dec 08 17:43:59 crc kubenswrapper[5116]: E1208 17:43:59.249716 5116 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 17:43:59.749702401 +0000 UTC m=+109.546825625 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-kt94l" (UID: "83ea28a4-865d-4cee-aaa2-7adcccfba4a2") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 08 17:43:59 crc kubenswrapper[5116]: I1208 17:43:59.350796 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Dec 08 17:43:59 crc kubenswrapper[5116]: E1208 17:43:59.351311 5116 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 17:43:59.851292639 +0000 UTC m=+109.648415873 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 08 17:43:59 crc kubenswrapper[5116]: I1208 17:43:59.456016 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-kt94l\" (UID: \"83ea28a4-865d-4cee-aaa2-7adcccfba4a2\") " pod="openshift-image-registry/image-registry-66587d64c8-kt94l"
Dec 08 17:43:59 crc kubenswrapper[5116]: E1208 17:43:59.456590 5116 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 17:43:59.956566822 +0000 UTC m=+109.753690056 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-kt94l" (UID: "83ea28a4-865d-4cee-aaa2-7adcccfba4a2") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 08 17:43:59 crc kubenswrapper[5116]: I1208 17:43:59.480505 5116 ???:1] "http: TLS handshake error from 192.168.126.11:35896: no serving certificate available for the kubelet"
Dec 08 17:43:59 crc kubenswrapper[5116]: I1208 17:43:59.495477 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-866fcbc849-gk2ww" event={"ID":"5c183f4a-f4d4-4584-916d-1055aa64de78","Type":"ContainerStarted","Data":"b7de25dbf79ac5b3aea5d956bc7f2bfdae1803ecd56c1617834566bba4399de7"}
Dec 08 17:43:59 crc kubenswrapper[5116]: I1208 17:43:59.521167 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-zn9l4" event={"ID":"b6973f5d-d174-4643-814d-e929acd898ba","Type":"ContainerStarted","Data":"5559006c050facf49c08891a8e499dd2bb76b3bbf6c5b4c65525a02b8b9eab08"}
Dec 08 17:43:59 crc kubenswrapper[5116]: I1208 17:43:59.540402 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-75ffdb6fcd-jcz8f" event={"ID":"b9141db3-856a-4cae-ad18-f0eb4a53c8f8","Type":"ContainerStarted","Data":"6a7103df56e201a016162998ba1f19565154312e031ad0d69e47c417b4d1616b"}
Dec 08 17:43:59 crc kubenswrapper[5116]: I1208 17:43:59.541364 5116 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-multus/cni-sysctl-allowlist-ds-qsgps" podUID="d73c8661-d51c-4d6e-a981-e186a3fc1964" containerName="kube-multus-additional-cni-plugins" containerID="cri-o://7478cafb951735df5624cf4c67a2f9fe7cecc5f7456b3ba6f18b26ee5f91e985" gracePeriod=30
Dec 08 17:43:59 crc kubenswrapper[5116]: I1208 17:43:59.557528 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Dec 08 17:43:59 crc kubenswrapper[5116]: E1208 17:43:59.557936 5116 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 17:44:00.057919903 +0000 UTC m=+109.855043137 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 08 17:43:59 crc kubenswrapper[5116]: I1208 17:43:59.579607 5116 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/control-plane-machine-set-operator-75ffdb6fcd-jcz8f" podStartSLOduration=87.579589478 podStartE2EDuration="1m27.579589478s" podCreationTimestamp="2025-12-08 17:42:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-08 17:43:59.578502269 +0000 UTC m=+109.375625513" watchObservedRunningTime="2025-12-08 17:43:59.579589478 +0000 UTC m=+109.376712722"
Dec 08 17:43:59 crc kubenswrapper[5116]: I1208 17:43:59.582342 5116 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-hjdgl" podStartSLOduration=88.582323819 podStartE2EDuration="1m28.582323819s" podCreationTimestamp="2025-12-08 17:42:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-08 17:43:59.148367901 +0000 UTC m=+108.945491165" watchObservedRunningTime="2025-12-08 17:43:59.582323819 +0000 UTC m=+109.379447053"
Dec 08 17:43:59 crc kubenswrapper[5116]: I1208 17:43:59.645011 5116 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-hv2nc container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Dec 08 17:43:59 crc kubenswrapper[5116]: [-]has-synced failed: reason withheld
Dec 08 17:43:59 crc kubenswrapper[5116]: [+]process-running ok
Dec 08 17:43:59 crc kubenswrapper[5116]: healthz check failed
Dec 08 17:43:59 crc kubenswrapper[5116]: I1208 17:43:59.645095 5116 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-hv2nc" podUID="0deef197-8a46-46ea-a786-7e9518318396" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Dec 08 17:43:59 crc kubenswrapper[5116]: I1208 17:43:59.686376 5116 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-apiserver/apiserver-9ddfb9f55-b2n2w"
Dec 08 17:43:59 crc kubenswrapper[5116]: I1208 17:43:59.687280 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-kt94l\" (UID: \"83ea28a4-865d-4cee-aaa2-7adcccfba4a2\") " pod="openshift-image-registry/image-registry-66587d64c8-kt94l"
Dec 08 17:43:59 crc kubenswrapper[5116]: E1208 17:43:59.688117 5116 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 17:44:00.188103415 +0000 UTC m=+109.985226649 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-kt94l" (UID: "83ea28a4-865d-4cee-aaa2-7adcccfba4a2") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 08 17:43:59 crc kubenswrapper[5116]: I1208 17:43:59.689089 5116 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-apiserver/apiserver-9ddfb9f55-b2n2w"
Dec 08 17:43:59 crc kubenswrapper[5116]: I1208 17:43:59.741287 5116 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-oauth-apiserver/apiserver-8596bd845d-nlzmf"
Dec 08 17:43:59 crc kubenswrapper[5116]: I1208 17:43:59.741345 5116 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-oauth-apiserver/apiserver-8596bd845d-nlzmf"
Dec 08 17:43:59 crc kubenswrapper[5116]: I1208 17:43:59.759516 5116 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-config-operator/openshift-config-operator-5777786469-86sn8"
Dec 08 17:43:59 crc kubenswrapper[5116]: I1208 17:43:59.794401 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Dec 08 17:43:59 crc kubenswrapper[5116]: E1208 17:43:59.795447 5116 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 17:44:00.295431883 +0000 UTC m=+110.092555117 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 08 17:43:59 crc kubenswrapper[5116]: I1208 17:43:59.896295 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-kt94l\" (UID: \"83ea28a4-865d-4cee-aaa2-7adcccfba4a2\") " pod="openshift-image-registry/image-registry-66587d64c8-kt94l"
Dec 08 17:43:59 crc kubenswrapper[5116]: E1208 17:43:59.896801 5116 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 17:44:00.396786003 +0000 UTC m=+110.193909237 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-kt94l" (UID: "83ea28a4-865d-4cee-aaa2-7adcccfba4a2") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 08 17:44:00 crc kubenswrapper[5116]: I1208 17:43:59.996707 5116 ???:1] "http: TLS handshake error from 192.168.126.11:35906: no serving certificate available for the kubelet"
Dec 08 17:44:00 crc kubenswrapper[5116]: I1208 17:43:59.997859 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Dec 08 17:44:00 crc kubenswrapper[5116]: E1208 17:43:59.998233 5116 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 17:44:00.498209997 +0000 UTC m=+110.295333231 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 08 17:44:00 crc kubenswrapper[5116]: I1208 17:44:00.059643 5116 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-oauth-apiserver/apiserver-8596bd845d-nlzmf"
Dec 08 17:44:00 crc kubenswrapper[5116]: I1208 17:44:00.160186 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-kt94l\" (UID: \"83ea28a4-865d-4cee-aaa2-7adcccfba4a2\") " pod="openshift-image-registry/image-registry-66587d64c8-kt94l"
Dec 08 17:44:00 crc kubenswrapper[5116]: E1208 17:44:00.162288 5116 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 17:44:00.662272562 +0000 UTC m=+110.459395796 (durationBeforeRetry 500ms).
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-kt94l" (UID: "83ea28a4-865d-4cee-aaa2-7adcccfba4a2") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 17:44:00 crc kubenswrapper[5116]: I1208 17:44:00.284931 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 08 17:44:00 crc kubenswrapper[5116]: E1208 17:44:00.285687 5116 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 17:44:00.785657778 +0000 UTC m=+110.582781012 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 17:44:00 crc kubenswrapper[5116]: I1208 17:44:00.387232 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-kt94l\" (UID: \"83ea28a4-865d-4cee-aaa2-7adcccfba4a2\") " pod="openshift-image-registry/image-registry-66587d64c8-kt94l" Dec 08 17:44:00 crc kubenswrapper[5116]: E1208 17:44:00.387691 5116 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 17:44:00.887677777 +0000 UTC m=+110.684801011 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-kt94l" (UID: "83ea28a4-865d-4cee-aaa2-7adcccfba4a2") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 17:44:00 crc kubenswrapper[5116]: I1208 17:44:00.434232 5116 ???:1] "http: TLS handshake error from 192.168.126.11:35916: no serving certificate available for the kubelet" Dec 08 17:44:00 crc kubenswrapper[5116]: I1208 17:44:00.488869 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 08 17:44:00 crc kubenswrapper[5116]: E1208 17:44:00.489114 5116 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 17:44:00.989070899 +0000 UTC m=+110.786194133 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 17:44:00 crc kubenswrapper[5116]: I1208 17:44:00.489588 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-kt94l\" (UID: \"83ea28a4-865d-4cee-aaa2-7adcccfba4a2\") " pod="openshift-image-registry/image-registry-66587d64c8-kt94l" Dec 08 17:44:00 crc kubenswrapper[5116]: E1208 17:44:00.490072 5116 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 17:44:00.990053484 +0000 UTC m=+110.787176718 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-kt94l" (UID: "83ea28a4-865d-4cee-aaa2-7adcccfba4a2") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 17:44:00 crc kubenswrapper[5116]: I1208 17:44:00.510127 5116 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-64d44f6ddf-l4b2c" Dec 08 17:44:00 crc kubenswrapper[5116]: I1208 17:44:00.510170 5116 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-console/console-64d44f6ddf-l4b2c" Dec 08 17:44:00 crc kubenswrapper[5116]: I1208 17:44:00.516805 5116 patch_prober.go:28] interesting pod/console-64d44f6ddf-l4b2c container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.24:8443/health\": dial tcp 10.217.0.24:8443: connect: connection refused" start-of-body= Dec 08 17:44:00 crc kubenswrapper[5116]: I1208 17:44:00.516888 5116 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-console/console-64d44f6ddf-l4b2c" podUID="eaf2ae84-8492-41c0-b678-ab302371258a" containerName="console" probeResult="failure" output="Get \"https://10.217.0.24:8443/health\": dial tcp 10.217.0.24:8443: connect: connection refused" Dec 08 17:44:00 crc kubenswrapper[5116]: I1208 17:44:00.592612 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 08 17:44:00 crc kubenswrapper[5116]: E1208 17:44:00.593123 5116 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 17:44:01.093068709 +0000 UTC m=+110.890191943 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 17:44:00 crc kubenswrapper[5116]: I1208 17:44:00.595605 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-kt94l\" (UID: \"83ea28a4-865d-4cee-aaa2-7adcccfba4a2\") " pod="openshift-image-registry/image-registry-66587d64c8-kt94l" Dec 08 17:44:00 crc kubenswrapper[5116]: E1208 17:44:00.600495 5116 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 17:44:01.100467182 +0000 UTC m=+110.897590416 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-kt94l" (UID: "83ea28a4-865d-4cee-aaa2-7adcccfba4a2") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 17:44:00 crc kubenswrapper[5116]: I1208 17:44:00.624309 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-6s8th" event={"ID":"5016d861-0431-4e4a-bbe3-c7032eb529c7","Type":"ContainerStarted","Data":"1831782f71925b59e750ee1487f088c5969fde5afdd2f3887e0913ef36ff2390"} Dec 08 17:44:00 crc kubenswrapper[5116]: I1208 17:44:00.624597 5116 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-6s8th" Dec 08 17:44:00 crc kubenswrapper[5116]: I1208 17:44:00.630138 5116 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-hv2nc container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 08 17:44:00 crc kubenswrapper[5116]: [-]has-synced failed: reason withheld Dec 08 17:44:00 crc kubenswrapper[5116]: [+]process-running ok Dec 08 17:44:00 crc kubenswrapper[5116]: healthz check failed Dec 08 17:44:00 crc kubenswrapper[5116]: I1208 17:44:00.630199 5116 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-hv2nc" podUID="0deef197-8a46-46ea-a786-7e9518318396" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 08 17:44:00 crc kubenswrapper[5116]: I1208 17:44:00.630613 5116 patch_prober.go:28] interesting pod/packageserver-7d4fc7d867-6s8th 
container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.37:5443/healthz\": dial tcp 10.217.0.37:5443: connect: connection refused" start-of-body= Dec 08 17:44:00 crc kubenswrapper[5116]: I1208 17:44:00.630641 5116 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-6s8th" podUID="5016d861-0431-4e4a-bbe3-c7032eb529c7" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.37:5443/healthz\": dial tcp 10.217.0.37:5443: connect: connection refused" Dec 08 17:44:00 crc kubenswrapper[5116]: I1208 17:44:00.719712 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-69db94689b-9bt88" event={"ID":"88c9e622-e4c0-49b0-a481-fb6e32fc0505","Type":"ContainerStarted","Data":"91894b1241711b6e80a9d436990ada85145a4970f2db7b5baeb9c45f5e587bee"} Dec 08 17:44:00 crc kubenswrapper[5116]: I1208 17:44:00.726638 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 08 17:44:00 crc kubenswrapper[5116]: E1208 17:44:00.727458 5116 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 17:44:01.227434431 +0000 UTC m=+111.024557665 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 17:44:00 crc kubenswrapper[5116]: I1208 17:44:00.830126 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-kt94l\" (UID: \"83ea28a4-865d-4cee-aaa2-7adcccfba4a2\") " pod="openshift-image-registry/image-registry-66587d64c8-kt94l" Dec 08 17:44:00 crc kubenswrapper[5116]: I1208 17:44:00.837891 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-pp68b" event={"ID":"d3dc86ab-217d-4d86-8381-16465ee204c8","Type":"ContainerStarted","Data":"7c5c5fc38dfdc7e51ced8cb18a68f0a083d1d374220e52e422b87bfd1de721ea"} Dec 08 17:44:00 crc kubenswrapper[5116]: I1208 17:44:00.838619 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-lnhtb" event={"ID":"2a5179cd-4e4d-401c-af27-a30fc32e5146","Type":"ContainerStarted","Data":"75ffd5e088194df51410e745ebfdddfdc564e1818b3b234a00ebca91e5619f50"} Dec 08 17:44:00 crc kubenswrapper[5116]: E1208 17:44:00.839549 5116 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 17:44:01.339522392 +0000 UTC m=+111.136645626 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-kt94l" (UID: "83ea28a4-865d-4cee-aaa2-7adcccfba4a2") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 17:44:00 crc kubenswrapper[5116]: I1208 17:44:00.897563 5116 patch_prober.go:28] interesting pod/apiserver-9ddfb9f55-b2n2w container/openshift-apiserver namespace/openshift-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Dec 08 17:44:00 crc kubenswrapper[5116]: [+]log ok Dec 08 17:44:00 crc kubenswrapper[5116]: [+]etcd ok Dec 08 17:44:00 crc kubenswrapper[5116]: [+]poststarthook/start-apiserver-admission-initializer ok Dec 08 17:44:00 crc kubenswrapper[5116]: [+]poststarthook/generic-apiserver-start-informers ok Dec 08 17:44:00 crc kubenswrapper[5116]: [+]poststarthook/max-in-flight-filter ok Dec 08 17:44:00 crc kubenswrapper[5116]: [+]poststarthook/storage-object-count-tracker-hook ok Dec 08 17:44:00 crc kubenswrapper[5116]: [+]poststarthook/image.openshift.io-apiserver-caches ok Dec 08 17:44:00 crc kubenswrapper[5116]: [-]poststarthook/authorization.openshift.io-bootstrapclusterroles failed: reason withheld Dec 08 17:44:00 crc kubenswrapper[5116]: [-]poststarthook/authorization.openshift.io-ensurenodebootstrap-sa failed: reason withheld Dec 08 17:44:00 crc kubenswrapper[5116]: [+]poststarthook/project.openshift.io-projectcache ok Dec 08 17:44:00 crc kubenswrapper[5116]: [+]poststarthook/project.openshift.io-projectauthorizationcache ok Dec 08 17:44:00 crc kubenswrapper[5116]: [+]poststarthook/openshift.io-startinformers ok Dec 08 17:44:00 crc kubenswrapper[5116]: [+]poststarthook/openshift.io-restmapperupdater ok Dec 08 17:44:00 crc 
kubenswrapper[5116]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok Dec 08 17:44:00 crc kubenswrapper[5116]: livez check failed Dec 08 17:44:00 crc kubenswrapper[5116]: I1208 17:44:00.898007 5116 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-apiserver/apiserver-9ddfb9f55-b2n2w" podUID="4f7ef3d6-0bc3-4566-8735-c4a2389d4c84" containerName="openshift-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 08 17:44:00 crc kubenswrapper[5116]: I1208 17:44:00.898415 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-74545575db-g68gs" event={"ID":"3c5d3e24-1c23-4245-bf46-d6ba11dfbd51","Type":"ContainerStarted","Data":"c8924e5faf1e4d78e8e04362eadc8f72e52cb13e125422ee36d269f9eb284089"} Dec 08 17:44:01 crc kubenswrapper[5116]: I1208 17:44:01.066640 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 08 17:44:01 crc kubenswrapper[5116]: E1208 17:44:01.068119 5116 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 17:44:01.568100395 +0000 UTC m=+111.365223629 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 17:44:01 crc kubenswrapper[5116]: I1208 17:44:01.069443 5116 patch_prober.go:28] interesting pod/downloads-747b44746d-4msk8 container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.15:8080/\": dial tcp 10.217.0.15:8080: connect: connection refused" start-of-body= Dec 08 17:44:01 crc kubenswrapper[5116]: I1208 17:44:01.069477 5116 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-6s8th" podStartSLOduration=89.069459731 podStartE2EDuration="1m29.069459731s" podCreationTimestamp="2025-12-08 17:42:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-08 17:44:00.879886944 +0000 UTC m=+110.677010178" watchObservedRunningTime="2025-12-08 17:44:01.069459731 +0000 UTC m=+110.866582965" Dec 08 17:44:01 crc kubenswrapper[5116]: I1208 17:44:01.069504 5116 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-747b44746d-4msk8" podUID="e71c8014-5266-4483-8037-e8d9e7995c1b" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.15:8080/\": dial tcp 10.217.0.15:8080: connect: connection refused" Dec 08 17:44:01 crc kubenswrapper[5116]: I1208 17:44:01.116728 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-sqm4p" 
event={"ID":"b7623dbe-7780-43e2-8225-9cf0b9e83951","Type":"ContainerStarted","Data":"6550a196c4150caf4da8b8b06deeccc1d94aac0ce3a696edc88639a6a3c99d6e"} Dec 08 17:44:01 crc kubenswrapper[5116]: I1208 17:44:01.154910 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-866fcbc849-gk2ww" event={"ID":"5c183f4a-f4d4-4584-916d-1055aa64de78","Type":"ContainerStarted","Data":"f4966476fc992f190c4405ef5b21b974af172120df4c3ceb6a6d91f8ff9c7f06"} Dec 08 17:44:01 crc kubenswrapper[5116]: I1208 17:44:01.157382 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-zn9l4" event={"ID":"b6973f5d-d174-4643-814d-e929acd898ba","Type":"ContainerStarted","Data":"c5cc241f22a1aa8034c81bc5267e56a33f404aac74c068b00ff40e2313a3c9fe"} Dec 08 17:44:01 crc kubenswrapper[5116]: I1208 17:44:01.159235 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-5b9c976747-vcg8j" event={"ID":"64540db1-3951-4714-a14a-542d29e00e3c","Type":"ContainerStarted","Data":"85eebf25edddc96a11999350cd7e13eadc455bed25c0da6e2cf8b0af8b652569"} Dec 08 17:44:01 crc kubenswrapper[5116]: I1208 17:44:01.161492 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-mv9qd" event={"ID":"a4183c4d-f709-4d5b-a9a4-180284f37cc8","Type":"ContainerStarted","Data":"6049bd486993d0a63a6917836bcecf25f0bb36e43981d86a1dc0d3f5c88c834e"} Dec 08 17:44:01 crc kubenswrapper[5116]: I1208 17:44:01.162269 5116 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-mv9qd" Dec 08 17:44:01 crc kubenswrapper[5116]: I1208 17:44:01.163438 5116 patch_prober.go:28] interesting pod/route-controller-manager-776cdc94d6-mv9qd container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe 
status=failure output="Get \"https://10.217.0.11:8443/healthz\": dial tcp 10.217.0.11:8443: connect: connection refused" start-of-body= Dec 08 17:44:01 crc kubenswrapper[5116]: I1208 17:44:01.163476 5116 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-mv9qd" podUID="a4183c4d-f709-4d5b-a9a4-180284f37cc8" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.11:8443/healthz\": dial tcp 10.217.0.11:8443: connect: connection refused" Dec 08 17:44:01 crc kubenswrapper[5116]: I1208 17:44:01.164925 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-tlxxd" event={"ID":"2d2b0fc4-9619-4e70-92a9-06896ea298f4","Type":"ContainerStarted","Data":"44b181c534d7e1d2797658bca56029a3bf5423de9e4e86c9193134fac1d580b9"} Dec 08 17:44:01 crc kubenswrapper[5116]: I1208 17:44:01.221410 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-n9kbg" event={"ID":"3ae931b0-256c-4067-8c1c-a56b5fe1f5f9","Type":"ContainerStarted","Data":"a1ea23b29cc541ae61dc1db032678b9dce0dcd3a0f1d0c2ea37ef8d69fded4eb"} Dec 08 17:44:01 crc kubenswrapper[5116]: I1208 17:44:01.223816 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-44l68" event={"ID":"d63661bb-2db9-4a87-ae13-21a004bf32a9","Type":"ContainerStarted","Data":"c6f69319990b9e83ba9ac52cf87a87aee6c9a537b0785d7f8512d4c43fc2556f"} Dec 08 17:44:01 crc kubenswrapper[5116]: I1208 17:44:01.225598 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-69b85846b6-9xp42" event={"ID":"ae32fe26-12f4-4893-b748-d39bf6908a5f","Type":"ContainerStarted","Data":"25066d5a0a1b59c79437286a13ac96caebb256cfa3e41b6c16d3cd55945a618d"} Dec 08 17:44:01 crc kubenswrapper[5116]: I1208 17:44:01.227653 5116 
kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-hzpcx" event={"ID":"b1d60def-1a02-4598-801e-fc4fdfaabcf4","Type":"ContainerStarted","Data":"5fa2779225a1cae5a9bb17ea2d5c72ec1643f2b747ce7d6f592bc94022336795"} Dec 08 17:44:01 crc kubenswrapper[5116]: I1208 17:44:01.228093 5116 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-hzpcx" Dec 08 17:44:01 crc kubenswrapper[5116]: I1208 17:44:01.230354 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-57tcr" event={"ID":"15fa3f1d-6230-4602-a46a-1f9b84a147fa","Type":"ContainerStarted","Data":"c66016cf695a810aca5e199664483df7f3baf4f1ec06a862c98ba24050578c94"} Dec 08 17:44:01 crc kubenswrapper[5116]: I1208 17:44:01.237732 5116 patch_prober.go:28] interesting pod/downloads-747b44746d-4msk8 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.15:8080/\": dial tcp 10.217.0.15:8080: connect: connection refused" start-of-body= Dec 08 17:44:01 crc kubenswrapper[5116]: I1208 17:44:01.237779 5116 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-747b44746d-4msk8" podUID="e71c8014-5266-4483-8037-e8d9e7995c1b" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.15:8080/\": dial tcp 10.217.0.15:8080: connect: connection refused" Dec 08 17:44:01 crc kubenswrapper[5116]: I1208 17:44:01.246223 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-kt94l\" (UID: \"83ea28a4-865d-4cee-aaa2-7adcccfba4a2\") " 
pod="openshift-image-registry/image-registry-66587d64c8-kt94l"
Dec 08 17:44:01 crc kubenswrapper[5116]: E1208 17:44:01.246955 5116 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 17:44:01.746930479 +0000 UTC m=+111.544053723 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-kt94l" (UID: "83ea28a4-865d-4cee-aaa2-7adcccfba4a2") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 08 17:44:01 crc kubenswrapper[5116]: I1208 17:44:01.266377 5116 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-oauth-apiserver/apiserver-8596bd845d-nlzmf"
Dec 08 17:44:01 crc kubenswrapper[5116]: I1208 17:44:01.304025 5116 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-6dz8n"
Dec 08 17:44:01 crc kubenswrapper[5116]: I1208 17:44:01.342488 5116 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-w5bp9"
Dec 08 17:44:01 crc kubenswrapper[5116]: I1208 17:44:01.352022 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Dec 08 17:44:01 crc kubenswrapper[5116]: E1208 17:44:01.360323 5116 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 17:44:01.860267723 +0000 UTC m=+111.657390987 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 08 17:44:01 crc kubenswrapper[5116]: I1208 17:44:01.467160 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-kt94l\" (UID: \"83ea28a4-865d-4cee-aaa2-7adcccfba4a2\") " pod="openshift-image-registry/image-registry-66587d64c8-kt94l"
Dec 08 17:44:01 crc kubenswrapper[5116]: E1208 17:44:01.467712 5116 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 17:44:01.967695181 +0000 UTC m=+111.764818425 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-kt94l" (UID: "83ea28a4-865d-4cee-aaa2-7adcccfba4a2") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 08 17:44:01 crc kubenswrapper[5116]: I1208 17:44:01.574567 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Dec 08 17:44:01 crc kubenswrapper[5116]: E1208 17:44:01.574934 5116 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 17:44:02.074917806 +0000 UTC m=+111.872041040 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 08 17:44:01 crc kubenswrapper[5116]: I1208 17:44:01.624070 5116 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-ingress/router-default-68cf44c8b8-hv2nc"
Dec 08 17:44:01 crc kubenswrapper[5116]: I1208 17:44:01.631567 5116 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-hv2nc container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Dec 08 17:44:01 crc kubenswrapper[5116]: [-]has-synced failed: reason withheld
Dec 08 17:44:01 crc kubenswrapper[5116]: [+]process-running ok
Dec 08 17:44:01 crc kubenswrapper[5116]: healthz check failed
Dec 08 17:44:01 crc kubenswrapper[5116]: I1208 17:44:01.631660 5116 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-hv2nc" podUID="0deef197-8a46-46ea-a786-7e9518318396" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Dec 08 17:44:01 crc kubenswrapper[5116]: I1208 17:44:01.676701 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-kt94l\" (UID: \"83ea28a4-865d-4cee-aaa2-7adcccfba4a2\") " pod="openshift-image-registry/image-registry-66587d64c8-kt94l"
Dec 08 17:44:01 crc kubenswrapper[5116]: E1208 17:44:01.677439 5116 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 17:44:02.177419027 +0000 UTC m=+111.974542271 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-kt94l" (UID: "83ea28a4-865d-4cee-aaa2-7adcccfba4a2") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 08 17:44:01 crc kubenswrapper[5116]: I1208 17:44:01.855930 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Dec 08 17:44:01 crc kubenswrapper[5116]: E1208 17:44:01.858364 5116 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 17:44:02.358344031 +0000 UTC m=+112.155467265 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 08 17:44:01 crc kubenswrapper[5116]: I1208 17:44:01.860802 5116 patch_prober.go:28] interesting pod/packageserver-7d4fc7d867-6s8th container/packageserver namespace/openshift-operator-lifecycle-manager: Liveness probe status=failure output="Get \"https://10.217.0.37:5443/healthz\": dial tcp 10.217.0.37:5443: connect: connection refused" start-of-body=
Dec 08 17:44:01 crc kubenswrapper[5116]: I1208 17:44:01.860836 5116 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-6s8th" podUID="5016d861-0431-4e4a-bbe3-c7032eb529c7" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.37:5443/healthz\": dial tcp 10.217.0.37:5443: connect: connection refused"
Dec 08 17:44:01 crc kubenswrapper[5116]: I1208 17:44:01.961021 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-kt94l\" (UID: \"83ea28a4-865d-4cee-aaa2-7adcccfba4a2\") " pod="openshift-image-registry/image-registry-66587d64c8-kt94l"
Dec 08 17:44:01 crc kubenswrapper[5116]: E1208 17:44:01.961441 5116 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 17:44:02.461428088 +0000 UTC m=+112.258551322 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-kt94l" (UID: "83ea28a4-865d-4cee-aaa2-7adcccfba4a2") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 08 17:44:02 crc kubenswrapper[5116]: I1208 17:44:02.077289 5116 ???:1] "http: TLS handshake error from 192.168.126.11:35920: no serving certificate available for the kubelet"
Dec 08 17:44:02 crc kubenswrapper[5116]: I1208 17:44:02.078875 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Dec 08 17:44:02 crc kubenswrapper[5116]: E1208 17:44:02.079829 5116 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 17:44:02.579781023 +0000 UTC m=+112.376904257 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 08 17:44:02 crc kubenswrapper[5116]: I1208 17:44:02.181223 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-kt94l\" (UID: \"83ea28a4-865d-4cee-aaa2-7adcccfba4a2\") " pod="openshift-image-registry/image-registry-66587d64c8-kt94l"
Dec 08 17:44:02 crc kubenswrapper[5116]: E1208 17:44:02.181697 5116 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 17:44:02.681682388 +0000 UTC m=+112.478805622 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-kt94l" (UID: "83ea28a4-865d-4cee-aaa2-7adcccfba4a2") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 08 17:44:02 crc kubenswrapper[5116]: I1208 17:44:02.290588 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Dec 08 17:44:02 crc kubenswrapper[5116]: E1208 17:44:02.291310 5116 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 17:44:02.791288405 +0000 UTC m=+112.588411639 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 08 17:44:02 crc kubenswrapper[5116]: I1208 17:44:02.293128 5116 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-pp68b" podStartSLOduration=90.293095152 podStartE2EDuration="1m30.293095152s" podCreationTimestamp="2025-12-08 17:42:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-08 17:44:01.384991316 +0000 UTC m=+111.182114560" watchObservedRunningTime="2025-12-08 17:44:02.293095152 +0000 UTC m=+112.090218386"
Dec 08 17:44:02 crc kubenswrapper[5116]: I1208 17:44:02.334203 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-tlxxd" event={"ID":"2d2b0fc4-9619-4e70-92a9-06896ea298f4","Type":"ContainerStarted","Data":"210ce12b4cbc13c16096b3550f74c2b6aa463bd958f3b79fe97cb28715a92c1d"}
Dec 08 17:44:02 crc kubenswrapper[5116]: I1208 17:44:02.334279 5116 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-dns/dns-default-tlxxd"
Dec 08 17:44:02 crc kubenswrapper[5116]: I1208 17:44:02.366209 5116 patch_prober.go:28] interesting pod/packageserver-7d4fc7d867-6s8th container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.37:5443/healthz\": dial tcp 10.217.0.37:5443: connect: connection refused" start-of-body=
Dec 08 17:44:02 crc kubenswrapper[5116]: I1208 17:44:02.366297 5116 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-6s8th" podUID="5016d861-0431-4e4a-bbe3-c7032eb529c7" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.37:5443/healthz\": dial tcp 10.217.0.37:5443: connect: connection refused"
Dec 08 17:44:02 crc kubenswrapper[5116]: I1208 17:44:02.393313 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-kt94l\" (UID: \"83ea28a4-865d-4cee-aaa2-7adcccfba4a2\") " pod="openshift-image-registry/image-registry-66587d64c8-kt94l"
Dec 08 17:44:02 crc kubenswrapper[5116]: E1208 17:44:02.422930 5116 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 17:44:02.922902975 +0000 UTC m=+112.720026209 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-kt94l" (UID: "83ea28a4-865d-4cee-aaa2-7adcccfba4a2") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 08 17:44:02 crc kubenswrapper[5116]: I1208 17:44:02.471902 5116 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-lnhtb" podStartSLOduration=90.471878261 podStartE2EDuration="1m30.471878261s" podCreationTimestamp="2025-12-08 17:42:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-08 17:44:02.291832689 +0000 UTC m=+112.088955923" watchObservedRunningTime="2025-12-08 17:44:02.471878261 +0000 UTC m=+112.269001495"
Dec 08 17:44:02 crc kubenswrapper[5116]: I1208 17:44:02.497627 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Dec 08 17:44:02 crc kubenswrapper[5116]: E1208 17:44:02.517107 5116 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 17:44:03.017045388 +0000 UTC m=+112.814168622 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 08 17:44:02 crc kubenswrapper[5116]: I1208 17:44:02.518512 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-kt94l\" (UID: \"83ea28a4-865d-4cee-aaa2-7adcccfba4a2\") " pod="openshift-image-registry/image-registry-66587d64c8-kt94l"
Dec 08 17:44:02 crc kubenswrapper[5116]: E1208 17:44:02.519309 5116 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 17:44:03.019293566 +0000 UTC m=+112.816416800 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-kt94l" (UID: "83ea28a4-865d-4cee-aaa2-7adcccfba4a2") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 08 17:44:02 crc kubenswrapper[5116]: I1208 17:44:02.529565 5116 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-44l68" podStartSLOduration=90.529532813 podStartE2EDuration="1m30.529532813s" podCreationTimestamp="2025-12-08 17:42:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-08 17:44:02.521577926 +0000 UTC m=+112.318701160" watchObservedRunningTime="2025-12-08 17:44:02.529532813 +0000 UTC m=+112.326656047"
Dec 08 17:44:02 crc kubenswrapper[5116]: I1208 17:44:02.698487 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Dec 08 17:44:02 crc kubenswrapper[5116]: E1208 17:44:02.698905 5116 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 17:44:03.198863896 +0000 UTC m=+112.995987130 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 08 17:44:02 crc kubenswrapper[5116]: I1208 17:44:02.699022 5116 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-hv2nc container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Dec 08 17:44:02 crc kubenswrapper[5116]: [-]has-synced failed: reason withheld
Dec 08 17:44:02 crc kubenswrapper[5116]: [+]process-running ok
Dec 08 17:44:02 crc kubenswrapper[5116]: healthz check failed
Dec 08 17:44:02 crc kubenswrapper[5116]: I1208 17:44:02.699123 5116 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-hv2nc" podUID="0deef197-8a46-46ea-a786-7e9518318396" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Dec 08 17:44:02 crc kubenswrapper[5116]: I1208 17:44:02.800669 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-kt94l\" (UID: \"83ea28a4-865d-4cee-aaa2-7adcccfba4a2\") " pod="openshift-image-registry/image-registry-66587d64c8-kt94l"
Dec 08 17:44:02 crc kubenswrapper[5116]: E1208 17:44:02.801585 5116 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 17:44:03.301550551 +0000 UTC m=+113.098673785 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-kt94l" (UID: "83ea28a4-865d-4cee-aaa2-7adcccfba4a2") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 08 17:44:02 crc kubenswrapper[5116]: I1208 17:44:02.902942 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Dec 08 17:44:02 crc kubenswrapper[5116]: E1208 17:44:02.903144 5116 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 17:44:03.403116809 +0000 UTC m=+113.200240043 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 08 17:44:02 crc kubenswrapper[5116]: I1208 17:44:02.903808 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-kt94l\" (UID: \"83ea28a4-865d-4cee-aaa2-7adcccfba4a2\") " pod="openshift-image-registry/image-registry-66587d64c8-kt94l"
Dec 08 17:44:02 crc kubenswrapper[5116]: E1208 17:44:02.904406 5116 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 17:44:03.40435478 +0000 UTC m=+113.201478014 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-kt94l" (UID: "83ea28a4-865d-4cee-aaa2-7adcccfba4a2") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 08 17:44:02 crc kubenswrapper[5116]: I1208 17:44:02.964078 5116 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-mv9qd" podStartSLOduration=90.964058696 podStartE2EDuration="1m30.964058696s" podCreationTimestamp="2025-12-08 17:42:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-08 17:44:02.901965299 +0000 UTC m=+112.699088523" watchObservedRunningTime="2025-12-08 17:44:02.964058696 +0000 UTC m=+112.761181930"
Dec 08 17:44:03 crc kubenswrapper[5116]: I1208 17:44:03.015307 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Dec 08 17:44:03 crc kubenswrapper[5116]: E1208 17:44:03.015769 5116 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 17:44:03.515744723 +0000 UTC m=+113.312867947 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 08 17:44:03 crc kubenswrapper[5116]: I1208 17:44:03.119606 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-kt94l\" (UID: \"83ea28a4-865d-4cee-aaa2-7adcccfba4a2\") " pod="openshift-image-registry/image-registry-66587d64c8-kt94l"
Dec 08 17:44:03 crc kubenswrapper[5116]: E1208 17:44:03.120310 5116 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 17:44:03.620290998 +0000 UTC m=+113.417414232 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-kt94l" (UID: "83ea28a4-865d-4cee-aaa2-7adcccfba4a2") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 08 17:44:03 crc kubenswrapper[5116]: I1208 17:44:03.198847 5116 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd-operator/etcd-operator-69b85846b6-9xp42" podStartSLOduration=92.198819755 podStartE2EDuration="1m32.198819755s" podCreationTimestamp="2025-12-08 17:42:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-08 17:44:03.193384193 +0000 UTC m=+112.990507427" watchObservedRunningTime="2025-12-08 17:44:03.198819755 +0000 UTC m=+112.995942989"
Dec 08 17:44:03 crc kubenswrapper[5116]: I1208 17:44:03.281683 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Dec 08 17:44:03 crc kubenswrapper[5116]: E1208 17:44:03.281976 5116 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 17:44:03.78191836 +0000 UTC m=+113.579041594 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 08 17:44:03 crc kubenswrapper[5116]: I1208 17:44:03.282424 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-kt94l\" (UID: \"83ea28a4-865d-4cee-aaa2-7adcccfba4a2\") " pod="openshift-image-registry/image-registry-66587d64c8-kt94l"
Dec 08 17:44:03 crc kubenswrapper[5116]: E1208 17:44:03.283156 5116 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 17:44:03.783131351 +0000 UTC m=+113.580254575 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-kt94l" (UID: "83ea28a4-865d-4cee-aaa2-7adcccfba4a2") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 08 17:44:03 crc kubenswrapper[5116]: I1208 17:44:03.337991 5116 patch_prober.go:28] interesting pod/route-controller-manager-776cdc94d6-mv9qd container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.11:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Dec 08 17:44:03 crc kubenswrapper[5116]: I1208 17:44:03.338096 5116 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-mv9qd" podUID="a4183c4d-f709-4d5b-a9a4-180284f37cc8" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.11:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Dec 08 17:44:03 crc kubenswrapper[5116]: I1208 17:44:03.342611 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-69db94689b-9bt88" event={"ID":"88c9e622-e4c0-49b0-a481-fb6e32fc0505","Type":"ContainerStarted","Data":"42634dc2936a0e43761f9cdaf4037a7280c8ef9d9c19f4d84a9b529832e91bed"}
Dec 08 17:44:03 crc kubenswrapper[5116]: I1208 17:44:03.408372 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Dec 08 17:44:03 crc kubenswrapper[5116]: E1208 17:44:03.409144 5116 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 17:44:03.909113205 +0000 UTC m=+113.706236449 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 08 17:44:03 crc kubenswrapper[5116]: I1208 17:44:03.516378 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-kt94l\" (UID: \"83ea28a4-865d-4cee-aaa2-7adcccfba4a2\") " pod="openshift-image-registry/image-registry-66587d64c8-kt94l"
Dec 08 17:44:03 crc kubenswrapper[5116]: E1208 17:44:03.516722 5116 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 17:44:04.016698048 +0000 UTC m=+113.813821282 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-kt94l" (UID: "83ea28a4-865d-4cee-aaa2-7adcccfba4a2") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 08 17:44:04 crc kubenswrapper[5116]: I1208 17:44:04.023759 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Dec 08 17:44:04 crc kubenswrapper[5116]: E1208 17:44:04.028544 5116 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 17:44:05.028495314 +0000 UTC m=+114.825618748 (durationBeforeRetry 1s). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 08 17:44:04 crc kubenswrapper[5116]: I1208 17:44:04.032509 5116 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-hv2nc container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Dec 08 17:44:04 crc kubenswrapper[5116]: [-]has-synced failed: reason withheld
Dec 08 17:44:04 crc kubenswrapper[5116]: [+]process-running ok
Dec 08 17:44:04 crc kubenswrapper[5116]: healthz check failed
Dec 08 17:44:04 crc kubenswrapper[5116]: I1208 17:44:04.032853 5116 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-hv2nc" podUID="0deef197-8a46-46ea-a786-7e9518318396" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Dec 08 17:44:04 crc kubenswrapper[5116]: I1208 17:44:04.128490 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Dec 08 17:44:04 crc kubenswrapper[5116]: E1208 17:44:04.128756 5116 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed.
No retries permitted until 2025-12-08 17:44:04.628642555 +0000 UTC m=+114.425765789 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 17:44:04 crc kubenswrapper[5116]: I1208 17:44:04.129444 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-kt94l\" (UID: \"83ea28a4-865d-4cee-aaa2-7adcccfba4a2\") " pod="openshift-image-registry/image-registry-66587d64c8-kt94l" Dec 08 17:44:04 crc kubenswrapper[5116]: E1208 17:44:04.130018 5116 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 17:44:04.63000585 +0000 UTC m=+114.427129084 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-kt94l" (UID: "83ea28a4-865d-4cee-aaa2-7adcccfba4a2") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 17:44:04 crc kubenswrapper[5116]: I1208 17:44:04.295442 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 08 17:44:04 crc kubenswrapper[5116]: E1208 17:44:04.296099 5116 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 17:44:04.796000275 +0000 UTC m=+114.593123509 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 17:44:04 crc kubenswrapper[5116]: I1208 17:44:04.331563 5116 ???:1] "http: TLS handshake error from 192.168.126.11:48378: no serving certificate available for the kubelet" Dec 08 17:44:04 crc kubenswrapper[5116]: I1208 17:44:04.341968 5116 patch_prober.go:28] interesting pod/route-controller-manager-776cdc94d6-mv9qd container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.11:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Dec 08 17:44:04 crc kubenswrapper[5116]: I1208 17:44:04.342054 5116 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-mv9qd" podUID="a4183c4d-f709-4d5b-a9a4-180284f37cc8" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.11:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Dec 08 17:44:04 crc kubenswrapper[5116]: I1208 17:44:04.396891 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-kt94l\" (UID: \"83ea28a4-865d-4cee-aaa2-7adcccfba4a2\") " pod="openshift-image-registry/image-registry-66587d64c8-kt94l" Dec 08 17:44:04 crc 
kubenswrapper[5116]: E1208 17:44:04.397346 5116 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 17:44:04.897325457 +0000 UTC m=+114.694448761 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-kt94l" (UID: "83ea28a4-865d-4cee-aaa2-7adcccfba4a2") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 17:44:04 crc kubenswrapper[5116]: I1208 17:44:04.498600 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 08 17:44:04 crc kubenswrapper[5116]: E1208 17:44:04.498787 5116 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 17:44:04.99876059 +0000 UTC m=+114.795883824 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 17:44:04 crc kubenswrapper[5116]: I1208 17:44:04.499087 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-kt94l\" (UID: \"83ea28a4-865d-4cee-aaa2-7adcccfba4a2\") " pod="openshift-image-registry/image-registry-66587d64c8-kt94l" Dec 08 17:44:04 crc kubenswrapper[5116]: E1208 17:44:04.563215 5116 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 17:44:05.063196009 +0000 UTC m=+114.860319233 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-kt94l" (UID: "83ea28a4-865d-4cee-aaa2-7adcccfba4a2") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 17:44:04 crc kubenswrapper[5116]: I1208 17:44:04.638350 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 08 17:44:04 crc kubenswrapper[5116]: I1208 17:44:04.638775 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-gwt8b\" (UniqueName: \"kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b\") pod \"network-check-target-fhkjl\" (UID: \"17b87002-b798-480a-8e17-83053d698239\") " pod="openshift-network-diagnostics/network-check-target-fhkjl" Dec 08 17:44:04 crc kubenswrapper[5116]: I1208 17:44:04.638842 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Dec 08 17:44:04 crc kubenswrapper[5116]: I1208 17:44:04.638978 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-l7w75\" (UniqueName: 
\"kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75\") pod \"network-check-source-5bb8f5cd97-xdvz5\" (UID: \"f863fff9-286a-45fa-b8f0-8a86994b8440\") " pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Dec 08 17:44:04 crc kubenswrapper[5116]: I1208 17:44:04.639046 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Dec 08 17:44:04 crc kubenswrapper[5116]: E1208 17:44:04.640637 5116 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 17:44:05.140587966 +0000 UTC m=+114.937711190 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 17:44:04 crc kubenswrapper[5116]: I1208 17:44:04.640770 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Dec 08 17:44:04 crc kubenswrapper[5116]: I1208 17:44:04.653109 5116 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-hv2nc container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 08 17:44:04 crc kubenswrapper[5116]: [-]has-synced failed: reason withheld Dec 08 17:44:04 crc kubenswrapper[5116]: [+]process-running ok Dec 08 17:44:04 crc kubenswrapper[5116]: healthz check failed Dec 08 17:44:04 crc kubenswrapper[5116]: I1208 17:44:04.654013 5116 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-hv2nc" podUID="0deef197-8a46-46ea-a786-7e9518318396" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 08 17:44:04 crc kubenswrapper[5116]: I1208 17:44:04.657475 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-l7w75\" (UniqueName: \"kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75\") pod 
\"network-check-source-5bb8f5cd97-xdvz5\" (UID: \"f863fff9-286a-45fa-b8f0-8a86994b8440\") " pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Dec 08 17:44:04 crc kubenswrapper[5116]: I1208 17:44:04.683466 5116 patch_prober.go:28] interesting pod/apiserver-9ddfb9f55-b2n2w container/openshift-apiserver namespace/openshift-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Dec 08 17:44:04 crc kubenswrapper[5116]: [+]log ok Dec 08 17:44:04 crc kubenswrapper[5116]: [+]etcd ok Dec 08 17:44:04 crc kubenswrapper[5116]: [+]poststarthook/start-apiserver-admission-initializer ok Dec 08 17:44:04 crc kubenswrapper[5116]: [+]poststarthook/generic-apiserver-start-informers ok Dec 08 17:44:04 crc kubenswrapper[5116]: [+]poststarthook/max-in-flight-filter ok Dec 08 17:44:04 crc kubenswrapper[5116]: [+]poststarthook/storage-object-count-tracker-hook ok Dec 08 17:44:04 crc kubenswrapper[5116]: [+]poststarthook/image.openshift.io-apiserver-caches ok Dec 08 17:44:04 crc kubenswrapper[5116]: [-]poststarthook/authorization.openshift.io-bootstrapclusterroles failed: reason withheld Dec 08 17:44:04 crc kubenswrapper[5116]: [+]poststarthook/authorization.openshift.io-ensurenodebootstrap-sa ok Dec 08 17:44:04 crc kubenswrapper[5116]: [+]poststarthook/project.openshift.io-projectcache ok Dec 08 17:44:04 crc kubenswrapper[5116]: [+]poststarthook/project.openshift.io-projectauthorizationcache ok Dec 08 17:44:04 crc kubenswrapper[5116]: [+]poststarthook/openshift.io-startinformers ok Dec 08 17:44:04 crc kubenswrapper[5116]: [+]poststarthook/openshift.io-restmapperupdater ok Dec 08 17:44:04 crc kubenswrapper[5116]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok Dec 08 17:44:04 crc kubenswrapper[5116]: livez check failed Dec 08 17:44:04 crc kubenswrapper[5116]: I1208 17:44:04.683580 5116 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-apiserver/apiserver-9ddfb9f55-b2n2w" 
podUID="4f7ef3d6-0bc3-4566-8735-c4a2389d4c84" containerName="openshift-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 08 17:44:04 crc kubenswrapper[5116]: I1208 17:44:04.712726 5116 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Dec 08 17:44:04 crc kubenswrapper[5116]: I1208 17:44:04.740752 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-kt94l\" (UID: \"83ea28a4-865d-4cee-aaa2-7adcccfba4a2\") " pod="openshift-image-registry/image-registry-66587d64c8-kt94l" Dec 08 17:44:04 crc kubenswrapper[5116]: E1208 17:44:04.741301 5116 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 17:44:05.24128252 +0000 UTC m=+115.038405754 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-kt94l" (UID: "83ea28a4-865d-4cee-aaa2-7adcccfba4a2") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 17:44:04 crc kubenswrapper[5116]: I1208 17:44:04.800833 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-gwt8b\" (UniqueName: \"kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b\") pod \"network-check-target-fhkjl\" (UID: \"17b87002-b798-480a-8e17-83053d698239\") " pod="openshift-network-diagnostics/network-check-target-fhkjl" Dec 08 17:44:04 crc kubenswrapper[5116]: I1208 17:44:04.845946 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 08 17:44:04 crc kubenswrapper[5116]: I1208 17:44:04.846757 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/19151390-7d67-4ae9-8520-ae20b8eb46f8-metrics-certs\") pod \"network-metrics-daemon-5ft89\" (UID: \"19151390-7d67-4ae9-8520-ae20b8eb46f8\") " pod="openshift-multus/network-metrics-daemon-5ft89" Dec 08 17:44:04 crc kubenswrapper[5116]: E1208 17:44:04.847058 5116 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. 
No retries permitted until 2025-12-08 17:44:05.347012715 +0000 UTC m=+115.144135949 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 17:44:04 crc kubenswrapper[5116]: I1208 17:44:04.848457 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Dec 08 17:44:04 crc kubenswrapper[5116]: I1208 17:44:04.861624 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/19151390-7d67-4ae9-8520-ae20b8eb46f8-metrics-certs\") pod \"network-metrics-daemon-5ft89\" (UID: \"19151390-7d67-4ae9-8520-ae20b8eb46f8\") " pod="openshift-multus/network-metrics-daemon-5ft89" Dec 08 17:44:04 crc kubenswrapper[5116]: I1208 17:44:04.948757 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-kt94l\" (UID: \"83ea28a4-865d-4cee-aaa2-7adcccfba4a2\") " pod="openshift-image-registry/image-registry-66587d64c8-kt94l" Dec 08 17:44:04 crc kubenswrapper[5116]: E1208 17:44:04.949366 5116 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 17:44:05.449348802 +0000 UTC m=+115.246472036 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-kt94l" (UID: "83ea28a4-865d-4cee-aaa2-7adcccfba4a2") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 17:44:05 crc kubenswrapper[5116]: I1208 17:44:05.004103 5116 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5ft89" Dec 08 17:44:05 crc kubenswrapper[5116]: I1208 17:44:05.012833 5116 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-65b6cccf98-99grc" Dec 08 17:44:05 crc kubenswrapper[5116]: I1208 17:44:05.024506 5116 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-sqm4p" podStartSLOduration=93.0244845 podStartE2EDuration="1m33.0244845s" podCreationTimestamp="2025-12-08 17:42:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-08 17:44:05.021557664 +0000 UTC m=+114.818680918" watchObservedRunningTime="2025-12-08 17:44:05.0244845 +0000 UTC m=+114.821607734" Dec 08 17:44:05 crc kubenswrapper[5116]: I1208 17:44:05.033355 5116 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Dec 08 17:44:05 crc kubenswrapper[5116]: I1208 17:44:05.038592 5116 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl" Dec 08 17:44:05 crc kubenswrapper[5116]: I1208 17:44:05.050492 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 08 17:44:05 crc kubenswrapper[5116]: E1208 17:44:05.050747 5116 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 17:44:05.550694463 +0000 UTC m=+115.347817697 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 17:44:05 crc kubenswrapper[5116]: I1208 17:44:05.051022 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-kt94l\" (UID: \"83ea28a4-865d-4cee-aaa2-7adcccfba4a2\") " pod="openshift-image-registry/image-registry-66587d64c8-kt94l" Dec 08 17:44:05 crc kubenswrapper[5116]: E1208 17:44:05.051501 5116 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 17:44:05.551481514 +0000 UTC m=+115.348604918 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-kt94l" (UID: "83ea28a4-865d-4cee-aaa2-7adcccfba4a2") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 17:44:05 crc kubenswrapper[5116]: I1208 17:44:05.224112 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 08 17:44:05 crc kubenswrapper[5116]: E1208 17:44:05.225787 5116 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 17:44:05.725754025 +0000 UTC m=+115.522877259 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 17:44:05 crc kubenswrapper[5116]: I1208 17:44:05.329553 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-kt94l\" (UID: \"83ea28a4-865d-4cee-aaa2-7adcccfba4a2\") " pod="openshift-image-registry/image-registry-66587d64c8-kt94l" Dec 08 17:44:05 crc kubenswrapper[5116]: E1208 17:44:05.330235 5116 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 17:44:05.830217217 +0000 UTC m=+115.627340451 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-kt94l" (UID: "83ea28a4-865d-4cee-aaa2-7adcccfba4a2") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 08 17:44:05 crc kubenswrapper[5116]: I1208 17:44:05.433317 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Dec 08 17:44:05 crc kubenswrapper[5116]: E1208 17:44:05.514327 5116 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 17:44:06.014295265 +0000 UTC m=+115.811418499 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 08 17:44:05 crc kubenswrapper[5116]: I1208 17:44:05.528426 5116 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-nc4fk"]
Dec 08 17:44:05 crc kubenswrapper[5116]: I1208 17:44:05.612948 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-kt94l\" (UID: \"83ea28a4-865d-4cee-aaa2-7adcccfba4a2\") " pod="openshift-image-registry/image-registry-66587d64c8-kt94l"
Dec 08 17:44:05 crc kubenswrapper[5116]: E1208 17:44:05.630486 5116 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 17:44:06.130460921 +0000 UTC m=+115.927584145 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-kt94l" (UID: "83ea28a4-865d-4cee-aaa2-7adcccfba4a2") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 08 17:44:05 crc kubenswrapper[5116]: I1208 17:44:05.663183 5116 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-hv2nc container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Dec 08 17:44:05 crc kubenswrapper[5116]: [-]has-synced failed: reason withheld
Dec 08 17:44:05 crc kubenswrapper[5116]: [+]process-running ok
Dec 08 17:44:05 crc kubenswrapper[5116]: healthz check failed
Dec 08 17:44:05 crc kubenswrapper[5116]: I1208 17:44:05.663268 5116 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-hv2nc" podUID="0deef197-8a46-46ea-a786-7e9518318396" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Dec 08 17:44:05 crc kubenswrapper[5116]: I1208 17:44:05.665558 5116 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-bnq4b"]
Dec 08 17:44:05 crc kubenswrapper[5116]: I1208 17:44:05.675855 5116 generic.go:358] "Generic (PLEG): container finished" podID="dc8a8f38-928e-445a-b2d0-56c91cff7483" containerID="2a320559c55c3849d27ba5a5fcc52056d70098deb2e8cc9a6ddf82aa571aefee" exitCode=0
Dec 08 17:44:05 crc kubenswrapper[5116]: I1208 17:44:05.677884 5116 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-zn9l4" podStartSLOduration=93.677867827 podStartE2EDuration="1m33.677867827s" podCreationTimestamp="2025-12-08 17:42:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-08 17:44:05.539215954 +0000 UTC m=+115.336339198" watchObservedRunningTime="2025-12-08 17:44:05.677867827 +0000 UTC m=+115.474991061"
Dec 08 17:44:05 crc kubenswrapper[5116]: I1208 17:44:05.678381 5116 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-nc4fk"
Dec 08 17:44:05 crc kubenswrapper[5116]: I1208 17:44:05.678624 5116 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-bnq4b"
Dec 08 17:44:05 crc kubenswrapper[5116]: I1208 17:44:05.679638 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29420250-rrwn6" event={"ID":"dc8a8f38-928e-445a-b2d0-56c91cff7483","Type":"ContainerDied","Data":"2a320559c55c3849d27ba5a5fcc52056d70098deb2e8cc9a6ddf82aa571aefee"}
Dec 08 17:44:05 crc kubenswrapper[5116]: I1208 17:44:05.679718 5116 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-7rqg8"]
Dec 08 17:44:05 crc kubenswrapper[5116]: I1208 17:44:05.697309 5116 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-mvhzm"]
Dec 08 17:44:05 crc kubenswrapper[5116]: I1208 17:44:05.697942 5116 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7rqg8"
Dec 08 17:44:05 crc kubenswrapper[5116]: I1208 17:44:05.858501 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"certified-operators-dockercfg-7cl8d\""
Dec 08 17:44:06 crc kubenswrapper[5116]: I1208 17:44:06.018154 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Dec 08 17:44:06 crc kubenswrapper[5116]: E1208 17:44:06.019603 5116 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 17:44:06.519559241 +0000 UTC m=+116.316682465 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 08 17:44:06 crc kubenswrapper[5116]: I1208 17:44:06.019730 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-kt94l\" (UID: \"83ea28a4-865d-4cee-aaa2-7adcccfba4a2\") " pod="openshift-image-registry/image-registry-66587d64c8-kt94l"
Dec 08 17:44:06 crc kubenswrapper[5116]: I1208 17:44:06.019776 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b15bd0e2-4143-436c-8dc2-0fc2e33cef62-catalog-content\") pod \"community-operators-nc4fk\" (UID: \"b15bd0e2-4143-436c-8dc2-0fc2e33cef62\") " pod="openshift-marketplace/community-operators-nc4fk"
Dec 08 17:44:06 crc kubenswrapper[5116]: I1208 17:44:06.019819 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hspkc\" (UniqueName: \"kubernetes.io/projected/b15bd0e2-4143-436c-8dc2-0fc2e33cef62-kube-api-access-hspkc\") pod \"community-operators-nc4fk\" (UID: \"b15bd0e2-4143-436c-8dc2-0fc2e33cef62\") " pod="openshift-marketplace/community-operators-nc4fk"
Dec 08 17:44:06 crc kubenswrapper[5116]: I1208 17:44:06.019847 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b15bd0e2-4143-436c-8dc2-0fc2e33cef62-utilities\") pod \"community-operators-nc4fk\" (UID: \"b15bd0e2-4143-436c-8dc2-0fc2e33cef62\") " pod="openshift-marketplace/community-operators-nc4fk"
Dec 08 17:44:06 crc kubenswrapper[5116]: E1208 17:44:06.020738 5116 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 17:44:06.520727531 +0000 UTC m=+116.317850765 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-kt94l" (UID: "83ea28a4-865d-4cee-aaa2-7adcccfba4a2") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 08 17:44:06 crc kubenswrapper[5116]: I1208 17:44:06.086025 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"community-operators-dockercfg-vrd5f\""
Dec 08 17:44:06 crc kubenswrapper[5116]: I1208 17:44:06.086466 5116 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-nc4fk"]
Dec 08 17:44:06 crc kubenswrapper[5116]: I1208 17:44:06.086510 5116 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-bnq4b"]
Dec 08 17:44:06 crc kubenswrapper[5116]: I1208 17:44:06.086523 5116 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-mvhzm"]
Dec 08 17:44:06 crc kubenswrapper[5116]: I1208 17:44:06.086697 5116 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-mvhzm"
Dec 08 17:44:06 crc kubenswrapper[5116]: I1208 17:44:06.097327 5116 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-7rqg8"]
Dec 08 17:44:06 crc kubenswrapper[5116]: I1208 17:44:06.097235 5116 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-hzpcx" podStartSLOduration=94.097214005 podStartE2EDuration="1m34.097214005s" podCreationTimestamp="2025-12-08 17:42:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-08 17:44:05.715789545 +0000 UTC m=+115.512912779" watchObservedRunningTime="2025-12-08 17:44:06.097214005 +0000 UTC m=+115.894337239"
Dec 08 17:44:06 crc kubenswrapper[5116]: I1208 17:44:06.112159 5116 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler/revision-pruner-6-crc"]
Dec 08 17:44:06 crc kubenswrapper[5116]: I1208 17:44:06.118655 5116 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/revision-pruner-6-crc"
Dec 08 17:44:06 crc kubenswrapper[5116]: I1208 17:44:06.126699 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Dec 08 17:44:06 crc kubenswrapper[5116]: E1208 17:44:06.128462 5116 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 17:44:06.628440878 +0000 UTC m=+116.425564112 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 08 17:44:06 crc kubenswrapper[5116]: I1208 17:44:06.128514 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b15bd0e2-4143-436c-8dc2-0fc2e33cef62-utilities\") pod \"community-operators-nc4fk\" (UID: \"b15bd0e2-4143-436c-8dc2-0fc2e33cef62\") " pod="openshift-marketplace/community-operators-nc4fk"
Dec 08 17:44:06 crc kubenswrapper[5116]: I1208 17:44:06.128548 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0687a333-2a42-4237-9673-e0210c45dc22-utilities\") pod \"certified-operators-7rqg8\" (UID: \"0687a333-2a42-4237-9673-e0210c45dc22\") " pod="openshift-marketplace/certified-operators-7rqg8"
Dec 08 17:44:06 crc kubenswrapper[5116]: I1208 17:44:06.128570 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d7968a24-caaf-4115-992d-3678c03e895a-utilities\") pod \"community-operators-mvhzm\" (UID: \"d7968a24-caaf-4115-992d-3678c03e895a\") " pod="openshift-marketplace/community-operators-mvhzm"
Dec 08 17:44:06 crc kubenswrapper[5116]: I1208 17:44:06.128593 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hzd4j\" (UniqueName: \"kubernetes.io/projected/d7968a24-caaf-4115-992d-3678c03e895a-kube-api-access-hzd4j\") pod \"community-operators-mvhzm\" (UID: \"d7968a24-caaf-4115-992d-3678c03e895a\") " pod="openshift-marketplace/community-operators-mvhzm"
Dec 08 17:44:06 crc kubenswrapper[5116]: I1208 17:44:06.128630 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/10b58457-3a8a-4659-a3ee-cdc62c7194ca-kubelet-dir\") pod \"revision-pruner-6-crc\" (UID: \"10b58457-3a8a-4659-a3ee-cdc62c7194ca\") " pod="openshift-kube-scheduler/revision-pruner-6-crc"
Dec 08 17:44:06 crc kubenswrapper[5116]: I1208 17:44:06.128651 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/46bc2c2e-fed2-4cf1-afc1-2fb750553bc5-catalog-content\") pod \"certified-operators-bnq4b\" (UID: \"46bc2c2e-fed2-4cf1-afc1-2fb750553bc5\") " pod="openshift-marketplace/certified-operators-bnq4b"
Dec 08 17:44:06 crc kubenswrapper[5116]: I1208 17:44:06.128694 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d7968a24-caaf-4115-992d-3678c03e895a-catalog-content\") pod \"community-operators-mvhzm\" (UID: \"d7968a24-caaf-4115-992d-3678c03e895a\") " pod="openshift-marketplace/community-operators-mvhzm"
Dec 08 17:44:06 crc kubenswrapper[5116]: I1208 17:44:06.128714 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/46bc2c2e-fed2-4cf1-afc1-2fb750553bc5-utilities\") pod \"certified-operators-bnq4b\" (UID: \"46bc2c2e-fed2-4cf1-afc1-2fb750553bc5\") " pod="openshift-marketplace/certified-operators-bnq4b"
Dec 08 17:44:06 crc kubenswrapper[5116]: I1208 17:44:06.128733 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/10b58457-3a8a-4659-a3ee-cdc62c7194ca-kube-api-access\") pod \"revision-pruner-6-crc\" (UID: \"10b58457-3a8a-4659-a3ee-cdc62c7194ca\") " pod="openshift-kube-scheduler/revision-pruner-6-crc"
Dec 08 17:44:06 crc kubenswrapper[5116]: I1208 17:44:06.128767 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-kt94l\" (UID: \"83ea28a4-865d-4cee-aaa2-7adcccfba4a2\") " pod="openshift-image-registry/image-registry-66587d64c8-kt94l"
Dec 08 17:44:06 crc kubenswrapper[5116]: I1208 17:44:06.128795 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b15bd0e2-4143-436c-8dc2-0fc2e33cef62-catalog-content\") pod \"community-operators-nc4fk\" (UID: \"b15bd0e2-4143-436c-8dc2-0fc2e33cef62\") " pod="openshift-marketplace/community-operators-nc4fk"
Dec 08 17:44:06 crc kubenswrapper[5116]: I1208 17:44:06.128822 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t7cqj\" (UniqueName: \"kubernetes.io/projected/46bc2c2e-fed2-4cf1-afc1-2fb750553bc5-kube-api-access-t7cqj\") pod \"certified-operators-bnq4b\" (UID: \"46bc2c2e-fed2-4cf1-afc1-2fb750553bc5\") " pod="openshift-marketplace/certified-operators-bnq4b"
Dec 08 17:44:06 crc kubenswrapper[5116]: I1208 17:44:06.128846 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0687a333-2a42-4237-9673-e0210c45dc22-catalog-content\") pod \"certified-operators-7rqg8\" (UID: \"0687a333-2a42-4237-9673-e0210c45dc22\") " pod="openshift-marketplace/certified-operators-7rqg8"
Dec 08 17:44:06 crc kubenswrapper[5116]: I1208 17:44:06.128878 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wm24p\" (UniqueName: \"kubernetes.io/projected/0687a333-2a42-4237-9673-e0210c45dc22-kube-api-access-wm24p\") pod \"certified-operators-7rqg8\" (UID: \"0687a333-2a42-4237-9673-e0210c45dc22\") " pod="openshift-marketplace/certified-operators-7rqg8"
Dec 08 17:44:06 crc kubenswrapper[5116]: I1208 17:44:06.128909 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-hspkc\" (UniqueName: \"kubernetes.io/projected/b15bd0e2-4143-436c-8dc2-0fc2e33cef62-kube-api-access-hspkc\") pod \"community-operators-nc4fk\" (UID: \"b15bd0e2-4143-436c-8dc2-0fc2e33cef62\") " pod="openshift-marketplace/community-operators-nc4fk"
Dec 08 17:44:06 crc kubenswrapper[5116]: I1208 17:44:06.131602 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-scheduler\"/\"kube-root-ca.crt\""
Dec 08 17:44:06 crc kubenswrapper[5116]: I1208 17:44:06.131903 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b15bd0e2-4143-436c-8dc2-0fc2e33cef62-utilities\") pod \"community-operators-nc4fk\" (UID: \"b15bd0e2-4143-436c-8dc2-0fc2e33cef62\") " pod="openshift-marketplace/community-operators-nc4fk"
Dec 08 17:44:06 crc kubenswrapper[5116]: I1208 17:44:06.133337 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b15bd0e2-4143-436c-8dc2-0fc2e33cef62-catalog-content\") pod \"community-operators-nc4fk\" (UID: \"b15bd0e2-4143-436c-8dc2-0fc2e33cef62\") " pod="openshift-marketplace/community-operators-nc4fk"
Dec 08 17:44:06 crc kubenswrapper[5116]: I1208 17:44:06.133413 5116 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler/revision-pruner-6-crc"]
Dec 08 17:44:06 crc kubenswrapper[5116]: E1208 17:44:06.145195 5116 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 17:44:06.645172055 +0000 UTC m=+116.442295289 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-kt94l" (UID: "83ea28a4-865d-4cee-aaa2-7adcccfba4a2") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 08 17:44:06 crc kubenswrapper[5116]: I1208 17:44:06.146338 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-scheduler\"/\"installer-sa-dockercfg-qpkss\""
Dec 08 17:44:06 crc kubenswrapper[5116]: I1208 17:44:06.294666 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Dec 08 17:44:06 crc kubenswrapper[5116]: E1208 17:44:06.294960 5116 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 17:44:06.794910517 +0000 UTC m=+116.592033751 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 08 17:44:06 crc kubenswrapper[5116]: I1208 17:44:06.295221 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-kt94l\" (UID: \"83ea28a4-865d-4cee-aaa2-7adcccfba4a2\") " pod="openshift-image-registry/image-registry-66587d64c8-kt94l"
Dec 08 17:44:06 crc kubenswrapper[5116]: I1208 17:44:06.295299 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-t7cqj\" (UniqueName: \"kubernetes.io/projected/46bc2c2e-fed2-4cf1-afc1-2fb750553bc5-kube-api-access-t7cqj\") pod \"certified-operators-bnq4b\" (UID: \"46bc2c2e-fed2-4cf1-afc1-2fb750553bc5\") " pod="openshift-marketplace/certified-operators-bnq4b"
Dec 08 17:44:06 crc kubenswrapper[5116]: I1208 17:44:06.295318 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0687a333-2a42-4237-9673-e0210c45dc22-catalog-content\") pod \"certified-operators-7rqg8\" (UID: \"0687a333-2a42-4237-9673-e0210c45dc22\") " pod="openshift-marketplace/certified-operators-7rqg8"
Dec 08 17:44:06 crc kubenswrapper[5116]: I1208 17:44:06.295381 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-wm24p\" (UniqueName: \"kubernetes.io/projected/0687a333-2a42-4237-9673-e0210c45dc22-kube-api-access-wm24p\") pod \"certified-operators-7rqg8\" (UID: \"0687a333-2a42-4237-9673-e0210c45dc22\") " pod="openshift-marketplace/certified-operators-7rqg8"
Dec 08 17:44:06 crc kubenswrapper[5116]: I1208 17:44:06.295454 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0687a333-2a42-4237-9673-e0210c45dc22-utilities\") pod \"certified-operators-7rqg8\" (UID: \"0687a333-2a42-4237-9673-e0210c45dc22\") " pod="openshift-marketplace/certified-operators-7rqg8"
Dec 08 17:44:06 crc kubenswrapper[5116]: I1208 17:44:06.296108 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0687a333-2a42-4237-9673-e0210c45dc22-catalog-content\") pod \"certified-operators-7rqg8\" (UID: \"0687a333-2a42-4237-9673-e0210c45dc22\") " pod="openshift-marketplace/certified-operators-7rqg8"
Dec 08 17:44:06 crc kubenswrapper[5116]: I1208 17:44:06.296171 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d7968a24-caaf-4115-992d-3678c03e895a-utilities\") pod \"community-operators-mvhzm\" (UID: \"d7968a24-caaf-4115-992d-3678c03e895a\") " pod="openshift-marketplace/community-operators-mvhzm"
Dec 08 17:44:06 crc kubenswrapper[5116]: I1208 17:44:06.296218 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-hzd4j\" (UniqueName: \"kubernetes.io/projected/d7968a24-caaf-4115-992d-3678c03e895a-kube-api-access-hzd4j\") pod \"community-operators-mvhzm\" (UID: \"d7968a24-caaf-4115-992d-3678c03e895a\") " pod="openshift-marketplace/community-operators-mvhzm"
Dec 08 17:44:06 crc kubenswrapper[5116]: I1208 17:44:06.296320 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/10b58457-3a8a-4659-a3ee-cdc62c7194ca-kubelet-dir\") pod \"revision-pruner-6-crc\" (UID: \"10b58457-3a8a-4659-a3ee-cdc62c7194ca\") " pod="openshift-kube-scheduler/revision-pruner-6-crc"
Dec 08 17:44:06 crc kubenswrapper[5116]: I1208 17:44:06.296419 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/46bc2c2e-fed2-4cf1-afc1-2fb750553bc5-catalog-content\") pod \"certified-operators-bnq4b\" (UID: \"46bc2c2e-fed2-4cf1-afc1-2fb750553bc5\") " pod="openshift-marketplace/certified-operators-bnq4b"
Dec 08 17:44:06 crc kubenswrapper[5116]: I1208 17:44:06.296547 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d7968a24-caaf-4115-992d-3678c03e895a-catalog-content\") pod \"community-operators-mvhzm\" (UID: \"d7968a24-caaf-4115-992d-3678c03e895a\") " pod="openshift-marketplace/community-operators-mvhzm"
Dec 08 17:44:06 crc kubenswrapper[5116]: I1208 17:44:06.296580 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/46bc2c2e-fed2-4cf1-afc1-2fb750553bc5-utilities\") pod \"certified-operators-bnq4b\" (UID: \"46bc2c2e-fed2-4cf1-afc1-2fb750553bc5\") " pod="openshift-marketplace/certified-operators-bnq4b"
Dec 08 17:44:06 crc kubenswrapper[5116]: I1208 17:44:06.296605 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/10b58457-3a8a-4659-a3ee-cdc62c7194ca-kube-api-access\") pod \"revision-pruner-6-crc\" (UID: \"10b58457-3a8a-4659-a3ee-cdc62c7194ca\") " pod="openshift-kube-scheduler/revision-pruner-6-crc"
Dec 08 17:44:06 crc kubenswrapper[5116]: I1208 17:44:06.297067 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d7968a24-caaf-4115-992d-3678c03e895a-utilities\") pod \"community-operators-mvhzm\" (UID: \"d7968a24-caaf-4115-992d-3678c03e895a\") " pod="openshift-marketplace/community-operators-mvhzm"
Dec 08 17:44:06 crc kubenswrapper[5116]: I1208 17:44:06.297407 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d7968a24-caaf-4115-992d-3678c03e895a-catalog-content\") pod \"community-operators-mvhzm\" (UID: \"d7968a24-caaf-4115-992d-3678c03e895a\") " pod="openshift-marketplace/community-operators-mvhzm"
Dec 08 17:44:06 crc kubenswrapper[5116]: I1208 17:44:06.297461 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/10b58457-3a8a-4659-a3ee-cdc62c7194ca-kubelet-dir\") pod \"revision-pruner-6-crc\" (UID: \"10b58457-3a8a-4659-a3ee-cdc62c7194ca\") " pod="openshift-kube-scheduler/revision-pruner-6-crc"
Dec 08 17:44:06 crc kubenswrapper[5116]: I1208 17:44:06.297772 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/46bc2c2e-fed2-4cf1-afc1-2fb750553bc5-utilities\") pod \"certified-operators-bnq4b\" (UID: \"46bc2c2e-fed2-4cf1-afc1-2fb750553bc5\") " pod="openshift-marketplace/certified-operators-bnq4b"
Dec 08 17:44:06 crc kubenswrapper[5116]: I1208 17:44:06.297919 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/46bc2c2e-fed2-4cf1-afc1-2fb750553bc5-catalog-content\") pod \"certified-operators-bnq4b\" (UID: \"46bc2c2e-fed2-4cf1-afc1-2fb750553bc5\") " pod="openshift-marketplace/certified-operators-bnq4b"
Dec 08 17:44:06 crc kubenswrapper[5116]: I1208 17:44:06.299216 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0687a333-2a42-4237-9673-e0210c45dc22-utilities\") pod \"certified-operators-7rqg8\" (UID: \"0687a333-2a42-4237-9673-e0210c45dc22\") " pod="openshift-marketplace/certified-operators-7rqg8"
Dec 08 17:44:06 crc kubenswrapper[5116]: E1208 17:44:06.301913 5116 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 17:44:06.801892939 +0000 UTC m=+116.599016283 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-kt94l" (UID: "83ea28a4-865d-4cee-aaa2-7adcccfba4a2") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 08 17:44:06 crc kubenswrapper[5116]: I1208 17:44:06.335994 5116 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator/migrator-866fcbc849-gk2ww" podStartSLOduration=94.335980147 podStartE2EDuration="1m34.335980147s" podCreationTimestamp="2025-12-08 17:42:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-08 17:44:06.145986836 +0000 UTC m=+115.943110070" watchObservedRunningTime="2025-12-08 17:44:06.335980147 +0000 UTC m=+116.133103381"
Dec 08 17:44:06 crc kubenswrapper[5116]: I1208 17:44:06.379178 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-wm24p\" (UniqueName: \"kubernetes.io/projected/0687a333-2a42-4237-9673-e0210c45dc22-kube-api-access-wm24p\") pod \"certified-operators-7rqg8\" (UID: \"0687a333-2a42-4237-9673-e0210c45dc22\") " pod="openshift-marketplace/certified-operators-7rqg8"
Dec 08 17:44:06 crc kubenswrapper[5116]: I1208 17:44:06.396473 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/10b58457-3a8a-4659-a3ee-cdc62c7194ca-kube-api-access\") pod \"revision-pruner-6-crc\" (UID: \"10b58457-3a8a-4659-a3ee-cdc62c7194ca\") " pod="openshift-kube-scheduler/revision-pruner-6-crc"
Dec 08 17:44:06 crc kubenswrapper[5116]: I1208 17:44:06.397324 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Dec 08 17:44:06 crc kubenswrapper[5116]: E1208 17:44:06.397601 5116 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 17:44:06.897586122 +0000 UTC m=+116.694709356 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 08 17:44:06 crc kubenswrapper[5116]: I1208 17:44:06.403486 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-t7cqj\" (UniqueName: \"kubernetes.io/projected/46bc2c2e-fed2-4cf1-afc1-2fb750553bc5-kube-api-access-t7cqj\") pod \"certified-operators-bnq4b\" (UID: \"46bc2c2e-fed2-4cf1-afc1-2fb750553bc5\") " pod="openshift-marketplace/certified-operators-bnq4b"
Dec 08 17:44:06 crc kubenswrapper[5116]: I1208 17:44:06.442642 5116 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca-operator/service-ca-operator-5b9c976747-vcg8j" podStartSLOduration=94.442626617 podStartE2EDuration="1m34.442626617s" podCreationTimestamp="2025-12-08 17:42:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-08 17:44:06.339606812 +0000 UTC m=+116.136730046" watchObservedRunningTime="2025-12-08 17:44:06.442626617 +0000 UTC m=+116.239749851"
Dec 08 17:44:06 crc kubenswrapper[5116]: I1208 17:44:06.499198 5116 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-n9kbg" podStartSLOduration=94.499166469 podStartE2EDuration="1m34.499166469s" podCreationTimestamp="2025-12-08 17:42:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-08 17:44:06.445039139 +0000 UTC m=+116.242162373" watchObservedRunningTime="2025-12-08 17:44:06.499166469 +0000 UTC m=+116.296289703"
Dec 08 17:44:06 crc kubenswrapper[5116]: I1208 17:44:06.503824 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-kt94l\" (UID: \"83ea28a4-865d-4cee-aaa2-7adcccfba4a2\") " pod="openshift-image-registry/image-registry-66587d64c8-kt94l"
Dec 08 17:44:06 crc kubenswrapper[5116]: E1208 17:44:06.504303 5116 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 17:44:07.004285833 +0000 UTC m=+116.801409077 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-kt94l" (UID: "83ea28a4-865d-4cee-aaa2-7adcccfba4a2") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 08 17:44:06 crc kubenswrapper[5116]: I1208 17:44:06.560574 5116 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca/service-ca-74545575db-g68gs" podStartSLOduration=94.560552569 podStartE2EDuration="1m34.560552569s" podCreationTimestamp="2025-12-08 17:42:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-08 17:44:06.554920362 +0000 UTC m=+116.352043596" watchObservedRunningTime="2025-12-08 17:44:06.560552569 +0000 UTC m=+116.357675803"
Dec 08 17:44:06 crc kubenswrapper[5116]: I1208 17:44:06.563038 5116 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-57tcr" podStartSLOduration=94.563027914 podStartE2EDuration="1m34.563027914s" podCreationTimestamp="2025-12-08 17:42:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-08 17:44:06.502595078 +0000 UTC m=+116.299718312" watchObservedRunningTime="2025-12-08 17:44:06.563027914 +0000 UTC m=+116.360151158"
Dec 08 17:44:06 crc kubenswrapper[5116]: I1208 17:44:06.582785 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-hspkc\" (UniqueName: \"kubernetes.io/projected/b15bd0e2-4143-436c-8dc2-0fc2e33cef62-kube-api-access-hspkc\") pod \"community-operators-nc4fk\" (UID: \"b15bd0e2-4143-436c-8dc2-0fc2e33cef62\") " pod="openshift-marketplace/community-operators-nc4fk"
Dec 08 17:44:06 crc kubenswrapper[5116]: I1208 17:44:06.595546 5116 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-nc4fk"
Dec 08 17:44:06 crc kubenswrapper[5116]: I1208 17:44:06.615741 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Dec 08 17:44:06 crc kubenswrapper[5116]: E1208 17:44:06.616208 5116 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 17:44:07.116187369 +0000 UTC m=+116.913310603 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 17:44:06 crc kubenswrapper[5116]: I1208 17:44:06.619613 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-hzd4j\" (UniqueName: \"kubernetes.io/projected/d7968a24-caaf-4115-992d-3678c03e895a-kube-api-access-hzd4j\") pod \"community-operators-mvhzm\" (UID: \"d7968a24-caaf-4115-992d-3678c03e895a\") " pod="openshift-marketplace/community-operators-mvhzm" Dec 08 17:44:06 crc kubenswrapper[5116]: I1208 17:44:06.680029 5116 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console-operator/console-operator-67c89758df-wbzsx" Dec 08 17:44:06 crc kubenswrapper[5116]: E1208 17:44:06.680462 5116 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="7478cafb951735df5624cf4c67a2f9fe7cecc5f7456b3ba6f18b26ee5f91e985" cmd=["/bin/bash","-c","test -f /ready/ready"] Dec 08 17:44:06 crc kubenswrapper[5116]: E1208 17:44:06.716179 5116 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="7478cafb951735df5624cf4c67a2f9fe7cecc5f7456b3ba6f18b26ee5f91e985" cmd=["/bin/bash","-c","test -f /ready/ready"] Dec 08 17:44:06 crc kubenswrapper[5116]: I1208 17:44:06.718143 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume 
\"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-kt94l\" (UID: \"83ea28a4-865d-4cee-aaa2-7adcccfba4a2\") " pod="openshift-image-registry/image-registry-66587d64c8-kt94l" Dec 08 17:44:06 crc kubenswrapper[5116]: E1208 17:44:06.726631 5116 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 17:44:07.226612496 +0000 UTC m=+117.023735730 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-kt94l" (UID: "83ea28a4-865d-4cee-aaa2-7adcccfba4a2") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 17:44:06 crc kubenswrapper[5116]: E1208 17:44:06.728217 5116 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="7478cafb951735df5624cf4c67a2f9fe7cecc5f7456b3ba6f18b26ee5f91e985" cmd=["/bin/bash","-c","test -f /ready/ready"] Dec 08 17:44:06 crc kubenswrapper[5116]: E1208 17:44:06.728296 5116 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openshift-multus/cni-sysctl-allowlist-ds-qsgps" podUID="d73c8661-d51c-4d6e-a981-e186a3fc1964" containerName="kube-multus-additional-cni-plugins" probeResult="unknown" Dec 08 17:44:06 crc kubenswrapper[5116]: I1208 
17:44:06.760618 5116 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-bnq4b" Dec 08 17:44:06 crc kubenswrapper[5116]: I1208 17:44:06.777022 5116 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7rqg8" Dec 08 17:44:06 crc kubenswrapper[5116]: I1208 17:44:06.811734 5116 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-mvhzm" Dec 08 17:44:06 crc kubenswrapper[5116]: I1208 17:44:06.826318 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 08 17:44:06 crc kubenswrapper[5116]: E1208 17:44:06.827103 5116 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 17:44:07.327078414 +0000 UTC m=+117.124201648 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 17:44:06 crc kubenswrapper[5116]: I1208 17:44:06.827185 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-kt94l\" (UID: \"83ea28a4-865d-4cee-aaa2-7adcccfba4a2\") " pod="openshift-image-registry/image-registry-66587d64c8-kt94l" Dec 08 17:44:06 crc kubenswrapper[5116]: E1208 17:44:06.827888 5116 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 17:44:07.327881386 +0000 UTC m=+117.125004620 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-kt94l" (UID: "83ea28a4-865d-4cee-aaa2-7adcccfba4a2") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 17:44:06 crc kubenswrapper[5116]: I1208 17:44:06.845513 5116 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-hv2nc container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 08 17:44:06 crc kubenswrapper[5116]: [-]has-synced failed: reason withheld Dec 08 17:44:06 crc kubenswrapper[5116]: [+]process-running ok Dec 08 17:44:06 crc kubenswrapper[5116]: healthz check failed Dec 08 17:44:06 crc kubenswrapper[5116]: I1208 17:44:06.845581 5116 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-hv2nc" podUID="0deef197-8a46-46ea-a786-7e9518318396" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 08 17:44:06 crc kubenswrapper[5116]: I1208 17:44:06.856461 5116 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/revision-pruner-6-crc" Dec 08 17:44:06 crc kubenswrapper[5116]: I1208 17:44:06.928998 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 08 17:44:06 crc kubenswrapper[5116]: E1208 17:44:06.929273 5116 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 17:44:07.429256837 +0000 UTC m=+117.226380071 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 17:44:07 crc kubenswrapper[5116]: I1208 17:44:07.074956 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-kt94l\" (UID: \"83ea28a4-865d-4cee-aaa2-7adcccfba4a2\") " pod="openshift-image-registry/image-registry-66587d64c8-kt94l" Dec 08 17:44:07 crc kubenswrapper[5116]: E1208 17:44:07.075445 5116 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: 
nodeName:}" failed. No retries permitted until 2025-12-08 17:44:07.575422477 +0000 UTC m=+117.372545711 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-kt94l" (UID: "83ea28a4-865d-4cee-aaa2-7adcccfba4a2") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 17:44:07 crc kubenswrapper[5116]: I1208 17:44:07.075429 5116 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-admission-controller-69db94689b-9bt88" podStartSLOduration=95.075397795 podStartE2EDuration="1m35.075397795s" podCreationTimestamp="2025-12-08 17:42:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-08 17:44:07.074456591 +0000 UTC m=+116.871579845" watchObservedRunningTime="2025-12-08 17:44:07.075397795 +0000 UTC m=+116.872521029" Dec 08 17:44:07 crc kubenswrapper[5116]: I1208 17:44:07.115280 5116 ???:1] "http: TLS handshake error from 192.168.126.11:48390: no serving certificate available for the kubelet" Dec 08 17:44:07 crc kubenswrapper[5116]: I1208 17:44:07.145697 5116 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-4pqqp"] Dec 08 17:44:07 crc kubenswrapper[5116]: I1208 17:44:07.201730 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 08 17:44:07 crc kubenswrapper[5116]: E1208 17:44:07.202816 5116 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 17:44:07.702220551 +0000 UTC m=+117.499343795 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 17:44:07 crc kubenswrapper[5116]: I1208 17:44:07.226745 5116 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-4pqqp" Dec 08 17:44:07 crc kubenswrapper[5116]: I1208 17:44:07.464139 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"redhat-marketplace-dockercfg-gg4w7\"" Dec 08 17:44:07 crc kubenswrapper[5116]: I1208 17:44:07.465209 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-kt94l\" (UID: \"83ea28a4-865d-4cee-aaa2-7adcccfba4a2\") " pod="openshift-image-registry/image-registry-66587d64c8-kt94l" Dec 08 17:44:07 crc kubenswrapper[5116]: I1208 17:44:07.465281 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ab873de1-8a57-4411-a552-1567537bdc67-catalog-content\") pod \"redhat-marketplace-4pqqp\" (UID: \"ab873de1-8a57-4411-a552-1567537bdc67\") " 
pod="openshift-marketplace/redhat-marketplace-4pqqp" Dec 08 17:44:07 crc kubenswrapper[5116]: I1208 17:44:07.465338 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m2wct\" (UniqueName: \"kubernetes.io/projected/ab873de1-8a57-4411-a552-1567537bdc67-kube-api-access-m2wct\") pod \"redhat-marketplace-4pqqp\" (UID: \"ab873de1-8a57-4411-a552-1567537bdc67\") " pod="openshift-marketplace/redhat-marketplace-4pqqp" Dec 08 17:44:07 crc kubenswrapper[5116]: I1208 17:44:07.465591 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ab873de1-8a57-4411-a552-1567537bdc67-utilities\") pod \"redhat-marketplace-4pqqp\" (UID: \"ab873de1-8a57-4411-a552-1567537bdc67\") " pod="openshift-marketplace/redhat-marketplace-4pqqp" Dec 08 17:44:07 crc kubenswrapper[5116]: E1208 17:44:07.472993 5116 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 17:44:07.972926067 +0000 UTC m=+117.770049301 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-kt94l" (UID: "83ea28a4-865d-4cee-aaa2-7adcccfba4a2") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 17:44:07 crc kubenswrapper[5116]: I1208 17:44:07.473471 5116 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-4pqqp"] Dec 08 17:44:07 crc kubenswrapper[5116]: I1208 17:44:07.650131 5116 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-hv2nc container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 08 17:44:07 crc kubenswrapper[5116]: [-]has-synced failed: reason withheld Dec 08 17:44:07 crc kubenswrapper[5116]: [+]process-running ok Dec 08 17:44:07 crc kubenswrapper[5116]: healthz check failed Dec 08 17:44:07 crc kubenswrapper[5116]: I1208 17:44:07.650381 5116 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-hv2nc" podUID="0deef197-8a46-46ea-a786-7e9518318396" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 08 17:44:07 crc kubenswrapper[5116]: I1208 17:44:07.760358 5116 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-7j2rd"] Dec 08 17:44:07 crc kubenswrapper[5116]: I1208 17:44:07.812045 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: 
\"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 08 17:44:07 crc kubenswrapper[5116]: I1208 17:44:07.812422 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-m2wct\" (UniqueName: \"kubernetes.io/projected/ab873de1-8a57-4411-a552-1567537bdc67-kube-api-access-m2wct\") pod \"redhat-marketplace-4pqqp\" (UID: \"ab873de1-8a57-4411-a552-1567537bdc67\") " pod="openshift-marketplace/redhat-marketplace-4pqqp" Dec 08 17:44:07 crc kubenswrapper[5116]: I1208 17:44:07.812467 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ab873de1-8a57-4411-a552-1567537bdc67-utilities\") pod \"redhat-marketplace-4pqqp\" (UID: \"ab873de1-8a57-4411-a552-1567537bdc67\") " pod="openshift-marketplace/redhat-marketplace-4pqqp" Dec 08 17:44:07 crc kubenswrapper[5116]: I1208 17:44:07.812544 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ab873de1-8a57-4411-a552-1567537bdc67-catalog-content\") pod \"redhat-marketplace-4pqqp\" (UID: \"ab873de1-8a57-4411-a552-1567537bdc67\") " pod="openshift-marketplace/redhat-marketplace-4pqqp" Dec 08 17:44:07 crc kubenswrapper[5116]: E1208 17:44:07.812721 5116 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 17:44:08.31269574 +0000 UTC m=+118.109818974 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 17:44:07 crc kubenswrapper[5116]: I1208 17:44:07.813800 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ab873de1-8a57-4411-a552-1567537bdc67-utilities\") pod \"redhat-marketplace-4pqqp\" (UID: \"ab873de1-8a57-4411-a552-1567537bdc67\") " pod="openshift-marketplace/redhat-marketplace-4pqqp" Dec 08 17:44:07 crc kubenswrapper[5116]: I1208 17:44:07.818287 5116 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-7j2rd" Dec 08 17:44:07 crc kubenswrapper[5116]: I1208 17:44:07.819138 5116 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-f2qkz"] Dec 08 17:44:07 crc kubenswrapper[5116]: I1208 17:44:07.836305 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ab873de1-8a57-4411-a552-1567537bdc67-catalog-content\") pod \"redhat-marketplace-4pqqp\" (UID: \"ab873de1-8a57-4411-a552-1567537bdc67\") " pod="openshift-marketplace/redhat-marketplace-4pqqp" Dec 08 17:44:07 crc kubenswrapper[5116]: I1208 17:44:07.845097 5116 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-f2qkz" Dec 08 17:44:07 crc kubenswrapper[5116]: I1208 17:44:07.927734 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n4rcp\" (UniqueName: \"kubernetes.io/projected/088af58f-5679-42e6-9595-945ee162f862-kube-api-access-n4rcp\") pod \"redhat-operators-7j2rd\" (UID: \"088af58f-5679-42e6-9595-945ee162f862\") " pod="openshift-marketplace/redhat-operators-7j2rd" Dec 08 17:44:07 crc kubenswrapper[5116]: I1208 17:44:07.927809 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/088af58f-5679-42e6-9595-945ee162f862-utilities\") pod \"redhat-operators-7j2rd\" (UID: \"088af58f-5679-42e6-9595-945ee162f862\") " pod="openshift-marketplace/redhat-operators-7j2rd" Dec 08 17:44:07 crc kubenswrapper[5116]: I1208 17:44:07.927902 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-kt94l\" (UID: \"83ea28a4-865d-4cee-aaa2-7adcccfba4a2\") " pod="openshift-image-registry/image-registry-66587d64c8-kt94l" Dec 08 17:44:07 crc kubenswrapper[5116]: I1208 17:44:07.927937 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/088af58f-5679-42e6-9595-945ee162f862-catalog-content\") pod \"redhat-operators-7j2rd\" (UID: \"088af58f-5679-42e6-9595-945ee162f862\") " pod="openshift-marketplace/redhat-operators-7j2rd" Dec 08 17:44:07 crc kubenswrapper[5116]: E1208 17:44:07.928490 5116 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" 
failed. No retries permitted until 2025-12-08 17:44:08.428471867 +0000 UTC m=+118.225595101 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-kt94l" (UID: "83ea28a4-865d-4cee-aaa2-7adcccfba4a2") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 17:44:07 crc kubenswrapper[5116]: I1208 17:44:07.947688 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-kdrb8" event={"ID":"56b4bc0e-8c13-439e-9293-f70b35418ce0","Type":"ContainerStarted","Data":"7b0e34723ecacb6bf0f9be52974b81b0f794224ffcbfefe181c3fa053c193eba"} Dec 08 17:44:07 crc kubenswrapper[5116]: I1208 17:44:07.969985 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"redhat-operators-dockercfg-9gxlh\"" Dec 08 17:44:08 crc kubenswrapper[5116]: I1208 17:44:08.013851 5116 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-7j2rd"] Dec 08 17:44:08 crc kubenswrapper[5116]: I1208 17:44:08.031770 5116 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-cdgp8"] Dec 08 17:44:08 crc kubenswrapper[5116]: I1208 17:44:08.034096 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 08 17:44:08 crc kubenswrapper[5116]: I1208 17:44:08.034423 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" 
(UniqueName: \"kubernetes.io/empty-dir/088af58f-5679-42e6-9595-945ee162f862-utilities\") pod \"redhat-operators-7j2rd\" (UID: \"088af58f-5679-42e6-9595-945ee162f862\") " pod="openshift-marketplace/redhat-operators-7j2rd" Dec 08 17:44:08 crc kubenswrapper[5116]: I1208 17:44:08.034523 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7d3964d8-860a-448a-ba5c-309e5343333e-utilities\") pod \"redhat-marketplace-f2qkz\" (UID: \"7d3964d8-860a-448a-ba5c-309e5343333e\") " pod="openshift-marketplace/redhat-marketplace-f2qkz" Dec 08 17:44:08 crc kubenswrapper[5116]: I1208 17:44:08.034549 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/088af58f-5679-42e6-9595-945ee162f862-catalog-content\") pod \"redhat-operators-7j2rd\" (UID: \"088af58f-5679-42e6-9595-945ee162f862\") " pod="openshift-marketplace/redhat-operators-7j2rd" Dec 08 17:44:08 crc kubenswrapper[5116]: I1208 17:44:08.034576 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wbccj\" (UniqueName: \"kubernetes.io/projected/7d3964d8-860a-448a-ba5c-309e5343333e-kube-api-access-wbccj\") pod \"redhat-marketplace-f2qkz\" (UID: \"7d3964d8-860a-448a-ba5c-309e5343333e\") " pod="openshift-marketplace/redhat-marketplace-f2qkz" Dec 08 17:44:08 crc kubenswrapper[5116]: I1208 17:44:08.034591 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7d3964d8-860a-448a-ba5c-309e5343333e-catalog-content\") pod \"redhat-marketplace-f2qkz\" (UID: \"7d3964d8-860a-448a-ba5c-309e5343333e\") " pod="openshift-marketplace/redhat-marketplace-f2qkz" Dec 08 17:44:08 crc kubenswrapper[5116]: I1208 17:44:08.034616 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume 
\"kube-api-access-n4rcp\" (UniqueName: \"kubernetes.io/projected/088af58f-5679-42e6-9595-945ee162f862-kube-api-access-n4rcp\") pod \"redhat-operators-7j2rd\" (UID: \"088af58f-5679-42e6-9595-945ee162f862\") " pod="openshift-marketplace/redhat-operators-7j2rd" Dec 08 17:44:08 crc kubenswrapper[5116]: E1208 17:44:08.035194 5116 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 17:44:08.535177608 +0000 UTC m=+118.332300832 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 17:44:08 crc kubenswrapper[5116]: I1208 17:44:08.035637 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/088af58f-5679-42e6-9595-945ee162f862-utilities\") pod \"redhat-operators-7j2rd\" (UID: \"088af58f-5679-42e6-9595-945ee162f862\") " pod="openshift-marketplace/redhat-operators-7j2rd" Dec 08 17:44:08 crc kubenswrapper[5116]: I1208 17:44:08.035880 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/088af58f-5679-42e6-9595-945ee162f862-catalog-content\") pod \"redhat-operators-7j2rd\" (UID: \"088af58f-5679-42e6-9595-945ee162f862\") " pod="openshift-marketplace/redhat-operators-7j2rd" Dec 08 17:44:08 crc kubenswrapper[5116]: I1208 17:44:08.057692 5116 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-cdgp8" Dec 08 17:44:08 crc kubenswrapper[5116]: I1208 17:44:08.068075 5116 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/dns-default-tlxxd" podStartSLOduration=20.068058654 podStartE2EDuration="20.068058654s" podCreationTimestamp="2025-12-08 17:43:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-08 17:44:08.066087003 +0000 UTC m=+117.863210227" watchObservedRunningTime="2025-12-08 17:44:08.068058654 +0000 UTC m=+117.865181888" Dec 08 17:44:08 crc kubenswrapper[5116]: I1208 17:44:08.083004 5116 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-cdgp8"] Dec 08 17:44:08 crc kubenswrapper[5116]: I1208 17:44:08.084274 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-m2wct\" (UniqueName: \"kubernetes.io/projected/ab873de1-8a57-4411-a552-1567537bdc67-kube-api-access-m2wct\") pod \"redhat-marketplace-4pqqp\" (UID: \"ab873de1-8a57-4411-a552-1567537bdc67\") " pod="openshift-marketplace/redhat-marketplace-4pqqp" Dec 08 17:44:08 crc kubenswrapper[5116]: I1208 17:44:08.089976 5116 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-f2qkz"] Dec 08 17:44:08 crc kubenswrapper[5116]: I1208 17:44:08.114566 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-n4rcp\" (UniqueName: \"kubernetes.io/projected/088af58f-5679-42e6-9595-945ee162f862-kube-api-access-n4rcp\") pod \"redhat-operators-7j2rd\" (UID: \"088af58f-5679-42e6-9595-945ee162f862\") " pod="openshift-marketplace/redhat-operators-7j2rd" Dec 08 17:44:08 crc kubenswrapper[5116]: I1208 17:44:08.137471 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-kt94l\" (UID: \"83ea28a4-865d-4cee-aaa2-7adcccfba4a2\") " pod="openshift-image-registry/image-registry-66587d64c8-kt94l" Dec 08 17:44:08 crc kubenswrapper[5116]: I1208 17:44:08.137539 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7d3964d8-860a-448a-ba5c-309e5343333e-utilities\") pod \"redhat-marketplace-f2qkz\" (UID: \"7d3964d8-860a-448a-ba5c-309e5343333e\") " pod="openshift-marketplace/redhat-marketplace-f2qkz" Dec 08 17:44:08 crc kubenswrapper[5116]: I1208 17:44:08.137582 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-wbccj\" (UniqueName: \"kubernetes.io/projected/7d3964d8-860a-448a-ba5c-309e5343333e-kube-api-access-wbccj\") pod \"redhat-marketplace-f2qkz\" (UID: \"7d3964d8-860a-448a-ba5c-309e5343333e\") " pod="openshift-marketplace/redhat-marketplace-f2qkz" Dec 08 17:44:08 crc kubenswrapper[5116]: I1208 17:44:08.137626 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7d3964d8-860a-448a-ba5c-309e5343333e-catalog-content\") pod \"redhat-marketplace-f2qkz\" (UID: \"7d3964d8-860a-448a-ba5c-309e5343333e\") " pod="openshift-marketplace/redhat-marketplace-f2qkz" Dec 08 17:44:08 crc kubenswrapper[5116]: I1208 17:44:08.138177 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7d3964d8-860a-448a-ba5c-309e5343333e-catalog-content\") pod \"redhat-marketplace-f2qkz\" (UID: \"7d3964d8-860a-448a-ba5c-309e5343333e\") " pod="openshift-marketplace/redhat-marketplace-f2qkz" Dec 08 17:44:08 crc kubenswrapper[5116]: I1208 17:44:08.139392 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/7d3964d8-860a-448a-ba5c-309e5343333e-utilities\") pod \"redhat-marketplace-f2qkz\" (UID: \"7d3964d8-860a-448a-ba5c-309e5343333e\") " pod="openshift-marketplace/redhat-marketplace-f2qkz" Dec 08 17:44:08 crc kubenswrapper[5116]: E1208 17:44:08.139870 5116 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 17:44:08.639848426 +0000 UTC m=+118.436971850 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-kt94l" (UID: "83ea28a4-865d-4cee-aaa2-7adcccfba4a2") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 17:44:08 crc kubenswrapper[5116]: I1208 17:44:08.152032 5116 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-5ft89"] Dec 08 17:44:08 crc kubenswrapper[5116]: I1208 17:44:08.170604 5116 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-4pqqp" Dec 08 17:44:08 crc kubenswrapper[5116]: I1208 17:44:08.277289 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 08 17:44:08 crc kubenswrapper[5116]: I1208 17:44:08.277697 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hdgss\" (UniqueName: \"kubernetes.io/projected/5bc57600-20de-4fda-ba78-b05d745b08d6-kube-api-access-hdgss\") pod \"redhat-operators-cdgp8\" (UID: \"5bc57600-20de-4fda-ba78-b05d745b08d6\") " pod="openshift-marketplace/redhat-operators-cdgp8" Dec 08 17:44:08 crc kubenswrapper[5116]: I1208 17:44:08.277778 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5bc57600-20de-4fda-ba78-b05d745b08d6-catalog-content\") pod \"redhat-operators-cdgp8\" (UID: \"5bc57600-20de-4fda-ba78-b05d745b08d6\") " pod="openshift-marketplace/redhat-operators-cdgp8" Dec 08 17:44:08 crc kubenswrapper[5116]: I1208 17:44:08.277844 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5bc57600-20de-4fda-ba78-b05d745b08d6-utilities\") pod \"redhat-operators-cdgp8\" (UID: \"5bc57600-20de-4fda-ba78-b05d745b08d6\") " pod="openshift-marketplace/redhat-operators-cdgp8" Dec 08 17:44:08 crc kubenswrapper[5116]: E1208 17:44:08.277986 5116 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. 
No retries permitted until 2025-12-08 17:44:08.777951604 +0000 UTC m=+118.575074838 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 17:44:08 crc kubenswrapper[5116]: I1208 17:44:08.294681 5116 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-7j2rd" Dec 08 17:44:08 crc kubenswrapper[5116]: I1208 17:44:08.294794 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-wbccj\" (UniqueName: \"kubernetes.io/projected/7d3964d8-860a-448a-ba5c-309e5343333e-kube-api-access-wbccj\") pod \"redhat-marketplace-f2qkz\" (UID: \"7d3964d8-860a-448a-ba5c-309e5343333e\") " pod="openshift-marketplace/redhat-marketplace-f2qkz" Dec 08 17:44:08 crc kubenswrapper[5116]: I1208 17:44:08.316158 5116 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-f2qkz" Dec 08 17:44:08 crc kubenswrapper[5116]: I1208 17:44:08.383890 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5bc57600-20de-4fda-ba78-b05d745b08d6-utilities\") pod \"redhat-operators-cdgp8\" (UID: \"5bc57600-20de-4fda-ba78-b05d745b08d6\") " pod="openshift-marketplace/redhat-operators-cdgp8" Dec 08 17:44:08 crc kubenswrapper[5116]: I1208 17:44:08.384414 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-hdgss\" (UniqueName: \"kubernetes.io/projected/5bc57600-20de-4fda-ba78-b05d745b08d6-kube-api-access-hdgss\") pod \"redhat-operators-cdgp8\" (UID: \"5bc57600-20de-4fda-ba78-b05d745b08d6\") " pod="openshift-marketplace/redhat-operators-cdgp8" Dec 08 17:44:08 crc kubenswrapper[5116]: I1208 17:44:08.384471 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5bc57600-20de-4fda-ba78-b05d745b08d6-catalog-content\") pod \"redhat-operators-cdgp8\" (UID: \"5bc57600-20de-4fda-ba78-b05d745b08d6\") " pod="openshift-marketplace/redhat-operators-cdgp8" Dec 08 17:44:08 crc kubenswrapper[5116]: I1208 17:44:08.384503 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-kt94l\" (UID: \"83ea28a4-865d-4cee-aaa2-7adcccfba4a2\") " pod="openshift-image-registry/image-registry-66587d64c8-kt94l" Dec 08 17:44:08 crc kubenswrapper[5116]: E1208 17:44:08.384857 5116 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. 
No retries permitted until 2025-12-08 17:44:08.88484421 +0000 UTC m=+118.681967444 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-kt94l" (UID: "83ea28a4-865d-4cee-aaa2-7adcccfba4a2") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 17:44:08 crc kubenswrapper[5116]: I1208 17:44:08.385232 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5bc57600-20de-4fda-ba78-b05d745b08d6-utilities\") pod \"redhat-operators-cdgp8\" (UID: \"5bc57600-20de-4fda-ba78-b05d745b08d6\") " pod="openshift-marketplace/redhat-operators-cdgp8" Dec 08 17:44:08 crc kubenswrapper[5116]: I1208 17:44:08.385755 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5bc57600-20de-4fda-ba78-b05d745b08d6-catalog-content\") pod \"redhat-operators-cdgp8\" (UID: \"5bc57600-20de-4fda-ba78-b05d745b08d6\") " pod="openshift-marketplace/redhat-operators-cdgp8" Dec 08 17:44:08 crc kubenswrapper[5116]: I1208 17:44:08.454492 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-hdgss\" (UniqueName: \"kubernetes.io/projected/5bc57600-20de-4fda-ba78-b05d745b08d6-kube-api-access-hdgss\") pod \"redhat-operators-cdgp8\" (UID: \"5bc57600-20de-4fda-ba78-b05d745b08d6\") " pod="openshift-marketplace/redhat-operators-cdgp8" Dec 08 17:44:08 crc kubenswrapper[5116]: I1208 17:44:08.485792 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 08 17:44:08 crc kubenswrapper[5116]: E1208 17:44:08.486565 5116 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 17:44:08.986531249 +0000 UTC m=+118.783654483 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 17:44:08 crc kubenswrapper[5116]: I1208 17:44:08.486674 5116 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-cdgp8" Dec 08 17:44:08 crc kubenswrapper[5116]: I1208 17:44:08.592758 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-kt94l\" (UID: \"83ea28a4-865d-4cee-aaa2-7adcccfba4a2\") " pod="openshift-image-registry/image-registry-66587d64c8-kt94l" Dec 08 17:44:08 crc kubenswrapper[5116]: E1208 17:44:08.599854 5116 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. 
No retries permitted until 2025-12-08 17:44:09.099838032 +0000 UTC m=+118.896961266 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-kt94l" (UID: "83ea28a4-865d-4cee-aaa2-7adcccfba4a2") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 17:44:08 crc kubenswrapper[5116]: I1208 17:44:08.651944 5116 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-hv2nc container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 08 17:44:08 crc kubenswrapper[5116]: [-]has-synced failed: reason withheld Dec 08 17:44:08 crc kubenswrapper[5116]: [+]process-running ok Dec 08 17:44:08 crc kubenswrapper[5116]: healthz check failed Dec 08 17:44:08 crc kubenswrapper[5116]: I1208 17:44:08.652530 5116 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-hv2nc" podUID="0deef197-8a46-46ea-a786-7e9518318396" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 08 17:44:08 crc kubenswrapper[5116]: I1208 17:44:08.695440 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 08 17:44:08 crc kubenswrapper[5116]: E1208 17:44:08.695716 5116 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 17:44:09.195701851 +0000 UTC m=+118.992825085 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 17:44:08 crc kubenswrapper[5116]: I1208 17:44:08.802392 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-kt94l\" (UID: \"83ea28a4-865d-4cee-aaa2-7adcccfba4a2\") " pod="openshift-image-registry/image-registry-66587d64c8-kt94l" Dec 08 17:44:08 crc kubenswrapper[5116]: E1208 17:44:08.802971 5116 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 17:44:09.302946035 +0000 UTC m=+119.100069269 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-kt94l" (UID: "83ea28a4-865d-4cee-aaa2-7adcccfba4a2") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 17:44:08 crc kubenswrapper[5116]: I1208 17:44:08.904068 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 08 17:44:08 crc kubenswrapper[5116]: E1208 17:44:08.904489 5116 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 17:44:09.404466151 +0000 UTC m=+119.201589405 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 17:44:08 crc kubenswrapper[5116]: I1208 17:44:08.965803 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" event={"ID":"f863fff9-286a-45fa-b8f0-8a86994b8440","Type":"ContainerStarted","Data":"f9f9b3277234ff39d9dd2ad1c22ed8c27fa4e12c31d7e26d280cb6f95871a714"} Dec 08 17:44:08 crc kubenswrapper[5116]: I1208 17:44:08.978544 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-5ft89" event={"ID":"19151390-7d67-4ae9-8520-ae20b8eb46f8","Type":"ContainerStarted","Data":"91e2055b941e1321464106ee8b984813e05d3f7ce077b36f3225957d8e9d4d15"} Dec 08 17:44:08 crc kubenswrapper[5116]: I1208 17:44:08.979645 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-fhkjl" event={"ID":"17b87002-b798-480a-8e17-83053d698239","Type":"ContainerStarted","Data":"e42e2cdd7631def9e91f3bbf9f2c5dc0c5fc146e40b3b747ae973857c21aba2b"} Dec 08 17:44:08 crc kubenswrapper[5116]: I1208 17:44:08.988619 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" event={"ID":"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141","Type":"ContainerStarted","Data":"62dec6587b7406232e75eb34c7f39d37527b38e735797d21ed6d771bca2a3f4b"} Dec 08 17:44:09 crc kubenswrapper[5116]: I1208 17:44:09.005307 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-kt94l\" (UID: \"83ea28a4-865d-4cee-aaa2-7adcccfba4a2\") " pod="openshift-image-registry/image-registry-66587d64c8-kt94l" Dec 08 17:44:09 crc kubenswrapper[5116]: E1208 17:44:09.005674 5116 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 17:44:09.505661288 +0000 UTC m=+119.302784522 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-kt94l" (UID: "83ea28a4-865d-4cee-aaa2-7adcccfba4a2") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 17:44:09 crc kubenswrapper[5116]: I1208 17:44:09.017807 5116 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-nc4fk"] Dec 08 17:44:09 crc kubenswrapper[5116]: I1208 17:44:09.077723 5116 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-bnq4b"] Dec 08 17:44:09 crc kubenswrapper[5116]: I1208 17:44:09.140580 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 08 17:44:09 crc kubenswrapper[5116]: E1208 17:44:09.142169 5116 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 17:44:09.642139685 +0000 UTC m=+119.439262919 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 17:44:09 crc kubenswrapper[5116]: I1208 17:44:09.142396 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-kt94l\" (UID: \"83ea28a4-865d-4cee-aaa2-7adcccfba4a2\") " pod="openshift-image-registry/image-registry-66587d64c8-kt94l" Dec 08 17:44:09 crc kubenswrapper[5116]: E1208 17:44:09.143136 5116 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 17:44:09.64312696 +0000 UTC m=+119.440250194 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-kt94l" (UID: "83ea28a4-865d-4cee-aaa2-7adcccfba4a2") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 17:44:09 crc kubenswrapper[5116]: I1208 17:44:09.170791 5116 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-11-crc"] Dec 08 17:44:09 crc kubenswrapper[5116]: I1208 17:44:09.190755 5116 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-11-crc" Dec 08 17:44:09 crc kubenswrapper[5116]: I1208 17:44:09.193446 5116 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-11-crc"] Dec 08 17:44:09 crc kubenswrapper[5116]: I1208 17:44:09.198059 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-apiserver\"/\"kube-root-ca.crt\"" Dec 08 17:44:09 crc kubenswrapper[5116]: I1208 17:44:09.203969 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-apiserver\"/\"installer-sa-dockercfg-bqqnb\"" Dec 08 17:44:09 crc kubenswrapper[5116]: W1208 17:44:09.220924 5116 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod46bc2c2e_fed2_4cf1_afc1_2fb750553bc5.slice/crio-ed9f93f8c2ede5200d28f602ca02ae5362c701e12611cae4369015a2d367caab WatchSource:0}: Error finding container ed9f93f8c2ede5200d28f602ca02ae5362c701e12611cae4369015a2d367caab: Status 404 returned error can't find the container with id ed9f93f8c2ede5200d28f602ca02ae5362c701e12611cae4369015a2d367caab Dec 08 17:44:09 crc kubenswrapper[5116]: I1208 17:44:09.238637 5116 
kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-mvhzm"] Dec 08 17:44:09 crc kubenswrapper[5116]: I1208 17:44:09.243536 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 08 17:44:09 crc kubenswrapper[5116]: I1208 17:44:09.243736 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/c8a03c0f-ca27-46b5-91b5-851b4e8526bb-kubelet-dir\") pod \"revision-pruner-11-crc\" (UID: \"c8a03c0f-ca27-46b5-91b5-851b4e8526bb\") " pod="openshift-kube-apiserver/revision-pruner-11-crc" Dec 08 17:44:09 crc kubenswrapper[5116]: I1208 17:44:09.243776 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c8a03c0f-ca27-46b5-91b5-851b4e8526bb-kube-api-access\") pod \"revision-pruner-11-crc\" (UID: \"c8a03c0f-ca27-46b5-91b5-851b4e8526bb\") " pod="openshift-kube-apiserver/revision-pruner-11-crc" Dec 08 17:44:09 crc kubenswrapper[5116]: E1208 17:44:09.243888 5116 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 17:44:09.743865326 +0000 UTC m=+119.540988560 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 17:44:09 crc kubenswrapper[5116]: I1208 17:44:09.381363 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/c8a03c0f-ca27-46b5-91b5-851b4e8526bb-kubelet-dir\") pod \"revision-pruner-11-crc\" (UID: \"c8a03c0f-ca27-46b5-91b5-851b4e8526bb\") " pod="openshift-kube-apiserver/revision-pruner-11-crc" Dec 08 17:44:09 crc kubenswrapper[5116]: I1208 17:44:09.381921 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c8a03c0f-ca27-46b5-91b5-851b4e8526bb-kube-api-access\") pod \"revision-pruner-11-crc\" (UID: \"c8a03c0f-ca27-46b5-91b5-851b4e8526bb\") " pod="openshift-kube-apiserver/revision-pruner-11-crc" Dec 08 17:44:09 crc kubenswrapper[5116]: I1208 17:44:09.381989 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-kt94l\" (UID: \"83ea28a4-865d-4cee-aaa2-7adcccfba4a2\") " pod="openshift-image-registry/image-registry-66587d64c8-kt94l" Dec 08 17:44:09 crc kubenswrapper[5116]: E1208 17:44:09.383144 5116 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. 
No retries permitted until 2025-12-08 17:44:09.883130035 +0000 UTC m=+119.680253259 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-kt94l" (UID: "83ea28a4-865d-4cee-aaa2-7adcccfba4a2") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 08 17:44:09 crc kubenswrapper[5116]: I1208 17:44:09.383543 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/c8a03c0f-ca27-46b5-91b5-851b4e8526bb-kubelet-dir\") pod \"revision-pruner-11-crc\" (UID: \"c8a03c0f-ca27-46b5-91b5-851b4e8526bb\") " pod="openshift-kube-apiserver/revision-pruner-11-crc"
Dec 08 17:44:09 crc kubenswrapper[5116]: I1208 17:44:09.535705 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Dec 08 17:44:09 crc kubenswrapper[5116]: E1208 17:44:09.536633 5116 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 17:44:10.036597024 +0000 UTC m=+119.833720268 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 08 17:44:09 crc kubenswrapper[5116]: I1208 17:44:09.548486 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c8a03c0f-ca27-46b5-91b5-851b4e8526bb-kube-api-access\") pod \"revision-pruner-11-crc\" (UID: \"c8a03c0f-ca27-46b5-91b5-851b4e8526bb\") " pod="openshift-kube-apiserver/revision-pruner-11-crc"
Dec 08 17:44:09 crc kubenswrapper[5116]: I1208 17:44:09.548989 5116 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-dns/dns-default-tlxxd"
Dec 08 17:44:09 crc kubenswrapper[5116]: I1208 17:44:09.577659 5116 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-11-crc"
Dec 08 17:44:09 crc kubenswrapper[5116]: I1208 17:44:09.637522 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-kt94l\" (UID: \"83ea28a4-865d-4cee-aaa2-7adcccfba4a2\") " pod="openshift-image-registry/image-registry-66587d64c8-kt94l"
Dec 08 17:44:09 crc kubenswrapper[5116]: E1208 17:44:09.639124 5116 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 17:44:10.139107126 +0000 UTC m=+119.936230360 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-kt94l" (UID: "83ea28a4-865d-4cee-aaa2-7adcccfba4a2") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 08 17:44:09 crc kubenswrapper[5116]: I1208 17:44:09.653998 5116 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-hv2nc container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Dec 08 17:44:09 crc kubenswrapper[5116]: [-]has-synced failed: reason withheld
Dec 08 17:44:09 crc kubenswrapper[5116]: [+]process-running ok
Dec 08 17:44:09 crc kubenswrapper[5116]: healthz check failed
Dec 08 17:44:09 crc kubenswrapper[5116]: I1208 17:44:09.654146 5116 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-hv2nc" podUID="0deef197-8a46-46ea-a786-7e9518318396" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Dec 08 17:44:09 crc kubenswrapper[5116]: I1208 17:44:09.676288 5116 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-7rqg8"]
Dec 08 17:44:09 crc kubenswrapper[5116]: I1208 17:44:09.686409 5116 scope.go:117] "RemoveContainer" containerID="0f4e3801c41a7985f11a820bd7e71ba696a24f77bb1468bbcd579c5b5d8a1ba3"
Dec 08 17:44:09 crc kubenswrapper[5116]: I1208 17:44:09.742009 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Dec 08 17:44:09 crc kubenswrapper[5116]: E1208 17:44:09.742944 5116 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 17:44:10.242923441 +0000 UTC m=+120.040046675 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 08 17:44:09 crc kubenswrapper[5116]: I1208 17:44:09.748442 5116 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29420250-rrwn6"
Dec 08 17:44:09 crc kubenswrapper[5116]: I1208 17:44:09.789208 5116 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-apiserver/apiserver-9ddfb9f55-b2n2w"
Dec 08 17:44:09 crc kubenswrapper[5116]: W1208 17:44:09.791030 5116 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0687a333_2a42_4237_9673_e0210c45dc22.slice/crio-26a25175f5f53eb64e5f23fece348d2f198029597137ba3a2c04f5277b8f9d78 WatchSource:0}: Error finding container 26a25175f5f53eb64e5f23fece348d2f198029597137ba3a2c04f5277b8f9d78: Status 404 returned error can't find the container with id 26a25175f5f53eb64e5f23fece348d2f198029597137ba3a2c04f5277b8f9d78
Dec 08 17:44:09 crc kubenswrapper[5116]: I1208 17:44:09.809089 5116 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-apiserver/apiserver-9ddfb9f55-b2n2w"
Dec 08 17:44:09 crc kubenswrapper[5116]: I1208 17:44:09.849253 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h4k28\" (UniqueName: \"kubernetes.io/projected/dc8a8f38-928e-445a-b2d0-56c91cff7483-kube-api-access-h4k28\") pod \"dc8a8f38-928e-445a-b2d0-56c91cff7483\" (UID: \"dc8a8f38-928e-445a-b2d0-56c91cff7483\") "
Dec 08 17:44:09 crc kubenswrapper[5116]: I1208 17:44:09.849326 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/dc8a8f38-928e-445a-b2d0-56c91cff7483-secret-volume\") pod \"dc8a8f38-928e-445a-b2d0-56c91cff7483\" (UID: \"dc8a8f38-928e-445a-b2d0-56c91cff7483\") "
Dec 08 17:44:09 crc kubenswrapper[5116]: I1208 17:44:09.849555 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/dc8a8f38-928e-445a-b2d0-56c91cff7483-config-volume\") pod \"dc8a8f38-928e-445a-b2d0-56c91cff7483\" (UID: \"dc8a8f38-928e-445a-b2d0-56c91cff7483\") "
Dec 08 17:44:09 crc kubenswrapper[5116]: I1208 17:44:09.849700 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-kt94l\" (UID: \"83ea28a4-865d-4cee-aaa2-7adcccfba4a2\") " pod="openshift-image-registry/image-registry-66587d64c8-kt94l"
Dec 08 17:44:09 crc kubenswrapper[5116]: I1208 17:44:09.852512 5116 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler/revision-pruner-6-crc"]
Dec 08 17:44:09 crc kubenswrapper[5116]: I1208 17:44:09.858712 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dc8a8f38-928e-445a-b2d0-56c91cff7483-config-volume" (OuterVolumeSpecName: "config-volume") pod "dc8a8f38-928e-445a-b2d0-56c91cff7483" (UID: "dc8a8f38-928e-445a-b2d0-56c91cff7483"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Dec 08 17:44:09 crc kubenswrapper[5116]: E1208 17:44:09.859531 5116 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 17:44:10.35951758 +0000 UTC m=+120.156640814 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-kt94l" (UID: "83ea28a4-865d-4cee-aaa2-7adcccfba4a2") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 08 17:44:09 crc kubenswrapper[5116]: I1208 17:44:09.906816 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dc8a8f38-928e-445a-b2d0-56c91cff7483-kube-api-access-h4k28" (OuterVolumeSpecName: "kube-api-access-h4k28") pod "dc8a8f38-928e-445a-b2d0-56c91cff7483" (UID: "dc8a8f38-928e-445a-b2d0-56c91cff7483"). InnerVolumeSpecName "kube-api-access-h4k28". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Dec 08 17:44:09 crc kubenswrapper[5116]: I1208 17:44:09.906821 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dc8a8f38-928e-445a-b2d0-56c91cff7483-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "dc8a8f38-928e-445a-b2d0-56c91cff7483" (UID: "dc8a8f38-928e-445a-b2d0-56c91cff7483"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Dec 08 17:44:09 crc kubenswrapper[5116]: I1208 17:44:09.959431 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Dec 08 17:44:09 crc kubenswrapper[5116]: I1208 17:44:09.959752 5116 reconciler_common.go:299] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/dc8a8f38-928e-445a-b2d0-56c91cff7483-config-volume\") on node \"crc\" DevicePath \"\""
Dec 08 17:44:09 crc kubenswrapper[5116]: I1208 17:44:09.959777 5116 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-h4k28\" (UniqueName: \"kubernetes.io/projected/dc8a8f38-928e-445a-b2d0-56c91cff7483-kube-api-access-h4k28\") on node \"crc\" DevicePath \"\""
Dec 08 17:44:09 crc kubenswrapper[5116]: I1208 17:44:09.959789 5116 reconciler_common.go:299] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/dc8a8f38-928e-445a-b2d0-56c91cff7483-secret-volume\") on node \"crc\" DevicePath \"\""
Dec 08 17:44:09 crc kubenswrapper[5116]: E1208 17:44:09.959881 5116 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 17:44:10.459857775 +0000 UTC m=+120.256981009 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 08 17:44:10 crc kubenswrapper[5116]: I1208 17:44:10.110569 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-kt94l\" (UID: \"83ea28a4-865d-4cee-aaa2-7adcccfba4a2\") " pod="openshift-image-registry/image-registry-66587d64c8-kt94l"
Dec 08 17:44:10 crc kubenswrapper[5116]: E1208 17:44:10.111605 5116 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 17:44:10.611586708 +0000 UTC m=+120.408709942 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-kt94l" (UID: "83ea28a4-865d-4cee-aaa2-7adcccfba4a2") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 08 17:44:10 crc kubenswrapper[5116]: I1208 17:44:10.212909 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Dec 08 17:44:10 crc kubenswrapper[5116]: E1208 17:44:10.213766 5116 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 17:44:10.713743461 +0000 UTC m=+120.510866695 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 08 17:44:10 crc kubenswrapper[5116]: I1208 17:44:10.328031 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-kt94l\" (UID: \"83ea28a4-865d-4cee-aaa2-7adcccfba4a2\") " pod="openshift-image-registry/image-registry-66587d64c8-kt94l"
Dec 08 17:44:10 crc kubenswrapper[5116]: E1208 17:44:10.328488 5116 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 17:44:10.828469401 +0000 UTC m=+120.625592635 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-kt94l" (UID: "83ea28a4-865d-4cee-aaa2-7adcccfba4a2") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 08 17:44:10 crc kubenswrapper[5116]: I1208 17:44:10.330838 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bnq4b" event={"ID":"46bc2c2e-fed2-4cf1-afc1-2fb750553bc5","Type":"ContainerStarted","Data":"ed9f93f8c2ede5200d28f602ca02ae5362c701e12611cae4369015a2d367caab"}
Dec 08 17:44:10 crc kubenswrapper[5116]: I1208 17:44:10.340403 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" event={"ID":"f863fff9-286a-45fa-b8f0-8a86994b8440","Type":"ContainerStarted","Data":"b4b410259c7c8980a717de9709a51b430ea45cbece9290ad9753fae20865afb2"}
Dec 08 17:44:10 crc kubenswrapper[5116]: I1208 17:44:10.343121 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7rqg8" event={"ID":"0687a333-2a42-4237-9673-e0210c45dc22","Type":"ContainerStarted","Data":"26a25175f5f53eb64e5f23fece348d2f198029597137ba3a2c04f5277b8f9d78"}
Dec 08 17:44:10 crc kubenswrapper[5116]: I1208 17:44:10.410459 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-mvhzm" event={"ID":"d7968a24-caaf-4115-992d-3678c03e895a","Type":"ContainerStarted","Data":"49dd49f4ee7254fbe0e0b7b71be5f9f3d4049b346e4f9cc1cb72ca25fae0d548"}
Dec 08 17:44:10 crc kubenswrapper[5116]: I1208 17:44:10.419274 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-nc4fk" event={"ID":"b15bd0e2-4143-436c-8dc2-0fc2e33cef62","Type":"ContainerStarted","Data":"1f6bf72b54abbdbe1d4134eea67af919fe431b560afa36523f0b282b919b99d2"}
Dec 08 17:44:10 crc kubenswrapper[5116]: I1208 17:44:10.428980 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Dec 08 17:44:10 crc kubenswrapper[5116]: E1208 17:44:10.429609 5116 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 17:44:10.929592706 +0000 UTC m=+120.726715940 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 08 17:44:10 crc kubenswrapper[5116]: I1208 17:44:10.429656 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-kt94l\" (UID: \"83ea28a4-865d-4cee-aaa2-7adcccfba4a2\") " pod="openshift-image-registry/image-registry-66587d64c8-kt94l"
Dec 08 17:44:10 crc kubenswrapper[5116]: E1208 17:44:10.431298 5116 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 17:44:10.931272679 +0000 UTC m=+120.728395913 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-kt94l" (UID: "83ea28a4-865d-4cee-aaa2-7adcccfba4a2") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 08 17:44:10 crc kubenswrapper[5116]: I1208 17:44:10.441345 5116 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-7j2rd"]
Dec 08 17:44:10 crc kubenswrapper[5116]: I1208 17:44:10.445731 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-fhkjl" event={"ID":"17b87002-b798-480a-8e17-83053d698239","Type":"ContainerStarted","Data":"80a99bf3ada008b78d17b65ae4c49186b4224eabe39b067c5c8d642d3827cab8"}
Dec 08 17:44:10 crc kubenswrapper[5116]: I1208 17:44:10.445810 5116 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-network-diagnostics/network-check-target-fhkjl"
Dec 08 17:44:10 crc kubenswrapper[5116]: I1208 17:44:10.457107 5116 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29420250-rrwn6"
Dec 08 17:44:10 crc kubenswrapper[5116]: I1208 17:44:10.457175 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29420250-rrwn6" event={"ID":"dc8a8f38-928e-445a-b2d0-56c91cff7483","Type":"ContainerDied","Data":"5fabfdddb682043d6778d1f166312842d0ac1c778e75c90c0f0b3466f9c1ea43"}
Dec 08 17:44:10 crc kubenswrapper[5116]: I1208 17:44:10.457218 5116 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5fabfdddb682043d6778d1f166312842d0ac1c778e75c90c0f0b3466f9c1ea43"
Dec 08 17:44:10 crc kubenswrapper[5116]: I1208 17:44:10.482196 5116 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-4pqqp"]
Dec 08 17:44:10 crc kubenswrapper[5116]: I1208 17:44:10.516499 5116 patch_prober.go:28] interesting pod/console-64d44f6ddf-l4b2c container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.24:8443/health\": dial tcp 10.217.0.24:8443: connect: connection refused" start-of-body=
Dec 08 17:44:10 crc kubenswrapper[5116]: I1208 17:44:10.516620 5116 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-console/console-64d44f6ddf-l4b2c" podUID="eaf2ae84-8492-41c0-b678-ab302371258a" containerName="console" probeResult="failure" output="Get \"https://10.217.0.24:8443/health\": dial tcp 10.217.0.24:8443: connect: connection refused"
Dec 08 17:44:10 crc kubenswrapper[5116]: I1208 17:44:10.521437 5116 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-cdgp8"]
Dec 08 17:44:10 crc kubenswrapper[5116]: I1208 17:44:10.530581 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Dec 08 17:44:10 crc kubenswrapper[5116]: E1208 17:44:10.531406 5116 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 17:44:11.031383449 +0000 UTC m=+120.828506683 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 08 17:44:10 crc kubenswrapper[5116]: I1208 17:44:10.620708 5116 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-f2qkz"]
Dec 08 17:44:10 crc kubenswrapper[5116]: I1208 17:44:10.645834 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-kt94l\" (UID: \"83ea28a4-865d-4cee-aaa2-7adcccfba4a2\") " pod="openshift-image-registry/image-registry-66587d64c8-kt94l"
Dec 08 17:44:10 crc kubenswrapper[5116]: E1208 17:44:10.646673 5116 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 17:44:11.146650012 +0000 UTC m=+120.943773306 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-kt94l" (UID: "83ea28a4-865d-4cee-aaa2-7adcccfba4a2") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 08 17:44:10 crc kubenswrapper[5116]: I1208 17:44:10.650165 5116 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-hv2nc container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Dec 08 17:44:10 crc kubenswrapper[5116]: [-]has-synced failed: reason withheld
Dec 08 17:44:10 crc kubenswrapper[5116]: [+]process-running ok
Dec 08 17:44:10 crc kubenswrapper[5116]: healthz check failed
Dec 08 17:44:10 crc kubenswrapper[5116]: I1208 17:44:10.650237 5116 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-hv2nc" podUID="0deef197-8a46-46ea-a786-7e9518318396" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Dec 08 17:44:10 crc kubenswrapper[5116]: I1208 17:44:10.749342 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Dec 08 17:44:10 crc kubenswrapper[5116]: E1208 17:44:10.749732 5116 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 17:44:11.249709548 +0000 UTC m=+121.046832772 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 08 17:44:10 crc kubenswrapper[5116]: I1208 17:44:10.877529 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-kt94l\" (UID: \"83ea28a4-865d-4cee-aaa2-7adcccfba4a2\") " pod="openshift-image-registry/image-registry-66587d64c8-kt94l"
Dec 08 17:44:10 crc kubenswrapper[5116]: E1208 17:44:10.878137 5116 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 17:44:11.378121064 +0000 UTC m=+121.175244298 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-kt94l" (UID: "83ea28a4-865d-4cee-aaa2-7adcccfba4a2") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 08 17:44:10 crc kubenswrapper[5116]: I1208 17:44:10.978351 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Dec 08 17:44:10 crc kubenswrapper[5116]: E1208 17:44:10.978748 5116 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 17:44:11.478727066 +0000 UTC m=+121.275850300 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 08 17:44:10 crc kubenswrapper[5116]: W1208 17:44:10.978801 5116 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5bc57600_20de_4fda_ba78_b05d745b08d6.slice/crio-f9f64e7e995c3bbb4a27a33cd239473cd09b9b79a4b47d1c2ea3f14ab93d1671 WatchSource:0}: Error finding container f9f64e7e995c3bbb4a27a33cd239473cd09b9b79a4b47d1c2ea3f14ab93d1671: Status 404 returned error can't find the container with id f9f64e7e995c3bbb4a27a33cd239473cd09b9b79a4b47d1c2ea3f14ab93d1671
Dec 08 17:44:11 crc kubenswrapper[5116]: I1208 17:44:11.044924 5116 patch_prober.go:28] interesting pod/downloads-747b44746d-4msk8 container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.15:8080/\": dial tcp 10.217.0.15:8080: connect: connection refused" start-of-body=
Dec 08 17:44:11 crc kubenswrapper[5116]: I1208 17:44:11.044997 5116 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-747b44746d-4msk8" podUID="e71c8014-5266-4483-8037-e8d9e7995c1b" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.15:8080/\": dial tcp 10.217.0.15:8080: connect: connection refused"
Dec 08 17:44:11 crc kubenswrapper[5116]: I1208 17:44:11.084255 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-kt94l\" (UID: \"83ea28a4-865d-4cee-aaa2-7adcccfba4a2\") " pod="openshift-image-registry/image-registry-66587d64c8-kt94l"
Dec 08 17:44:11 crc kubenswrapper[5116]: E1208 17:44:11.084707 5116 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 17:44:11.584687197 +0000 UTC m=+121.381810431 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-kt94l" (UID: "83ea28a4-865d-4cee-aaa2-7adcccfba4a2") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 08 17:44:11 crc kubenswrapper[5116]: I1208 17:44:11.116788 5116 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-11-crc"]
Dec 08 17:44:11 crc kubenswrapper[5116]: I1208 17:44:11.249204 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Dec 08 17:44:11 crc kubenswrapper[5116]: I1208 17:44:11.249330 5116 patch_prober.go:28] interesting pod/downloads-747b44746d-4msk8 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.15:8080/\": dial tcp 10.217.0.15:8080: connect: connection refused" start-of-body=
Dec 08 17:44:11 crc kubenswrapper[5116]: I1208 17:44:11.249747 5116 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-747b44746d-4msk8" podUID="e71c8014-5266-4483-8037-e8d9e7995c1b" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.15:8080/\": dial tcp 10.217.0.15:8080: connect: connection refused"
Dec 08 17:44:11 crc kubenswrapper[5116]: E1208 17:44:11.249573 5116 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 17:44:11.749553044 +0000 UTC m=+121.546676278 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 08 17:44:11 crc kubenswrapper[5116]: I1208 17:44:11.250541 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-kt94l\" (UID: \"83ea28a4-865d-4cee-aaa2-7adcccfba4a2\") " pod="openshift-image-registry/image-registry-66587d64c8-kt94l"
Dec 08 17:44:11 crc kubenswrapper[5116]: E1208 17:44:11.250970 5116 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 17:44:11.75095297 +0000 UTC m=+121.548076204 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-kt94l" (UID: "83ea28a4-865d-4cee-aaa2-7adcccfba4a2") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 08 17:44:11 crc kubenswrapper[5116]: I1208 17:44:11.360705 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Dec 08 17:44:11 crc kubenswrapper[5116]: E1208 17:44:11.361100 5116 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 17:44:11.861065629 +0000 UTC m=+121.658188863 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 08 17:44:11 crc kubenswrapper[5116]: I1208 17:44:11.463196 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-kt94l\" (UID: \"83ea28a4-865d-4cee-aaa2-7adcccfba4a2\") " pod="openshift-image-registry/image-registry-66587d64c8-kt94l"
Dec 08 17:44:11 crc kubenswrapper[5116]: E1208 17:44:11.463860 5116 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 17:44:11.963843008 +0000 UTC m=+121.760966232 (durationBeforeRetry 500ms).
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-kt94l" (UID: "83ea28a4-865d-4cee-aaa2-7adcccfba4a2") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 17:44:11 crc kubenswrapper[5116]: I1208 17:44:11.537551 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-11-crc" event={"ID":"c8a03c0f-ca27-46b5-91b5-851b4e8526bb","Type":"ContainerStarted","Data":"4ccf4716e499a1beb289b722b593120c88edbe3a1ae14c47460b2688caaf6251"} Dec 08 17:44:11 crc kubenswrapper[5116]: I1208 17:44:11.561594 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-f2qkz" event={"ID":"7d3964d8-860a-448a-ba5c-309e5343333e","Type":"ContainerStarted","Data":"c917b2328811530035ef9e3feb724f33def27d3634d935d71ed9324dbbfa9046"} Dec 08 17:44:11 crc kubenswrapper[5116]: I1208 17:44:11.565887 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 08 17:44:11 crc kubenswrapper[5116]: E1208 17:44:11.566183 5116 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 17:44:12.066119154 +0000 UTC m=+121.863242388 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 17:44:11 crc kubenswrapper[5116]: I1208 17:44:11.572635 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7j2rd" event={"ID":"088af58f-5679-42e6-9595-945ee162f862","Type":"ContainerStarted","Data":"704b20e063d9691471af2b4538ea9003a88e5c0001fae349c8709b570cb2b51f"} Dec 08 17:44:11 crc kubenswrapper[5116]: I1208 17:44:11.584290 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4pqqp" event={"ID":"ab873de1-8a57-4411-a552-1567537bdc67","Type":"ContainerStarted","Data":"dcabe2db418651711c42829dfbb07467b74149f20e829caf060552b3dd24f516"} Dec 08 17:44:11 crc kubenswrapper[5116]: I1208 17:44:11.611569 5116 generic.go:358] "Generic (PLEG): container finished" podID="46bc2c2e-fed2-4cf1-afc1-2fb750553bc5" containerID="77b634b534403161f71aeeb1268e1a21f205b1da4c6de436cbe8adb4a8468bab" exitCode=0 Dec 08 17:44:11 crc kubenswrapper[5116]: I1208 17:44:11.612228 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bnq4b" event={"ID":"46bc2c2e-fed2-4cf1-afc1-2fb750553bc5","Type":"ContainerDied","Data":"77b634b534403161f71aeeb1268e1a21f205b1da4c6de436cbe8adb4a8468bab"} Dec 08 17:44:11 crc kubenswrapper[5116]: I1208 17:44:11.665353 5116 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-hv2nc container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 08 17:44:11 
crc kubenswrapper[5116]: [-]has-synced failed: reason withheld Dec 08 17:44:11 crc kubenswrapper[5116]: [+]process-running ok Dec 08 17:44:11 crc kubenswrapper[5116]: healthz check failed Dec 08 17:44:11 crc kubenswrapper[5116]: I1208 17:44:11.665445 5116 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-hv2nc" podUID="0deef197-8a46-46ea-a786-7e9518318396" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 08 17:44:11 crc kubenswrapper[5116]: I1208 17:44:11.668458 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-kt94l\" (UID: \"83ea28a4-865d-4cee-aaa2-7adcccfba4a2\") " pod="openshift-image-registry/image-registry-66587d64c8-kt94l" Dec 08 17:44:11 crc kubenswrapper[5116]: E1208 17:44:11.669075 5116 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 17:44:12.169033165 +0000 UTC m=+121.966156399 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-kt94l" (UID: "83ea28a4-865d-4cee-aaa2-7adcccfba4a2") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 17:44:11 crc kubenswrapper[5116]: I1208 17:44:11.712743 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7rqg8" event={"ID":"0687a333-2a42-4237-9673-e0210c45dc22","Type":"ContainerStarted","Data":"6e6e759ebee28d8375df5c08ea9996eaeda9345621733989d197eda3ee7bb30a"} Dec 08 17:44:11 crc kubenswrapper[5116]: I1208 17:44:11.781120 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 08 17:44:11 crc kubenswrapper[5116]: E1208 17:44:11.783391 5116 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 17:44:12.283367695 +0000 UTC m=+122.080490929 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 17:44:11 crc kubenswrapper[5116]: I1208 17:44:11.856716 5116 generic.go:358] "Generic (PLEG): container finished" podID="d7968a24-caaf-4115-992d-3678c03e895a" containerID="8e54c54fae30e2e41368ec97f94608548b57555392e3ca341ed0d1620fa34a36" exitCode=0 Dec 08 17:44:11 crc kubenswrapper[5116]: I1208 17:44:11.856903 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-mvhzm" event={"ID":"d7968a24-caaf-4115-992d-3678c03e895a","Type":"ContainerDied","Data":"8e54c54fae30e2e41368ec97f94608548b57555392e3ca341ed0d1620fa34a36"} Dec 08 17:44:11 crc kubenswrapper[5116]: I1208 17:44:11.861434 5116 generic.go:358] "Generic (PLEG): container finished" podID="b15bd0e2-4143-436c-8dc2-0fc2e33cef62" containerID="f91d1c99f36744b7f717c0250aa3eee51eeee9fd1e7b770de2cfc6a929796e3b" exitCode=0 Dec 08 17:44:11 crc kubenswrapper[5116]: I1208 17:44:11.861519 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-nc4fk" event={"ID":"b15bd0e2-4143-436c-8dc2-0fc2e33cef62","Type":"ContainerDied","Data":"f91d1c99f36744b7f717c0250aa3eee51eeee9fd1e7b770de2cfc6a929796e3b"} Dec 08 17:44:11 crc kubenswrapper[5116]: I1208 17:44:11.903747 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-cdgp8" event={"ID":"5bc57600-20de-4fda-ba78-b05d745b08d6","Type":"ContainerStarted","Data":"f9f64e7e995c3bbb4a27a33cd239473cd09b9b79a4b47d1c2ea3f14ab93d1671"} Dec 08 17:44:11 crc kubenswrapper[5116]: I1208 17:44:11.905314 5116 
reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-kt94l\" (UID: \"83ea28a4-865d-4cee-aaa2-7adcccfba4a2\") " pod="openshift-image-registry/image-registry-66587d64c8-kt94l" Dec 08 17:44:11 crc kubenswrapper[5116]: E1208 17:44:11.906077 5116 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 17:44:12.406062423 +0000 UTC m=+122.203185657 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-kt94l" (UID: "83ea28a4-865d-4cee-aaa2-7adcccfba4a2") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 17:44:12 crc kubenswrapper[5116]: I1208 17:44:12.010728 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 08 17:44:12 crc kubenswrapper[5116]: I1208 17:44:12.011503 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-5ft89" event={"ID":"19151390-7d67-4ae9-8520-ae20b8eb46f8","Type":"ContainerStarted","Data":"e1dd0248ecb8bf3ec4a290c7a3b27cd9218014c48b656db61da9aaa5eb28f45d"} Dec 08 17:44:12 crc kubenswrapper[5116]: E1208 17:44:12.012361 5116 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 17:44:12.512331992 +0000 UTC m=+122.309455326 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 17:44:12 crc kubenswrapper[5116]: I1208 17:44:12.015510 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-kt94l\" (UID: \"83ea28a4-865d-4cee-aaa2-7adcccfba4a2\") " pod="openshift-image-registry/image-registry-66587d64c8-kt94l" Dec 08 17:44:12 crc kubenswrapper[5116]: E1208 17:44:12.016788 5116 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 17:44:12.516732147 +0000 UTC m=+122.313855381 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-kt94l" (UID: "83ea28a4-865d-4cee-aaa2-7adcccfba4a2") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 17:44:12 crc kubenswrapper[5116]: I1208 17:44:12.050422 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" event={"ID":"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141","Type":"ContainerStarted","Data":"7c53fb6ff2f64ebce9a2b969ccd0ed61950d45e77dc99ae11f7b9264d33a509c"} Dec 08 17:44:12 crc kubenswrapper[5116]: I1208 17:44:12.093148 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/revision-pruner-6-crc" event={"ID":"10b58457-3a8a-4659-a3ee-cdc62c7194ca","Type":"ContainerStarted","Data":"3574711a6af297458c09e68356787c8ce86039c482a9339dae35a635a4da4ab8"} Dec 08 17:44:12 crc kubenswrapper[5116]: I1208 17:44:12.249959 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 08 17:44:12 crc kubenswrapper[5116]: E1208 17:44:12.254560 5116 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 17:44:12.754531523 +0000 UTC m=+122.551654757 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 17:44:12 crc kubenswrapper[5116]: I1208 17:44:12.359296 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-kt94l\" (UID: \"83ea28a4-865d-4cee-aaa2-7adcccfba4a2\") " pod="openshift-image-registry/image-registry-66587d64c8-kt94l" Dec 08 17:44:12 crc kubenswrapper[5116]: E1208 17:44:12.362464 5116 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 17:44:12.862446306 +0000 UTC m=+122.659569540 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-kt94l" (UID: "83ea28a4-865d-4cee-aaa2-7adcccfba4a2") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 17:44:12 crc kubenswrapper[5116]: I1208 17:44:12.363272 5116 ???:1] "http: TLS handshake error from 192.168.126.11:48406: no serving certificate available for the kubelet" Dec 08 17:44:12 crc kubenswrapper[5116]: I1208 17:44:12.374756 5116 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-6s8th" Dec 08 17:44:12 crc kubenswrapper[5116]: I1208 17:44:12.463570 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 08 17:44:12 crc kubenswrapper[5116]: E1208 17:44:12.465633 5116 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 17:44:12.965602143 +0000 UTC m=+122.762725387 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 17:44:12 crc kubenswrapper[5116]: I1208 17:44:12.565856 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-kt94l\" (UID: \"83ea28a4-865d-4cee-aaa2-7adcccfba4a2\") " pod="openshift-image-registry/image-registry-66587d64c8-kt94l" Dec 08 17:44:12 crc kubenswrapper[5116]: E1208 17:44:12.566358 5116 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 17:44:13.066331228 +0000 UTC m=+122.863454462 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-kt94l" (UID: "83ea28a4-865d-4cee-aaa2-7adcccfba4a2") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 17:44:12 crc kubenswrapper[5116]: E1208 17:44:12.588771 5116 cadvisor_stats_provider.go:525] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd7968a24_caaf_4115_992d_3678c03e895a.slice/crio-conmon-8e54c54fae30e2e41368ec97f94608548b57555392e3ca341ed0d1620fa34a36.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd7968a24_caaf_4115_992d_3678c03e895a.slice/crio-8e54c54fae30e2e41368ec97f94608548b57555392e3ca341ed0d1620fa34a36.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5bc57600_20de_4fda_ba78_b05d745b08d6.slice/crio-894f71698a4e44732cabc082a8c1db687144093c55d14026e625e6006ab64b2e.scope\": RecentStats: unable to find data in memory cache]" Dec 08 17:44:12 crc kubenswrapper[5116]: I1208 17:44:12.637666 5116 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-hv2nc container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 08 17:44:12 crc kubenswrapper[5116]: [-]has-synced failed: reason withheld Dec 08 17:44:12 crc kubenswrapper[5116]: [+]process-running ok Dec 08 17:44:12 crc kubenswrapper[5116]: healthz check failed Dec 08 17:44:12 crc kubenswrapper[5116]: I1208 17:44:12.637746 5116 prober.go:120] "Probe failed" probeType="Startup" 
pod="openshift-ingress/router-default-68cf44c8b8-hv2nc" podUID="0deef197-8a46-46ea-a786-7e9518318396" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 08 17:44:12 crc kubenswrapper[5116]: I1208 17:44:12.668411 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 08 17:44:12 crc kubenswrapper[5116]: E1208 17:44:12.669010 5116 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 17:44:13.168981163 +0000 UTC m=+122.966104397 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 17:44:12 crc kubenswrapper[5116]: I1208 17:44:12.669151 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-kt94l\" (UID: \"83ea28a4-865d-4cee-aaa2-7adcccfba4a2\") " pod="openshift-image-registry/image-registry-66587d64c8-kt94l" Dec 08 17:44:12 crc kubenswrapper[5116]: E1208 17:44:12.669629 5116 nestedpendingoperations.go:348] Operation 
for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 17:44:13.16962045 +0000 UTC m=+122.966743694 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-kt94l" (UID: "83ea28a4-865d-4cee-aaa2-7adcccfba4a2") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 17:44:12 crc kubenswrapper[5116]: I1208 17:44:12.770680 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 08 17:44:12 crc kubenswrapper[5116]: E1208 17:44:12.771090 5116 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 17:44:13.271073504 +0000 UTC m=+123.068196738 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 17:44:12 crc kubenswrapper[5116]: I1208 17:44:12.873013 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-kt94l\" (UID: \"83ea28a4-865d-4cee-aaa2-7adcccfba4a2\") " pod="openshift-image-registry/image-registry-66587d64c8-kt94l" Dec 08 17:44:12 crc kubenswrapper[5116]: E1208 17:44:12.873644 5116 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 17:44:13.373615207 +0000 UTC m=+123.170738441 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-kt94l" (UID: "83ea28a4-865d-4cee-aaa2-7adcccfba4a2") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 17:44:12 crc kubenswrapper[5116]: I1208 17:44:12.975987 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 08 17:44:12 crc kubenswrapper[5116]: E1208 17:44:12.976520 5116 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 17:44:13.476490827 +0000 UTC m=+123.273614061 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 08 17:44:13 crc kubenswrapper[5116]: I1208 17:44:13.078810 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-kt94l\" (UID: \"83ea28a4-865d-4cee-aaa2-7adcccfba4a2\") " pod="openshift-image-registry/image-registry-66587d64c8-kt94l"
Dec 08 17:44:13 crc kubenswrapper[5116]: E1208 17:44:13.079359 5116 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 17:44:13.579339468 +0000 UTC m=+123.376462692 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-kt94l" (UID: "83ea28a4-865d-4cee-aaa2-7adcccfba4a2") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 08 17:44:13 crc kubenswrapper[5116]: I1208 17:44:13.140016 5116 generic.go:358] "Generic (PLEG): container finished" podID="7d3964d8-860a-448a-ba5c-309e5343333e" containerID="70a963949e4a2721cd854d96d8a57b2a91decac06bd83549306df12fa3372a32" exitCode=0
Dec 08 17:44:13 crc kubenswrapper[5116]: I1208 17:44:13.140140 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-f2qkz" event={"ID":"7d3964d8-860a-448a-ba5c-309e5343333e","Type":"ContainerDied","Data":"70a963949e4a2721cd854d96d8a57b2a91decac06bd83549306df12fa3372a32"}
Dec 08 17:44:13 crc kubenswrapper[5116]: I1208 17:44:13.143210 5116 generic.go:358] "Generic (PLEG): container finished" podID="088af58f-5679-42e6-9595-945ee162f862" containerID="a55513c692ad2392716a102fd47614098d63521b33abf670fdf908ccf3f4589e" exitCode=0
Dec 08 17:44:13 crc kubenswrapper[5116]: I1208 17:44:13.143297 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7j2rd" event={"ID":"088af58f-5679-42e6-9595-945ee162f862","Type":"ContainerDied","Data":"a55513c692ad2392716a102fd47614098d63521b33abf670fdf908ccf3f4589e"}
Dec 08 17:44:13 crc kubenswrapper[5116]: I1208 17:44:13.157536 5116 generic.go:358] "Generic (PLEG): container finished" podID="ab873de1-8a57-4411-a552-1567537bdc67" containerID="8b2d943b49c802cc7050d60b7b2e54143ff91f91bd0a0ab0698a920352476dbe" exitCode=0
Dec 08 17:44:13 crc kubenswrapper[5116]: I1208 17:44:13.157725 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4pqqp" event={"ID":"ab873de1-8a57-4411-a552-1567537bdc67","Type":"ContainerDied","Data":"8b2d943b49c802cc7050d60b7b2e54143ff91f91bd0a0ab0698a920352476dbe"}
Dec 08 17:44:13 crc kubenswrapper[5116]: I1208 17:44:13.166075 5116 generic.go:358] "Generic (PLEG): container finished" podID="0687a333-2a42-4237-9673-e0210c45dc22" containerID="6e6e759ebee28d8375df5c08ea9996eaeda9345621733989d197eda3ee7bb30a" exitCode=0
Dec 08 17:44:13 crc kubenswrapper[5116]: I1208 17:44:13.166193 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7rqg8" event={"ID":"0687a333-2a42-4237-9673-e0210c45dc22","Type":"ContainerDied","Data":"6e6e759ebee28d8375df5c08ea9996eaeda9345621733989d197eda3ee7bb30a"}
Dec 08 17:44:13 crc kubenswrapper[5116]: I1208 17:44:13.171120 5116 generic.go:358] "Generic (PLEG): container finished" podID="5bc57600-20de-4fda-ba78-b05d745b08d6" containerID="894f71698a4e44732cabc082a8c1db687144093c55d14026e625e6006ab64b2e" exitCode=0
Dec 08 17:44:13 crc kubenswrapper[5116]: I1208 17:44:13.171207 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-cdgp8" event={"ID":"5bc57600-20de-4fda-ba78-b05d745b08d6","Type":"ContainerDied","Data":"894f71698a4e44732cabc082a8c1db687144093c55d14026e625e6006ab64b2e"}
Dec 08 17:44:13 crc kubenswrapper[5116]: I1208 17:44:13.173231 5116 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-check-endpoints/3.log"
Dec 08 17:44:13 crc kubenswrapper[5116]: I1208 17:44:13.177787 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"3a14caf222afb62aaabdc47808b6f944","Type":"ContainerStarted","Data":"1ff0466282edf9118e55809fff0e52c92acddefc61f507c2c5b4d9268acd5090"}
Dec 08 17:44:13 crc kubenswrapper[5116]: I1208 17:44:13.178741 5116 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-kube-apiserver/kube-apiserver-crc"
Dec 08 17:44:13 crc kubenswrapper[5116]: I1208 17:44:13.180495 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Dec 08 17:44:13 crc kubenswrapper[5116]: E1208 17:44:13.180612 5116 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 17:44:13.680591786 +0000 UTC m=+123.477715030 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 08 17:44:13 crc kubenswrapper[5116]: I1208 17:44:13.180905 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-kt94l\" (UID: \"83ea28a4-865d-4cee-aaa2-7adcccfba4a2\") " pod="openshift-image-registry/image-registry-66587d64c8-kt94l"
Dec 08 17:44:13 crc kubenswrapper[5116]: E1208 17:44:13.181227 5116 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 17:44:13.681217162 +0000 UTC m=+123.478340396 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-kt94l" (UID: "83ea28a4-865d-4cee-aaa2-7adcccfba4a2") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 08 17:44:13 crc kubenswrapper[5116]: I1208 17:44:13.185795 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-5ft89" event={"ID":"19151390-7d67-4ae9-8520-ae20b8eb46f8","Type":"ContainerStarted","Data":"1c9ecae825b8065704dbf78613a9f8e4ca18773d88bd471b383e8595f88a4864"}
Dec 08 17:44:13 crc kubenswrapper[5116]: I1208 17:44:13.192120 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/revision-pruner-6-crc" event={"ID":"10b58457-3a8a-4659-a3ee-cdc62c7194ca","Type":"ContainerStarted","Data":"e1d1d75f7c5e62a6b21bbd2897ddb95a22cc451df9f166aa696119864b0a47ad"}
Dec 08 17:44:13 crc kubenswrapper[5116]: I1208 17:44:13.279500 5116 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/network-metrics-daemon-5ft89" podStartSLOduration=102.279424531 podStartE2EDuration="1m42.279424531s" podCreationTimestamp="2025-12-08 17:42:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-08 17:44:13.245126058 +0000 UTC m=+123.042249292" watchObservedRunningTime="2025-12-08 17:44:13.279424531 +0000 UTC m=+123.076547765"
Dec 08 17:44:13 crc kubenswrapper[5116]: I1208 17:44:13.283855 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Dec 08 17:44:13 crc kubenswrapper[5116]: E1208 17:44:13.285545 5116 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 17:44:13.785526511 +0000 UTC m=+123.582649745 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 08 17:44:13 crc kubenswrapper[5116]: I1208 17:44:13.290949 5116 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=41.290928262 podStartE2EDuration="41.290928262s" podCreationTimestamp="2025-12-08 17:43:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-08 17:44:13.276704041 +0000 UTC m=+123.073827275" watchObservedRunningTime="2025-12-08 17:44:13.290928262 +0000 UTC m=+123.088051496"
Dec 08 17:44:13 crc kubenswrapper[5116]: I1208 17:44:13.314189 5116 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler/revision-pruner-6-crc" podStartSLOduration=8.314167307 podStartE2EDuration="8.314167307s" podCreationTimestamp="2025-12-08 17:44:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-08 17:44:13.306636671 +0000 UTC m=+123.103759905" watchObservedRunningTime="2025-12-08 17:44:13.314167307 +0000 UTC m=+123.111290541"
Dec 08 17:44:13 crc kubenswrapper[5116]: I1208 17:44:13.355157 5116 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-mv9qd"
Dec 08 17:44:13 crc kubenswrapper[5116]: I1208 17:44:13.386086 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-kt94l\" (UID: \"83ea28a4-865d-4cee-aaa2-7adcccfba4a2\") " pod="openshift-image-registry/image-registry-66587d64c8-kt94l"
Dec 08 17:44:13 crc kubenswrapper[5116]: E1208 17:44:13.386521 5116 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 17:44:13.886506372 +0000 UTC m=+123.683629606 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-kt94l" (UID: "83ea28a4-865d-4cee-aaa2-7adcccfba4a2") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 08 17:44:13 crc kubenswrapper[5116]: I1208 17:44:13.507612 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Dec 08 17:44:13 crc kubenswrapper[5116]: E1208 17:44:13.507911 5116 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 17:44:14.007889176 +0000 UTC m=+123.805012410 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 08 17:44:13 crc kubenswrapper[5116]: I1208 17:44:13.579535 5116 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-zm56h"
Dec 08 17:44:13 crc kubenswrapper[5116]: I1208 17:44:13.613505 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-kt94l\" (UID: \"83ea28a4-865d-4cee-aaa2-7adcccfba4a2\") " pod="openshift-image-registry/image-registry-66587d64c8-kt94l"
Dec 08 17:44:13 crc kubenswrapper[5116]: E1208 17:44:13.614027 5116 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 17:44:14.11400507 +0000 UTC m=+123.911128304 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-kt94l" (UID: "83ea28a4-865d-4cee-aaa2-7adcccfba4a2") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 08 17:44:13 crc kubenswrapper[5116]: I1208 17:44:13.627556 5116 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-ingress/router-default-68cf44c8b8-hv2nc"
Dec 08 17:44:13 crc kubenswrapper[5116]: I1208 17:44:13.633770 5116 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ingress/router-default-68cf44c8b8-hv2nc"
Dec 08 17:44:13 crc kubenswrapper[5116]: I1208 17:44:13.723205 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Dec 08 17:44:13 crc kubenswrapper[5116]: E1208 17:44:13.726166 5116 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 17:44:14.226133403 +0000 UTC m=+124.023256637 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 08 17:44:13 crc kubenswrapper[5116]: I1208 17:44:13.827751 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-kt94l\" (UID: \"83ea28a4-865d-4cee-aaa2-7adcccfba4a2\") " pod="openshift-image-registry/image-registry-66587d64c8-kt94l"
Dec 08 17:44:13 crc kubenswrapper[5116]: E1208 17:44:13.828399 5116 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 17:44:14.328378328 +0000 UTC m=+124.125501562 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-kt94l" (UID: "83ea28a4-865d-4cee-aaa2-7adcccfba4a2") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 08 17:44:13 crc kubenswrapper[5116]: I1208 17:44:13.948383 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Dec 08 17:44:13 crc kubenswrapper[5116]: E1208 17:44:13.948510 5116 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 17:44:14.448475517 +0000 UTC m=+124.245598751 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 08 17:44:13 crc kubenswrapper[5116]: I1208 17:44:13.949054 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-kt94l\" (UID: \"83ea28a4-865d-4cee-aaa2-7adcccfba4a2\") " pod="openshift-image-registry/image-registry-66587d64c8-kt94l"
Dec 08 17:44:13 crc kubenswrapper[5116]: E1208 17:44:13.957723 5116 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 17:44:14.457692757 +0000 UTC m=+124.254815991 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-kt94l" (UID: "83ea28a4-865d-4cee-aaa2-7adcccfba4a2") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 08 17:44:14 crc kubenswrapper[5116]: I1208 17:44:14.050381 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Dec 08 17:44:14 crc kubenswrapper[5116]: E1208 17:44:14.050879 5116 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 17:44:14.550858504 +0000 UTC m=+124.347981738 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 08 17:44:14 crc kubenswrapper[5116]: I1208 17:44:14.152574 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-kt94l\" (UID: \"83ea28a4-865d-4cee-aaa2-7adcccfba4a2\") " pod="openshift-image-registry/image-registry-66587d64c8-kt94l"
Dec 08 17:44:14 crc kubenswrapper[5116]: E1208 17:44:14.152899 5116 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 17:44:14.652886434 +0000 UTC m=+124.450009668 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-kt94l" (UID: "83ea28a4-865d-4cee-aaa2-7adcccfba4a2") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 08 17:44:14 crc kubenswrapper[5116]: I1208 17:44:14.243258 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-11-crc" event={"ID":"c8a03c0f-ca27-46b5-91b5-851b4e8526bb","Type":"ContainerStarted","Data":"c450bc5ab22e6512f236eb6a574bc14ed96b71638052cc02a01e790b6009e128"}
Dec 08 17:44:14 crc kubenswrapper[5116]: I1208 17:44:14.255998 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Dec 08 17:44:14 crc kubenswrapper[5116]: E1208 17:44:14.257318 5116 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 17:44:14.757293065 +0000 UTC m=+124.554416299 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 08 17:44:14 crc kubenswrapper[5116]: I1208 17:44:14.359687 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-kt94l\" (UID: \"83ea28a4-865d-4cee-aaa2-7adcccfba4a2\") " pod="openshift-image-registry/image-registry-66587d64c8-kt94l"
Dec 08 17:44:14 crc kubenswrapper[5116]: E1208 17:44:14.360789 5116 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 17:44:14.860755991 +0000 UTC m=+124.657879225 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-kt94l" (UID: "83ea28a4-865d-4cee-aaa2-7adcccfba4a2") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 08 17:44:14 crc kubenswrapper[5116]: I1208 17:44:14.462352 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Dec 08 17:44:14 crc kubenswrapper[5116]: E1208 17:44:14.462680 5116 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 17:44:14.962662417 +0000 UTC m=+124.759785651 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 08 17:44:14 crc kubenswrapper[5116]: I1208 17:44:14.464061 5116 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/revision-pruner-11-crc" podStartSLOduration=5.464034193 podStartE2EDuration="5.464034193s" podCreationTimestamp="2025-12-08 17:44:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-08 17:44:14.458867788 +0000 UTC m=+124.255991032" watchObservedRunningTime="2025-12-08 17:44:14.464034193 +0000 UTC m=+124.261157427"
Dec 08 17:44:14 crc kubenswrapper[5116]: I1208 17:44:14.564258 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-kt94l\" (UID: \"83ea28a4-865d-4cee-aaa2-7adcccfba4a2\") " pod="openshift-image-registry/image-registry-66587d64c8-kt94l"
Dec 08 17:44:14 crc kubenswrapper[5116]: E1208 17:44:14.564606 5116 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 17:44:15.064590812 +0000 UTC m=+124.861714046 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-kt94l" (UID: "83ea28a4-865d-4cee-aaa2-7adcccfba4a2") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 08 17:44:14 crc kubenswrapper[5116]: I1208 17:44:14.665486 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Dec 08 17:44:14 crc kubenswrapper[5116]: E1208 17:44:14.665743 5116 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 17:44:15.165701068 +0000 UTC m=+124.962824302 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 08 17:44:14 crc kubenswrapper[5116]: I1208 17:44:14.665864 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-kt94l\" (UID: \"83ea28a4-865d-4cee-aaa2-7adcccfba4a2\") " pod="openshift-image-registry/image-registry-66587d64c8-kt94l"
Dec 08 17:44:14 crc kubenswrapper[5116]: E1208 17:44:14.666234 5116 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 17:44:15.166226301 +0000 UTC m=+124.963349535 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-kt94l" (UID: "83ea28a4-865d-4cee-aaa2-7adcccfba4a2") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 08 17:44:14 crc kubenswrapper[5116]: I1208 17:44:14.767853 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Dec 08 17:44:14 crc kubenswrapper[5116]: E1208 17:44:14.768132 5116 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 17:44:15.268116377 +0000 UTC m=+125.065239611 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 08 17:44:14 crc kubenswrapper[5116]: I1208 17:44:14.869872 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-kt94l\" (UID: \"83ea28a4-865d-4cee-aaa2-7adcccfba4a2\") " pod="openshift-image-registry/image-registry-66587d64c8-kt94l"
Dec 08 17:44:14 crc kubenswrapper[5116]: E1208 17:44:14.870423 5116 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 17:44:15.370403272 +0000 UTC m=+125.167526506 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-kt94l" (UID: "83ea28a4-865d-4cee-aaa2-7adcccfba4a2") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 08 17:44:15 crc kubenswrapper[5116]: I1208 17:44:14.998679 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Dec 08 17:44:15 crc kubenswrapper[5116]: E1208 17:44:15.094813 5116 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 17:44:15.594757039 +0000 UTC m=+125.391880293 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 08 17:44:15 crc kubenswrapper[5116]: I1208 17:44:15.199891 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-kt94l\" (UID: \"83ea28a4-865d-4cee-aaa2-7adcccfba4a2\") " pod="openshift-image-registry/image-registry-66587d64c8-kt94l"
Dec 08 17:44:15 crc kubenswrapper[5116]: E1208 17:44:15.200623 5116 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 17:44:15.700602067 +0000 UTC m=+125.497725301 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-kt94l" (UID: "83ea28a4-865d-4cee-aaa2-7adcccfba4a2") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 08 17:44:15 crc kubenswrapper[5116]: I1208 17:44:15.262937 5116 generic.go:358] "Generic (PLEG): container finished" podID="c8a03c0f-ca27-46b5-91b5-851b4e8526bb" containerID="c450bc5ab22e6512f236eb6a574bc14ed96b71638052cc02a01e790b6009e128" exitCode=0
Dec 08 17:44:15 crc kubenswrapper[5116]: I1208 17:44:15.263695 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-11-crc" event={"ID":"c8a03c0f-ca27-46b5-91b5-851b4e8526bb","Type":"ContainerDied","Data":"c450bc5ab22e6512f236eb6a574bc14ed96b71638052cc02a01e790b6009e128"}
Dec 08 17:44:15 crc kubenswrapper[5116]: I1208 17:44:15.359820 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Dec 08 17:44:15 crc kubenswrapper[5116]: E1208 17:44:15.360137 5116 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 17:44:15.860100504 +0000 UTC m=+125.657223748 (durationBeforeRetry 500ms).
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 17:44:15 crc kubenswrapper[5116]: I1208 17:44:15.360379 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-kt94l\" (UID: \"83ea28a4-865d-4cee-aaa2-7adcccfba4a2\") " pod="openshift-image-registry/image-registry-66587d64c8-kt94l" Dec 08 17:44:15 crc kubenswrapper[5116]: E1208 17:44:15.361009 5116 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 17:44:15.860983986 +0000 UTC m=+125.658107220 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-kt94l" (UID: "83ea28a4-865d-4cee-aaa2-7adcccfba4a2") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 17:44:15 crc kubenswrapper[5116]: I1208 17:44:15.461706 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 08 17:44:15 crc kubenswrapper[5116]: E1208 17:44:15.462170 5116 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 17:44:15.962152082 +0000 UTC m=+125.759275306 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 17:44:15 crc kubenswrapper[5116]: I1208 17:44:15.507475 5116 plugin_watcher.go:194] "Adding socket path or updating timestamp to desired state cache" path="/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock" Dec 08 17:44:15 crc kubenswrapper[5116]: I1208 17:44:15.608807 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-kt94l\" (UID: \"83ea28a4-865d-4cee-aaa2-7adcccfba4a2\") " pod="openshift-image-registry/image-registry-66587d64c8-kt94l" Dec 08 17:44:15 crc kubenswrapper[5116]: E1208 17:44:15.609293 5116 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 17:44:16.109278647 +0000 UTC m=+125.906401881 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-kt94l" (UID: "83ea28a4-865d-4cee-aaa2-7adcccfba4a2") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 17:44:15 crc kubenswrapper[5116]: I1208 17:44:15.710092 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 08 17:44:15 crc kubenswrapper[5116]: E1208 17:44:15.710294 5116 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 17:44:16.210276089 +0000 UTC m=+126.007399323 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 17:44:15 crc kubenswrapper[5116]: I1208 17:44:15.710375 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-kt94l\" (UID: \"83ea28a4-865d-4cee-aaa2-7adcccfba4a2\") " pod="openshift-image-registry/image-registry-66587d64c8-kt94l" Dec 08 17:44:15 crc kubenswrapper[5116]: E1208 17:44:15.710719 5116 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 17:44:16.21070737 +0000 UTC m=+126.007830604 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-kt94l" (UID: "83ea28a4-865d-4cee-aaa2-7adcccfba4a2") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 17:44:15 crc kubenswrapper[5116]: I1208 17:44:15.811569 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 08 17:44:15 crc kubenswrapper[5116]: E1208 17:44:15.812046 5116 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 17:44:16.31202705 +0000 UTC m=+126.109150284 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 17:44:15 crc kubenswrapper[5116]: I1208 17:44:15.962430 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-kt94l\" (UID: \"83ea28a4-865d-4cee-aaa2-7adcccfba4a2\") " pod="openshift-image-registry/image-registry-66587d64c8-kt94l" Dec 08 17:44:15 crc kubenswrapper[5116]: E1208 17:44:15.963080 5116 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 17:44:16.463054666 +0000 UTC m=+126.260177900 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-kt94l" (UID: "83ea28a4-865d-4cee-aaa2-7adcccfba4a2") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 17:44:16 crc kubenswrapper[5116]: I1208 17:44:16.063863 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 08 17:44:16 crc kubenswrapper[5116]: E1208 17:44:16.064373 5116 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 17:44:16.564356007 +0000 UTC m=+126.361479241 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 17:44:16 crc kubenswrapper[5116]: I1208 17:44:16.165123 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-kt94l\" (UID: \"83ea28a4-865d-4cee-aaa2-7adcccfba4a2\") " pod="openshift-image-registry/image-registry-66587d64c8-kt94l" Dec 08 17:44:16 crc kubenswrapper[5116]: E1208 17:44:16.165579 5116 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 17:44:16.665560424 +0000 UTC m=+126.462683658 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-kt94l" (UID: "83ea28a4-865d-4cee-aaa2-7adcccfba4a2") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 17:44:16 crc kubenswrapper[5116]: I1208 17:44:16.266060 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 08 17:44:16 crc kubenswrapper[5116]: E1208 17:44:16.266228 5116 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 17:44:16.766203607 +0000 UTC m=+126.563326841 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 17:44:16 crc kubenswrapper[5116]: I1208 17:44:16.266717 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-kt94l\" (UID: \"83ea28a4-865d-4cee-aaa2-7adcccfba4a2\") " pod="openshift-image-registry/image-registry-66587d64c8-kt94l" Dec 08 17:44:16 crc kubenswrapper[5116]: E1208 17:44:16.267005 5116 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 17:44:16.766995757 +0000 UTC m=+126.564118981 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-kt94l" (UID: "83ea28a4-865d-4cee-aaa2-7adcccfba4a2") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 17:44:16 crc kubenswrapper[5116]: I1208 17:44:16.286477 5116 generic.go:358] "Generic (PLEG): container finished" podID="10b58457-3a8a-4659-a3ee-cdc62c7194ca" containerID="e1d1d75f7c5e62a6b21bbd2897ddb95a22cc451df9f166aa696119864b0a47ad" exitCode=0 Dec 08 17:44:16 crc kubenswrapper[5116]: I1208 17:44:16.286680 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/revision-pruner-6-crc" event={"ID":"10b58457-3a8a-4659-a3ee-cdc62c7194ca","Type":"ContainerDied","Data":"e1d1d75f7c5e62a6b21bbd2897ddb95a22cc451df9f166aa696119864b0a47ad"} Dec 08 17:44:16 crc kubenswrapper[5116]: I1208 17:44:16.293954 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-kdrb8" event={"ID":"56b4bc0e-8c13-439e-9293-f70b35418ce0","Type":"ContainerStarted","Data":"622204e355bbbb08313c36ebcbd473a7364e052120403ebf3552553b8ea00385"} Dec 08 17:44:16 crc kubenswrapper[5116]: I1208 17:44:16.294009 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-kdrb8" event={"ID":"56b4bc0e-8c13-439e-9293-f70b35418ce0","Type":"ContainerStarted","Data":"f2548cb27716ecd3f0d23f86d6dbcca762850cf8a6acf9fd65302b6226dbdea7"} Dec 08 17:44:16 crc kubenswrapper[5116]: I1208 17:44:16.368497 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod 
\"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 08 17:44:16 crc kubenswrapper[5116]: E1208 17:44:16.369086 5116 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 17:44:16.869066697 +0000 UTC m=+126.666189931 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 17:44:16 crc kubenswrapper[5116]: I1208 17:44:16.433259 5116 reconciler.go:161] "OperationExecutor.RegisterPlugin started" plugin={"SocketPath":"/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock","Timestamp":"2025-12-08T17:44:15.507511125Z","UUID":"4907a085-36bb-4499-9e3f-bd2cf16bd73f","Handler":null,"Name":"","Endpoint":""} Dec 08 17:44:16 crc kubenswrapper[5116]: I1208 17:44:16.439412 5116 csi_plugin.go:106] kubernetes.io/csi: Trying to validate a new CSI Driver with name: kubevirt.io.hostpath-provisioner endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock versions: 1.0.0 Dec 08 17:44:16 crc kubenswrapper[5116]: I1208 17:44:16.439481 5116 csi_plugin.go:119] kubernetes.io/csi: Register new plugin with name: kubevirt.io.hostpath-provisioner at endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock Dec 08 17:44:16 crc kubenswrapper[5116]: I1208 17:44:16.474350 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-kt94l\" (UID: \"83ea28a4-865d-4cee-aaa2-7adcccfba4a2\") " pod="openshift-image-registry/image-registry-66587d64c8-kt94l" Dec 08 17:44:16 crc kubenswrapper[5116]: I1208 17:44:16.480707 5116 csi_attacher.go:373] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Dec 08 17:44:16 crc kubenswrapper[5116]: I1208 17:44:16.480752 5116 operation_generator.go:557] "MountVolume.MountDevice succeeded for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-kt94l\" (UID: \"83ea28a4-865d-4cee-aaa2-7adcccfba4a2\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/b1264ac67579ad07e7e9003054d44fe40dd55285a4b2f7dc74e48be1aee0868a/globalmount\"" pod="openshift-image-registry/image-registry-66587d64c8-kt94l" Dec 08 17:44:16 crc kubenswrapper[5116]: I1208 17:44:16.596537 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-kt94l\" (UID: \"83ea28a4-865d-4cee-aaa2-7adcccfba4a2\") " pod="openshift-image-registry/image-registry-66587d64c8-kt94l" Dec 08 17:44:16 crc kubenswrapper[5116]: E1208 17:44:16.674692 5116 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="7478cafb951735df5624cf4c67a2f9fe7cecc5f7456b3ba6f18b26ee5f91e985" cmd=["/bin/bash","-c","test -f /ready/ready"] Dec 08 17:44:16 crc kubenswrapper[5116]: E1208 17:44:16.676347 5116 log.go:32] "ExecSync cmd from 
runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="7478cafb951735df5624cf4c67a2f9fe7cecc5f7456b3ba6f18b26ee5f91e985" cmd=["/bin/bash","-c","test -f /ready/ready"] Dec 08 17:44:16 crc kubenswrapper[5116]: E1208 17:44:16.677512 5116 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="7478cafb951735df5624cf4c67a2f9fe7cecc5f7456b3ba6f18b26ee5f91e985" cmd=["/bin/bash","-c","test -f /ready/ready"] Dec 08 17:44:16 crc kubenswrapper[5116]: E1208 17:44:16.677556 5116 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openshift-multus/cni-sysctl-allowlist-ds-qsgps" podUID="d73c8661-d51c-4d6e-a981-e186a3fc1964" containerName="kube-multus-additional-cni-plugins" probeResult="unknown" Dec 08 17:44:16 crc kubenswrapper[5116]: I1208 17:44:16.680530 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 08 17:44:16 crc kubenswrapper[5116]: I1208 17:44:16.698685 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (OuterVolumeSpecName: "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af"). InnerVolumeSpecName "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2". 
PluginName "kubernetes.io/csi", VolumeGIDValue "" Dec 08 17:44:16 crc kubenswrapper[5116]: I1208 17:44:16.710750 5116 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9e9b5059-1b3e-4067-a63d-2952cbe863af" path="/var/lib/kubelet/pods/9e9b5059-1b3e-4067-a63d-2952cbe863af/volumes" Dec 08 17:44:16 crc kubenswrapper[5116]: I1208 17:44:16.722102 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"registry-dockercfg-6w67b\"" Dec 08 17:44:16 crc kubenswrapper[5116]: I1208 17:44:16.730340 5116 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-66587d64c8-kt94l" Dec 08 17:44:17 crc kubenswrapper[5116]: I1208 17:44:17.012943 5116 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-11-crc" Dec 08 17:44:17 crc kubenswrapper[5116]: I1208 17:44:17.109332 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/c8a03c0f-ca27-46b5-91b5-851b4e8526bb-kubelet-dir\") pod \"c8a03c0f-ca27-46b5-91b5-851b4e8526bb\" (UID: \"c8a03c0f-ca27-46b5-91b5-851b4e8526bb\") " Dec 08 17:44:17 crc kubenswrapper[5116]: I1208 17:44:17.109568 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c8a03c0f-ca27-46b5-91b5-851b4e8526bb-kube-api-access\") pod \"c8a03c0f-ca27-46b5-91b5-851b4e8526bb\" (UID: \"c8a03c0f-ca27-46b5-91b5-851b4e8526bb\") " Dec 08 17:44:17 crc kubenswrapper[5116]: I1208 17:44:17.121393 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c8a03c0f-ca27-46b5-91b5-851b4e8526bb-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "c8a03c0f-ca27-46b5-91b5-851b4e8526bb" (UID: "c8a03c0f-ca27-46b5-91b5-851b4e8526bb"). InnerVolumeSpecName "kubelet-dir". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Dec 08 17:44:17 crc kubenswrapper[5116]: I1208 17:44:17.139480 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c8a03c0f-ca27-46b5-91b5-851b4e8526bb-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "c8a03c0f-ca27-46b5-91b5-851b4e8526bb" (UID: "c8a03c0f-ca27-46b5-91b5-851b4e8526bb"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 08 17:44:17 crc kubenswrapper[5116]: I1208 17:44:17.159923 5116 reconciler_common.go:299] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c8a03c0f-ca27-46b5-91b5-851b4e8526bb-kube-api-access\") on node \"crc\" DevicePath \"\"" Dec 08 17:44:17 crc kubenswrapper[5116]: I1208 17:44:17.160011 5116 reconciler_common.go:299] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/c8a03c0f-ca27-46b5-91b5-851b4e8526bb-kubelet-dir\") on node \"crc\" DevicePath \"\"" Dec 08 17:44:17 crc kubenswrapper[5116]: I1208 17:44:17.283218 5116 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-65b6cccf98-99grc"] Dec 08 17:44:17 crc kubenswrapper[5116]: I1208 17:44:17.290851 5116 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-65b6cccf98-99grc" podUID="4fd57cd5-0d45-4b70-87a2-0c73d9c9b61e" containerName="controller-manager" containerID="cri-o://c65524662fef7c7462a7d3c1a59f7c5258906d845739d0cf58822c812f3cdd99" gracePeriod=30 Dec 08 17:44:17 crc kubenswrapper[5116]: I1208 17:44:17.297600 5116 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-776cdc94d6-mv9qd"] Dec 08 17:44:17 crc kubenswrapper[5116]: I1208 17:44:17.297909 5116 kuberuntime_container.go:858] "Killing container with a grace period" 
pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-mv9qd" podUID="a4183c4d-f709-4d5b-a9a4-180284f37cc8" containerName="route-controller-manager" containerID="cri-o://6049bd486993d0a63a6917836bcecf25f0bb36e43981d86a1dc0d3f5c88c834e" gracePeriod=30 Dec 08 17:44:17 crc kubenswrapper[5116]: I1208 17:44:17.414911 5116 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-11-crc" Dec 08 17:44:17 crc kubenswrapper[5116]: I1208 17:44:17.417011 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-11-crc" event={"ID":"c8a03c0f-ca27-46b5-91b5-851b4e8526bb","Type":"ContainerDied","Data":"4ccf4716e499a1beb289b722b593120c88edbe3a1ae14c47460b2688caaf6251"} Dec 08 17:44:17 crc kubenswrapper[5116]: I1208 17:44:17.417060 5116 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4ccf4716e499a1beb289b722b593120c88edbe3a1ae14c47460b2688caaf6251" Dec 08 17:44:18 crc kubenswrapper[5116]: I1208 17:44:18.067467 5116 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/revision-pruner-6-crc"
Dec 08 17:44:18 crc kubenswrapper[5116]: I1208 17:44:18.068006 5116 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66587d64c8-kt94l"]
Dec 08 17:44:18 crc kubenswrapper[5116]: I1208 17:44:18.242816 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/10b58457-3a8a-4659-a3ee-cdc62c7194ca-kubelet-dir\") pod \"10b58457-3a8a-4659-a3ee-cdc62c7194ca\" (UID: \"10b58457-3a8a-4659-a3ee-cdc62c7194ca\") "
Dec 08 17:44:18 crc kubenswrapper[5116]: I1208 17:44:18.243108 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/10b58457-3a8a-4659-a3ee-cdc62c7194ca-kube-api-access\") pod \"10b58457-3a8a-4659-a3ee-cdc62c7194ca\" (UID: \"10b58457-3a8a-4659-a3ee-cdc62c7194ca\") "
Dec 08 17:44:18 crc kubenswrapper[5116]: I1208 17:44:18.242901 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/10b58457-3a8a-4659-a3ee-cdc62c7194ca-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "10b58457-3a8a-4659-a3ee-cdc62c7194ca" (UID: "10b58457-3a8a-4659-a3ee-cdc62c7194ca"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Dec 08 17:44:18 crc kubenswrapper[5116]: I1208 17:44:18.320834 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/10b58457-3a8a-4659-a3ee-cdc62c7194ca-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "10b58457-3a8a-4659-a3ee-cdc62c7194ca" (UID: "10b58457-3a8a-4659-a3ee-cdc62c7194ca"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Dec 08 17:44:18 crc kubenswrapper[5116]: I1208 17:44:18.344388 5116 reconciler_common.go:299] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/10b58457-3a8a-4659-a3ee-cdc62c7194ca-kubelet-dir\") on node \"crc\" DevicePath \"\""
Dec 08 17:44:18 crc kubenswrapper[5116]: I1208 17:44:18.344430 5116 reconciler_common.go:299] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/10b58457-3a8a-4659-a3ee-cdc62c7194ca-kube-api-access\") on node \"crc\" DevicePath \"\""
Dec 08 17:44:18 crc kubenswrapper[5116]: I1208 17:44:18.346561 5116 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-mv9qd"
Dec 08 17:44:18 crc kubenswrapper[5116]: I1208 17:44:18.388710 5116 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-58fcd699cc-88894"]
Dec 08 17:44:18 crc kubenswrapper[5116]: I1208 17:44:18.389657 5116 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="10b58457-3a8a-4659-a3ee-cdc62c7194ca" containerName="pruner"
Dec 08 17:44:18 crc kubenswrapper[5116]: I1208 17:44:18.389728 5116 state_mem.go:107] "Deleted CPUSet assignment" podUID="10b58457-3a8a-4659-a3ee-cdc62c7194ca" containerName="pruner"
Dec 08 17:44:18 crc kubenswrapper[5116]: I1208 17:44:18.389755 5116 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="c8a03c0f-ca27-46b5-91b5-851b4e8526bb" containerName="pruner"
Dec 08 17:44:18 crc kubenswrapper[5116]: I1208 17:44:18.389763 5116 state_mem.go:107] "Deleted CPUSet assignment" podUID="c8a03c0f-ca27-46b5-91b5-851b4e8526bb" containerName="pruner"
Dec 08 17:44:18 crc kubenswrapper[5116]: I1208 17:44:18.389812 5116 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="dc8a8f38-928e-445a-b2d0-56c91cff7483" containerName="collect-profiles"
Dec 08 17:44:18 crc kubenswrapper[5116]: I1208 17:44:18.389821 5116 state_mem.go:107] "Deleted CPUSet assignment" podUID="dc8a8f38-928e-445a-b2d0-56c91cff7483" containerName="collect-profiles"
Dec 08 17:44:18 crc kubenswrapper[5116]: I1208 17:44:18.389838 5116 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="a4183c4d-f709-4d5b-a9a4-180284f37cc8" containerName="route-controller-manager"
Dec 08 17:44:18 crc kubenswrapper[5116]: I1208 17:44:18.389883 5116 state_mem.go:107] "Deleted CPUSet assignment" podUID="a4183c4d-f709-4d5b-a9a4-180284f37cc8" containerName="route-controller-manager"
Dec 08 17:44:18 crc kubenswrapper[5116]: I1208 17:44:18.390014 5116 memory_manager.go:356] "RemoveStaleState removing state" podUID="a4183c4d-f709-4d5b-a9a4-180284f37cc8" containerName="route-controller-manager"
Dec 08 17:44:18 crc kubenswrapper[5116]: I1208 17:44:18.390077 5116 memory_manager.go:356] "RemoveStaleState removing state" podUID="dc8a8f38-928e-445a-b2d0-56c91cff7483" containerName="collect-profiles"
Dec 08 17:44:18 crc kubenswrapper[5116]: I1208 17:44:18.390087 5116 memory_manager.go:356] "RemoveStaleState removing state" podUID="c8a03c0f-ca27-46b5-91b5-851b4e8526bb" containerName="pruner"
Dec 08 17:44:18 crc kubenswrapper[5116]: I1208 17:44:18.390106 5116 memory_manager.go:356] "RemoveStaleState removing state" podUID="10b58457-3a8a-4659-a3ee-cdc62c7194ca" containerName="pruner"
Dec 08 17:44:18 crc kubenswrapper[5116]: I1208 17:44:18.404740 5116 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-58fcd699cc-88894"
Dec 08 17:44:18 crc kubenswrapper[5116]: I1208 17:44:18.411235 5116 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-58fcd699cc-88894"]
Dec 08 17:44:18 crc kubenswrapper[5116]: I1208 17:44:18.449561 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-kdrb8" event={"ID":"56b4bc0e-8c13-439e-9293-f70b35418ce0","Type":"ContainerStarted","Data":"2431b7869279ed6c02f6a1c434223af8e36b4ad03ad8d737fee81699a6f6b45c"}
Dec 08 17:44:18 crc kubenswrapper[5116]: I1208 17:44:18.532613 5116 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/revision-pruner-6-crc"
Dec 08 17:44:18 crc kubenswrapper[5116]: I1208 17:44:18.532696 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/revision-pruner-6-crc" event={"ID":"10b58457-3a8a-4659-a3ee-cdc62c7194ca","Type":"ContainerDied","Data":"3574711a6af297458c09e68356787c8ce86039c482a9339dae35a635a4da4ab8"}
Dec 08 17:44:18 crc kubenswrapper[5116]: I1208 17:44:18.532746 5116 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3574711a6af297458c09e68356787c8ce86039c482a9339dae35a635a4da4ab8"
Dec 08 17:44:18 crc kubenswrapper[5116]: I1208 17:44:18.535786 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66587d64c8-kt94l" event={"ID":"83ea28a4-865d-4cee-aaa2-7adcccfba4a2","Type":"ContainerStarted","Data":"73d3b452b364e7f22c40a473ee8cad121c9a0ed3d536051a00c04368cd6e6b80"}
Dec 08 17:44:18 crc kubenswrapper[5116]: I1208 17:44:18.541336 5116 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="hostpath-provisioner/csi-hostpathplugin-kdrb8" podStartSLOduration=30.541311466 podStartE2EDuration="30.541311466s" podCreationTimestamp="2025-12-08 17:43:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-08 17:44:18.540934075 +0000 UTC m=+128.338057329" watchObservedRunningTime="2025-12-08 17:44:18.541311466 +0000 UTC m=+128.338434700"
Dec 08 17:44:18 crc kubenswrapper[5116]: I1208 17:44:18.546983 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bhpkw\" (UniqueName: \"kubernetes.io/projected/a4183c4d-f709-4d5b-a9a4-180284f37cc8-kube-api-access-bhpkw\") pod \"a4183c4d-f709-4d5b-a9a4-180284f37cc8\" (UID: \"a4183c4d-f709-4d5b-a9a4-180284f37cc8\") "
Dec 08 17:44:18 crc kubenswrapper[5116]: I1208 17:44:18.547051 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a4183c4d-f709-4d5b-a9a4-180284f37cc8-client-ca\") pod \"a4183c4d-f709-4d5b-a9a4-180284f37cc8\" (UID: \"a4183c4d-f709-4d5b-a9a4-180284f37cc8\") "
Dec 08 17:44:18 crc kubenswrapper[5116]: I1208 17:44:18.547115 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/a4183c4d-f709-4d5b-a9a4-180284f37cc8-tmp\") pod \"a4183c4d-f709-4d5b-a9a4-180284f37cc8\" (UID: \"a4183c4d-f709-4d5b-a9a4-180284f37cc8\") "
Dec 08 17:44:18 crc kubenswrapper[5116]: I1208 17:44:18.547316 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a4183c4d-f709-4d5b-a9a4-180284f37cc8-config\") pod \"a4183c4d-f709-4d5b-a9a4-180284f37cc8\" (UID: \"a4183c4d-f709-4d5b-a9a4-180284f37cc8\") "
Dec 08 17:44:18 crc kubenswrapper[5116]: I1208 17:44:18.547387 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a4183c4d-f709-4d5b-a9a4-180284f37cc8-serving-cert\") pod \"a4183c4d-f709-4d5b-a9a4-180284f37cc8\" (UID: \"a4183c4d-f709-4d5b-a9a4-180284f37cc8\") "
Dec 08 17:44:18 crc kubenswrapper[5116]: I1208 17:44:18.547719 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/20cb252f-d1e2-47a3-8655-c85d0ba4378e-config\") pod \"route-controller-manager-58fcd699cc-88894\" (UID: \"20cb252f-d1e2-47a3-8655-c85d0ba4378e\") " pod="openshift-route-controller-manager/route-controller-manager-58fcd699cc-88894"
Dec 08 17:44:18 crc kubenswrapper[5116]: I1208 17:44:18.548031 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/20cb252f-d1e2-47a3-8655-c85d0ba4378e-serving-cert\") pod \"route-controller-manager-58fcd699cc-88894\" (UID: \"20cb252f-d1e2-47a3-8655-c85d0ba4378e\") " pod="openshift-route-controller-manager/route-controller-manager-58fcd699cc-88894"
Dec 08 17:44:18 crc kubenswrapper[5116]: I1208 17:44:18.548110 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/20cb252f-d1e2-47a3-8655-c85d0ba4378e-tmp\") pod \"route-controller-manager-58fcd699cc-88894\" (UID: \"20cb252f-d1e2-47a3-8655-c85d0ba4378e\") " pod="openshift-route-controller-manager/route-controller-manager-58fcd699cc-88894"
Dec 08 17:44:18 crc kubenswrapper[5116]: I1208 17:44:18.548156 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/20cb252f-d1e2-47a3-8655-c85d0ba4378e-client-ca\") pod \"route-controller-manager-58fcd699cc-88894\" (UID: \"20cb252f-d1e2-47a3-8655-c85d0ba4378e\") " pod="openshift-route-controller-manager/route-controller-manager-58fcd699cc-88894"
Dec 08 17:44:18 crc kubenswrapper[5116]: I1208 17:44:18.548273 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qdlm9\" (UniqueName: \"kubernetes.io/projected/20cb252f-d1e2-47a3-8655-c85d0ba4378e-kube-api-access-qdlm9\") pod \"route-controller-manager-58fcd699cc-88894\" (UID: \"20cb252f-d1e2-47a3-8655-c85d0ba4378e\") " pod="openshift-route-controller-manager/route-controller-manager-58fcd699cc-88894"
Dec 08 17:44:18 crc kubenswrapper[5116]: I1208 17:44:18.548581 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a4183c4d-f709-4d5b-a9a4-180284f37cc8-tmp" (OuterVolumeSpecName: "tmp") pod "a4183c4d-f709-4d5b-a9a4-180284f37cc8" (UID: "a4183c4d-f709-4d5b-a9a4-180284f37cc8"). InnerVolumeSpecName "tmp". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Dec 08 17:44:18 crc kubenswrapper[5116]: I1208 17:44:18.548794 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a4183c4d-f709-4d5b-a9a4-180284f37cc8-client-ca" (OuterVolumeSpecName: "client-ca") pod "a4183c4d-f709-4d5b-a9a4-180284f37cc8" (UID: "a4183c4d-f709-4d5b-a9a4-180284f37cc8"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Dec 08 17:44:18 crc kubenswrapper[5116]: I1208 17:44:18.549574 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a4183c4d-f709-4d5b-a9a4-180284f37cc8-config" (OuterVolumeSpecName: "config") pod "a4183c4d-f709-4d5b-a9a4-180284f37cc8" (UID: "a4183c4d-f709-4d5b-a9a4-180284f37cc8"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Dec 08 17:44:18 crc kubenswrapper[5116]: I1208 17:44:18.550684 5116 reconciler_common.go:299] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a4183c4d-f709-4d5b-a9a4-180284f37cc8-client-ca\") on node \"crc\" DevicePath \"\""
Dec 08 17:44:18 crc kubenswrapper[5116]: I1208 17:44:18.550766 5116 reconciler_common.go:299] "Volume detached for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/a4183c4d-f709-4d5b-a9a4-180284f37cc8-tmp\") on node \"crc\" DevicePath \"\""
Dec 08 17:44:18 crc kubenswrapper[5116]: I1208 17:44:18.555661 5116 generic.go:358] "Generic (PLEG): container finished" podID="4fd57cd5-0d45-4b70-87a2-0c73d9c9b61e" containerID="c65524662fef7c7462a7d3c1a59f7c5258906d845739d0cf58822c812f3cdd99" exitCode=0
Dec 08 17:44:18 crc kubenswrapper[5116]: I1208 17:44:18.555766 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-65b6cccf98-99grc" event={"ID":"4fd57cd5-0d45-4b70-87a2-0c73d9c9b61e","Type":"ContainerDied","Data":"c65524662fef7c7462a7d3c1a59f7c5258906d845739d0cf58822c812f3cdd99"}
Dec 08 17:44:18 crc kubenswrapper[5116]: I1208 17:44:18.556157 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a4183c4d-f709-4d5b-a9a4-180284f37cc8-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "a4183c4d-f709-4d5b-a9a4-180284f37cc8" (UID: "a4183c4d-f709-4d5b-a9a4-180284f37cc8"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Dec 08 17:44:18 crc kubenswrapper[5116]: I1208 17:44:18.560276 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a4183c4d-f709-4d5b-a9a4-180284f37cc8-kube-api-access-bhpkw" (OuterVolumeSpecName: "kube-api-access-bhpkw") pod "a4183c4d-f709-4d5b-a9a4-180284f37cc8" (UID: "a4183c4d-f709-4d5b-a9a4-180284f37cc8"). InnerVolumeSpecName "kube-api-access-bhpkw". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Dec 08 17:44:18 crc kubenswrapper[5116]: I1208 17:44:18.570770 5116 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-65b6cccf98-99grc"
Dec 08 17:44:18 crc kubenswrapper[5116]: I1208 17:44:18.575149 5116 generic.go:358] "Generic (PLEG): container finished" podID="a4183c4d-f709-4d5b-a9a4-180284f37cc8" containerID="6049bd486993d0a63a6917836bcecf25f0bb36e43981d86a1dc0d3f5c88c834e" exitCode=0
Dec 08 17:44:18 crc kubenswrapper[5116]: I1208 17:44:18.575364 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-mv9qd" event={"ID":"a4183c4d-f709-4d5b-a9a4-180284f37cc8","Type":"ContainerDied","Data":"6049bd486993d0a63a6917836bcecf25f0bb36e43981d86a1dc0d3f5c88c834e"}
Dec 08 17:44:18 crc kubenswrapper[5116]: I1208 17:44:18.575461 5116 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-mv9qd"
Dec 08 17:44:18 crc kubenswrapper[5116]: I1208 17:44:18.575555 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-mv9qd" event={"ID":"a4183c4d-f709-4d5b-a9a4-180284f37cc8","Type":"ContainerDied","Data":"fdff9e942f11df2937a23ca0423cb47a0ca2bba7fb62d681e6193f708bd3fb64"}
Dec 08 17:44:18 crc kubenswrapper[5116]: I1208 17:44:18.575600 5116 scope.go:117] "RemoveContainer" containerID="6049bd486993d0a63a6917836bcecf25f0bb36e43981d86a1dc0d3f5c88c834e"
Dec 08 17:44:18 crc kubenswrapper[5116]: I1208 17:44:18.617391 5116 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-78fb99b7f7-d4qxm"]
Dec 08 17:44:18 crc kubenswrapper[5116]: I1208 17:44:18.618598 5116 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="4fd57cd5-0d45-4b70-87a2-0c73d9c9b61e" containerName="controller-manager"
Dec 08 17:44:18 crc kubenswrapper[5116]: I1208 17:44:18.619062 5116 state_mem.go:107] "Deleted CPUSet assignment" podUID="4fd57cd5-0d45-4b70-87a2-0c73d9c9b61e" containerName="controller-manager"
Dec 08 17:44:18 crc kubenswrapper[5116]: I1208 17:44:18.619402 5116 memory_manager.go:356] "RemoveStaleState removing state" podUID="4fd57cd5-0d45-4b70-87a2-0c73d9c9b61e" containerName="controller-manager"
Dec 08 17:44:18 crc kubenswrapper[5116]: I1208 17:44:18.630920 5116 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-776cdc94d6-mv9qd"]
Dec 08 17:44:18 crc kubenswrapper[5116]: I1208 17:44:18.631140 5116 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-78fb99b7f7-d4qxm"
Dec 08 17:44:18 crc kubenswrapper[5116]: I1208 17:44:18.638173 5116 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-776cdc94d6-mv9qd"]
Dec 08 17:44:18 crc kubenswrapper[5116]: I1208 17:44:18.644374 5116 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-78fb99b7f7-d4qxm"]
Dec 08 17:44:18 crc kubenswrapper[5116]: I1208 17:44:18.657128 5116 scope.go:117] "RemoveContainer" containerID="6049bd486993d0a63a6917836bcecf25f0bb36e43981d86a1dc0d3f5c88c834e"
Dec 08 17:44:18 crc kubenswrapper[5116]: I1208 17:44:18.658157 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/4fd57cd5-0d45-4b70-87a2-0c73d9c9b61e-client-ca\") pod \"4fd57cd5-0d45-4b70-87a2-0c73d9c9b61e\" (UID: \"4fd57cd5-0d45-4b70-87a2-0c73d9c9b61e\") "
Dec 08 17:44:18 crc kubenswrapper[5116]: I1208 17:44:18.658296 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/4fd57cd5-0d45-4b70-87a2-0c73d9c9b61e-proxy-ca-bundles\") pod \"4fd57cd5-0d45-4b70-87a2-0c73d9c9b61e\" (UID: \"4fd57cd5-0d45-4b70-87a2-0c73d9c9b61e\") "
Dec 08 17:44:18 crc kubenswrapper[5116]: I1208 17:44:18.658342 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4fd57cd5-0d45-4b70-87a2-0c73d9c9b61e-config\") pod \"4fd57cd5-0d45-4b70-87a2-0c73d9c9b61e\" (UID: \"4fd57cd5-0d45-4b70-87a2-0c73d9c9b61e\") "
Dec 08 17:44:18 crc kubenswrapper[5116]: I1208 17:44:18.658386 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/4fd57cd5-0d45-4b70-87a2-0c73d9c9b61e-tmp\") pod \"4fd57cd5-0d45-4b70-87a2-0c73d9c9b61e\" (UID: \"4fd57cd5-0d45-4b70-87a2-0c73d9c9b61e\") "
Dec 08 17:44:18 crc kubenswrapper[5116]: I1208 17:44:18.658437 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f5zlg\" (UniqueName: \"kubernetes.io/projected/4fd57cd5-0d45-4b70-87a2-0c73d9c9b61e-kube-api-access-f5zlg\") pod \"4fd57cd5-0d45-4b70-87a2-0c73d9c9b61e\" (UID: \"4fd57cd5-0d45-4b70-87a2-0c73d9c9b61e\") "
Dec 08 17:44:18 crc kubenswrapper[5116]: I1208 17:44:18.658499 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4fd57cd5-0d45-4b70-87a2-0c73d9c9b61e-serving-cert\") pod \"4fd57cd5-0d45-4b70-87a2-0c73d9c9b61e\" (UID: \"4fd57cd5-0d45-4b70-87a2-0c73d9c9b61e\") "
Dec 08 17:44:18 crc kubenswrapper[5116]: I1208 17:44:18.659062 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/81a38bd9-d4e6-4f81-802e-9be60cfff94e-serving-cert\") pod \"controller-manager-78fb99b7f7-d4qxm\" (UID: \"81a38bd9-d4e6-4f81-802e-9be60cfff94e\") " pod="openshift-controller-manager/controller-manager-78fb99b7f7-d4qxm"
Dec 08 17:44:18 crc kubenswrapper[5116]: I1208 17:44:18.659120 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/20cb252f-d1e2-47a3-8655-c85d0ba4378e-config\") pod \"route-controller-manager-58fcd699cc-88894\" (UID: \"20cb252f-d1e2-47a3-8655-c85d0ba4378e\") " pod="openshift-route-controller-manager/route-controller-manager-58fcd699cc-88894"
Dec 08 17:44:18 crc kubenswrapper[5116]: I1208 17:44:18.659168 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gx5l5\" (UniqueName: \"kubernetes.io/projected/81a38bd9-d4e6-4f81-802e-9be60cfff94e-kube-api-access-gx5l5\") pod \"controller-manager-78fb99b7f7-d4qxm\" (UID: \"81a38bd9-d4e6-4f81-802e-9be60cfff94e\") " pod="openshift-controller-manager/controller-manager-78fb99b7f7-d4qxm"
Dec 08 17:44:18 crc kubenswrapper[5116]: I1208 17:44:18.659277 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/81a38bd9-d4e6-4f81-802e-9be60cfff94e-config\") pod \"controller-manager-78fb99b7f7-d4qxm\" (UID: \"81a38bd9-d4e6-4f81-802e-9be60cfff94e\") " pod="openshift-controller-manager/controller-manager-78fb99b7f7-d4qxm"
Dec 08 17:44:18 crc kubenswrapper[5116]: I1208 17:44:18.659319 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/20cb252f-d1e2-47a3-8655-c85d0ba4378e-serving-cert\") pod \"route-controller-manager-58fcd699cc-88894\" (UID: \"20cb252f-d1e2-47a3-8655-c85d0ba4378e\") " pod="openshift-route-controller-manager/route-controller-manager-58fcd699cc-88894"
Dec 08 17:44:18 crc kubenswrapper[5116]: I1208 17:44:18.659349 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/20cb252f-d1e2-47a3-8655-c85d0ba4378e-tmp\") pod \"route-controller-manager-58fcd699cc-88894\" (UID: \"20cb252f-d1e2-47a3-8655-c85d0ba4378e\") " pod="openshift-route-controller-manager/route-controller-manager-58fcd699cc-88894"
Dec 08 17:44:18 crc kubenswrapper[5116]: I1208 17:44:18.659394 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/20cb252f-d1e2-47a3-8655-c85d0ba4378e-client-ca\") pod \"route-controller-manager-58fcd699cc-88894\" (UID: \"20cb252f-d1e2-47a3-8655-c85d0ba4378e\") " pod="openshift-route-controller-manager/route-controller-manager-58fcd699cc-88894"
Dec 08 17:44:18 crc kubenswrapper[5116]: I1208 17:44:18.659450 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/81a38bd9-d4e6-4f81-802e-9be60cfff94e-tmp\") pod \"controller-manager-78fb99b7f7-d4qxm\" (UID: \"81a38bd9-d4e6-4f81-802e-9be60cfff94e\") " pod="openshift-controller-manager/controller-manager-78fb99b7f7-d4qxm"
Dec 08 17:44:18 crc kubenswrapper[5116]: I1208 17:44:18.659492 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-qdlm9\" (UniqueName: \"kubernetes.io/projected/20cb252f-d1e2-47a3-8655-c85d0ba4378e-kube-api-access-qdlm9\") pod \"route-controller-manager-58fcd699cc-88894\" (UID: \"20cb252f-d1e2-47a3-8655-c85d0ba4378e\") " pod="openshift-route-controller-manager/route-controller-manager-58fcd699cc-88894"
Dec 08 17:44:18 crc kubenswrapper[5116]: I1208 17:44:18.659525 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/81a38bd9-d4e6-4f81-802e-9be60cfff94e-client-ca\") pod \"controller-manager-78fb99b7f7-d4qxm\" (UID: \"81a38bd9-d4e6-4f81-802e-9be60cfff94e\") " pod="openshift-controller-manager/controller-manager-78fb99b7f7-d4qxm"
Dec 08 17:44:18 crc kubenswrapper[5116]: I1208 17:44:18.659561 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/81a38bd9-d4e6-4f81-802e-9be60cfff94e-proxy-ca-bundles\") pod \"controller-manager-78fb99b7f7-d4qxm\" (UID: \"81a38bd9-d4e6-4f81-802e-9be60cfff94e\") " pod="openshift-controller-manager/controller-manager-78fb99b7f7-d4qxm"
Dec 08 17:44:18 crc kubenswrapper[5116]: I1208 17:44:18.659709 5116 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-bhpkw\" (UniqueName: \"kubernetes.io/projected/a4183c4d-f709-4d5b-a9a4-180284f37cc8-kube-api-access-bhpkw\") on node \"crc\" DevicePath \"\""
Dec 08 17:44:18 crc kubenswrapper[5116]: I1208 17:44:18.659736 5116 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a4183c4d-f709-4d5b-a9a4-180284f37cc8-config\") on node \"crc\" DevicePath \"\""
Dec 08 17:44:18 crc kubenswrapper[5116]: I1208 17:44:18.659753 5116 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a4183c4d-f709-4d5b-a9a4-180284f37cc8-serving-cert\") on node \"crc\" DevicePath \"\""
Dec 08 17:44:18 crc kubenswrapper[5116]: I1208 17:44:18.659822 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4fd57cd5-0d45-4b70-87a2-0c73d9c9b61e-config" (OuterVolumeSpecName: "config") pod "4fd57cd5-0d45-4b70-87a2-0c73d9c9b61e" (UID: "4fd57cd5-0d45-4b70-87a2-0c73d9c9b61e"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Dec 08 17:44:18 crc kubenswrapper[5116]: I1208 17:44:18.660319 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4fd57cd5-0d45-4b70-87a2-0c73d9c9b61e-client-ca" (OuterVolumeSpecName: "client-ca") pod "4fd57cd5-0d45-4b70-87a2-0c73d9c9b61e" (UID: "4fd57cd5-0d45-4b70-87a2-0c73d9c9b61e"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Dec 08 17:44:18 crc kubenswrapper[5116]: I1208 17:44:18.660725 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4fd57cd5-0d45-4b70-87a2-0c73d9c9b61e-tmp" (OuterVolumeSpecName: "tmp") pod "4fd57cd5-0d45-4b70-87a2-0c73d9c9b61e" (UID: "4fd57cd5-0d45-4b70-87a2-0c73d9c9b61e"). InnerVolumeSpecName "tmp". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Dec 08 17:44:18 crc kubenswrapper[5116]: I1208 17:44:18.660806 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4fd57cd5-0d45-4b70-87a2-0c73d9c9b61e-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "4fd57cd5-0d45-4b70-87a2-0c73d9c9b61e" (UID: "4fd57cd5-0d45-4b70-87a2-0c73d9c9b61e"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Dec 08 17:44:18 crc kubenswrapper[5116]: I1208 17:44:18.662069 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/20cb252f-d1e2-47a3-8655-c85d0ba4378e-tmp\") pod \"route-controller-manager-58fcd699cc-88894\" (UID: \"20cb252f-d1e2-47a3-8655-c85d0ba4378e\") " pod="openshift-route-controller-manager/route-controller-manager-58fcd699cc-88894"
Dec 08 17:44:18 crc kubenswrapper[5116]: I1208 17:44:18.663306 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/20cb252f-d1e2-47a3-8655-c85d0ba4378e-client-ca\") pod \"route-controller-manager-58fcd699cc-88894\" (UID: \"20cb252f-d1e2-47a3-8655-c85d0ba4378e\") " pod="openshift-route-controller-manager/route-controller-manager-58fcd699cc-88894"
Dec 08 17:44:18 crc kubenswrapper[5116]: I1208 17:44:18.664576 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4fd57cd5-0d45-4b70-87a2-0c73d9c9b61e-kube-api-access-f5zlg" (OuterVolumeSpecName: "kube-api-access-f5zlg") pod "4fd57cd5-0d45-4b70-87a2-0c73d9c9b61e" (UID: "4fd57cd5-0d45-4b70-87a2-0c73d9c9b61e"). InnerVolumeSpecName "kube-api-access-f5zlg". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Dec 08 17:44:18 crc kubenswrapper[5116]: I1208 17:44:18.675143 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/20cb252f-d1e2-47a3-8655-c85d0ba4378e-serving-cert\") pod \"route-controller-manager-58fcd699cc-88894\" (UID: \"20cb252f-d1e2-47a3-8655-c85d0ba4378e\") " pod="openshift-route-controller-manager/route-controller-manager-58fcd699cc-88894"
Dec 08 17:44:18 crc kubenswrapper[5116]: I1208 17:44:18.684072 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/20cb252f-d1e2-47a3-8655-c85d0ba4378e-config\") pod \"route-controller-manager-58fcd699cc-88894\" (UID: \"20cb252f-d1e2-47a3-8655-c85d0ba4378e\") " pod="openshift-route-controller-manager/route-controller-manager-58fcd699cc-88894"
Dec 08 17:44:18 crc kubenswrapper[5116]: I1208 17:44:18.690539 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4fd57cd5-0d45-4b70-87a2-0c73d9c9b61e-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "4fd57cd5-0d45-4b70-87a2-0c73d9c9b61e" (UID: "4fd57cd5-0d45-4b70-87a2-0c73d9c9b61e"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Dec 08 17:44:18 crc kubenswrapper[5116]: I1208 17:44:18.702715 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-qdlm9\" (UniqueName: \"kubernetes.io/projected/20cb252f-d1e2-47a3-8655-c85d0ba4378e-kube-api-access-qdlm9\") pod \"route-controller-manager-58fcd699cc-88894\" (UID: \"20cb252f-d1e2-47a3-8655-c85d0ba4378e\") " pod="openshift-route-controller-manager/route-controller-manager-58fcd699cc-88894"
Dec 08 17:44:18 crc kubenswrapper[5116]: E1208 17:44:18.703779 5116 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6049bd486993d0a63a6917836bcecf25f0bb36e43981d86a1dc0d3f5c88c834e\": container with ID starting with 6049bd486993d0a63a6917836bcecf25f0bb36e43981d86a1dc0d3f5c88c834e not found: ID does not exist" containerID="6049bd486993d0a63a6917836bcecf25f0bb36e43981d86a1dc0d3f5c88c834e"
Dec 08 17:44:18 crc kubenswrapper[5116]: I1208 17:44:18.703872 5116 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6049bd486993d0a63a6917836bcecf25f0bb36e43981d86a1dc0d3f5c88c834e"} err="failed to get container status \"6049bd486993d0a63a6917836bcecf25f0bb36e43981d86a1dc0d3f5c88c834e\": rpc error: code = NotFound desc = could not find container \"6049bd486993d0a63a6917836bcecf25f0bb36e43981d86a1dc0d3f5c88c834e\": container with ID starting with 6049bd486993d0a63a6917836bcecf25f0bb36e43981d86a1dc0d3f5c88c834e not found: ID does not exist"
Dec 08 17:44:18 crc kubenswrapper[5116]: I1208 17:44:18.718706 5116 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a4183c4d-f709-4d5b-a9a4-180284f37cc8" path="/var/lib/kubelet/pods/a4183c4d-f709-4d5b-a9a4-180284f37cc8/volumes"
Dec 08 17:44:18 crc kubenswrapper[5116]: I1208 17:44:18.732968 5116 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-58fcd699cc-88894"
Dec 08 17:44:18 crc kubenswrapper[5116]: I1208 17:44:18.761977 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/81a38bd9-d4e6-4f81-802e-9be60cfff94e-config\") pod \"controller-manager-78fb99b7f7-d4qxm\" (UID: \"81a38bd9-d4e6-4f81-802e-9be60cfff94e\") " pod="openshift-controller-manager/controller-manager-78fb99b7f7-d4qxm"
Dec 08 17:44:18 crc kubenswrapper[5116]: I1208 17:44:18.762124 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/81a38bd9-d4e6-4f81-802e-9be60cfff94e-tmp\") pod \"controller-manager-78fb99b7f7-d4qxm\" (UID: \"81a38bd9-d4e6-4f81-802e-9be60cfff94e\") " pod="openshift-controller-manager/controller-manager-78fb99b7f7-d4qxm"
Dec 08 17:44:18 crc kubenswrapper[5116]: I1208 17:44:18.762170 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/81a38bd9-d4e6-4f81-802e-9be60cfff94e-client-ca\") pod \"controller-manager-78fb99b7f7-d4qxm\" (UID: \"81a38bd9-d4e6-4f81-802e-9be60cfff94e\") " pod="openshift-controller-manager/controller-manager-78fb99b7f7-d4qxm"
Dec 08 17:44:18 crc kubenswrapper[5116]: I1208 17:44:18.762199 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/81a38bd9-d4e6-4f81-802e-9be60cfff94e-proxy-ca-bundles\") pod \"controller-manager-78fb99b7f7-d4qxm\" (UID: \"81a38bd9-d4e6-4f81-802e-9be60cfff94e\") " pod="openshift-controller-manager/controller-manager-78fb99b7f7-d4qxm"
Dec 08 17:44:18 crc kubenswrapper[5116]: I1208 17:44:18.762279 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/81a38bd9-d4e6-4f81-802e-9be60cfff94e-serving-cert\") pod \"controller-manager-78fb99b7f7-d4qxm\" (UID: \"81a38bd9-d4e6-4f81-802e-9be60cfff94e\") " pod="openshift-controller-manager/controller-manager-78fb99b7f7-d4qxm"
Dec 08 17:44:18 crc kubenswrapper[5116]: I1208 17:44:18.762316 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-gx5l5\" (UniqueName: \"kubernetes.io/projected/81a38bd9-d4e6-4f81-802e-9be60cfff94e-kube-api-access-gx5l5\") pod \"controller-manager-78fb99b7f7-d4qxm\" (UID: \"81a38bd9-d4e6-4f81-802e-9be60cfff94e\") " pod="openshift-controller-manager/controller-manager-78fb99b7f7-d4qxm"
Dec 08 17:44:18 crc kubenswrapper[5116]: I1208 17:44:18.762388 5116 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4fd57cd5-0d45-4b70-87a2-0c73d9c9b61e-serving-cert\") on node \"crc\" DevicePath \"\""
Dec 08 17:44:18 crc kubenswrapper[5116]: I1208 17:44:18.762403 5116 reconciler_common.go:299] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/4fd57cd5-0d45-4b70-87a2-0c73d9c9b61e-client-ca\") on node \"crc\" DevicePath \"\""
Dec 08 17:44:18 crc kubenswrapper[5116]: I1208 17:44:18.762415 5116 reconciler_common.go:299] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/4fd57cd5-0d45-4b70-87a2-0c73d9c9b61e-proxy-ca-bundles\") on node \"crc\" DevicePath \"\""
Dec 08 17:44:18 crc kubenswrapper[5116]: I1208 17:44:18.762428 5116 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4fd57cd5-0d45-4b70-87a2-0c73d9c9b61e-config\") on node \"crc\" DevicePath \"\""
Dec 08 17:44:18 crc kubenswrapper[5116]: I1208 17:44:18.762437 5116 reconciler_common.go:299] "Volume detached for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/4fd57cd5-0d45-4b70-87a2-0c73d9c9b61e-tmp\") on node \"crc\" DevicePath \"\""
Dec 08 17:44:18 crc kubenswrapper[5116]: I1208 17:44:18.762446 5116 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-f5zlg\" (UniqueName: \"kubernetes.io/projected/4fd57cd5-0d45-4b70-87a2-0c73d9c9b61e-kube-api-access-f5zlg\") on node \"crc\" DevicePath \"\""
Dec 08 17:44:18 crc kubenswrapper[5116]: I1208 17:44:18.764413 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/81a38bd9-d4e6-4f81-802e-9be60cfff94e-config\") pod \"controller-manager-78fb99b7f7-d4qxm\" (UID: \"81a38bd9-d4e6-4f81-802e-9be60cfff94e\") " pod="openshift-controller-manager/controller-manager-78fb99b7f7-d4qxm"
Dec 08 17:44:18 crc kubenswrapper[5116]: I1208 17:44:18.765854 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/81a38bd9-d4e6-4f81-802e-9be60cfff94e-client-ca\") pod \"controller-manager-78fb99b7f7-d4qxm\" (UID: \"81a38bd9-d4e6-4f81-802e-9be60cfff94e\") " pod="openshift-controller-manager/controller-manager-78fb99b7f7-d4qxm"
Dec 08 17:44:18 crc kubenswrapper[5116]: I1208 17:44:18.767267 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/81a38bd9-d4e6-4f81-802e-9be60cfff94e-proxy-ca-bundles\") pod \"controller-manager-78fb99b7f7-d4qxm\" (UID: \"81a38bd9-d4e6-4f81-802e-9be60cfff94e\") " pod="openshift-controller-manager/controller-manager-78fb99b7f7-d4qxm"
Dec 08 17:44:18 crc kubenswrapper[5116]: I1208 17:44:18.768016 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/81a38bd9-d4e6-4f81-802e-9be60cfff94e-tmp\") pod \"controller-manager-78fb99b7f7-d4qxm\" (UID: \"81a38bd9-d4e6-4f81-802e-9be60cfff94e\") " pod="openshift-controller-manager/controller-manager-78fb99b7f7-d4qxm"
Dec 08 17:44:18 crc kubenswrapper[5116]: I1208 17:44:18.771796 5116 operation_generator.go:615] "MountVolume.SetUp succeeded
for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/81a38bd9-d4e6-4f81-802e-9be60cfff94e-serving-cert\") pod \"controller-manager-78fb99b7f7-d4qxm\" (UID: \"81a38bd9-d4e6-4f81-802e-9be60cfff94e\") " pod="openshift-controller-manager/controller-manager-78fb99b7f7-d4qxm" Dec 08 17:44:18 crc kubenswrapper[5116]: I1208 17:44:18.892034 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-gx5l5\" (UniqueName: \"kubernetes.io/projected/81a38bd9-d4e6-4f81-802e-9be60cfff94e-kube-api-access-gx5l5\") pod \"controller-manager-78fb99b7f7-d4qxm\" (UID: \"81a38bd9-d4e6-4f81-802e-9be60cfff94e\") " pod="openshift-controller-manager/controller-manager-78fb99b7f7-d4qxm" Dec 08 17:44:19 crc kubenswrapper[5116]: I1208 17:44:19.053166 5116 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-78fb99b7f7-d4qxm" Dec 08 17:44:19 crc kubenswrapper[5116]: I1208 17:44:19.595122 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66587d64c8-kt94l" event={"ID":"83ea28a4-865d-4cee-aaa2-7adcccfba4a2","Type":"ContainerStarted","Data":"c2b3625d21c4386406d248c280fd04fea16bd3f16d82a3d1e8526bad45d5bb71"} Dec 08 17:44:19 crc kubenswrapper[5116]: I1208 17:44:19.596512 5116 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-image-registry/image-registry-66587d64c8-kt94l" Dec 08 17:44:19 crc kubenswrapper[5116]: I1208 17:44:19.601358 5116 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-65b6cccf98-99grc" Dec 08 17:44:19 crc kubenswrapper[5116]: I1208 17:44:19.601463 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-65b6cccf98-99grc" event={"ID":"4fd57cd5-0d45-4b70-87a2-0c73d9c9b61e","Type":"ContainerDied","Data":"461e3685cebf09ffd50ea640ce50fd172f8f5b1f055cdad856f359cb7f4bb7e2"} Dec 08 17:44:19 crc kubenswrapper[5116]: I1208 17:44:19.601535 5116 scope.go:117] "RemoveContainer" containerID="c65524662fef7c7462a7d3c1a59f7c5258906d845739d0cf58822c812f3cdd99" Dec 08 17:44:19 crc kubenswrapper[5116]: I1208 17:44:19.623393 5116 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-66587d64c8-kt94l" podStartSLOduration=108.623375083 podStartE2EDuration="1m48.623375083s" podCreationTimestamp="2025-12-08 17:42:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-08 17:44:19.61822707 +0000 UTC m=+129.415350304" watchObservedRunningTime="2025-12-08 17:44:19.623375083 +0000 UTC m=+129.420498317" Dec 08 17:44:19 crc kubenswrapper[5116]: I1208 17:44:19.633220 5116 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-58fcd699cc-88894"] Dec 08 17:44:19 crc kubenswrapper[5116]: I1208 17:44:19.641199 5116 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-65b6cccf98-99grc"] Dec 08 17:44:19 crc kubenswrapper[5116]: I1208 17:44:19.643815 5116 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-65b6cccf98-99grc"] Dec 08 17:44:20 crc kubenswrapper[5116]: I1208 17:44:20.079738 5116 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-78fb99b7f7-d4qxm"] Dec 08 17:44:20 crc 
kubenswrapper[5116]: I1208 17:44:20.512187 5116 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-64d44f6ddf-l4b2c" Dec 08 17:44:20 crc kubenswrapper[5116]: I1208 17:44:20.518626 5116 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-64d44f6ddf-l4b2c" Dec 08 17:44:20 crc kubenswrapper[5116]: I1208 17:44:20.639743 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-58fcd699cc-88894" event={"ID":"20cb252f-d1e2-47a3-8655-c85d0ba4378e","Type":"ContainerStarted","Data":"67c8f688b9230269173a342e51db953c47727d74ba8387e1ff10796b6174577b"} Dec 08 17:44:20 crc kubenswrapper[5116]: I1208 17:44:20.697125 5116 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4fd57cd5-0d45-4b70-87a2-0c73d9c9b61e" path="/var/lib/kubelet/pods/4fd57cd5-0d45-4b70-87a2-0c73d9c9b61e/volumes" Dec 08 17:44:20 crc kubenswrapper[5116]: I1208 17:44:20.980790 5116 patch_prober.go:28] interesting pod/downloads-747b44746d-4msk8 container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.15:8080/\": dial tcp 10.217.0.15:8080: connect: connection refused" start-of-body= Dec 08 17:44:20 crc kubenswrapper[5116]: I1208 17:44:20.980898 5116 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-747b44746d-4msk8" podUID="e71c8014-5266-4483-8037-e8d9e7995c1b" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.15:8080/\": dial tcp 10.217.0.15:8080: connect: connection refused" Dec 08 17:44:20 crc kubenswrapper[5116]: I1208 17:44:20.980971 5116 kubelet.go:2658] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-console/downloads-747b44746d-4msk8" Dec 08 17:44:20 crc kubenswrapper[5116]: I1208 17:44:20.981769 5116 kuberuntime_manager.go:1107] "Message for Container of pod" 
containerName="download-server" containerStatusID={"Type":"cri-o","ID":"41d5d16f8c49b62e4ed97c27dad7f98ffd3b757b2e08de3d43ba27ba87f20c52"} pod="openshift-console/downloads-747b44746d-4msk8" containerMessage="Container download-server failed liveness probe, will be restarted" Dec 08 17:44:20 crc kubenswrapper[5116]: I1208 17:44:20.981818 5116 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-console/downloads-747b44746d-4msk8" podUID="e71c8014-5266-4483-8037-e8d9e7995c1b" containerName="download-server" containerID="cri-o://41d5d16f8c49b62e4ed97c27dad7f98ffd3b757b2e08de3d43ba27ba87f20c52" gracePeriod=2 Dec 08 17:44:20 crc kubenswrapper[5116]: I1208 17:44:20.981999 5116 patch_prober.go:28] interesting pod/downloads-747b44746d-4msk8 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.15:8080/\": dial tcp 10.217.0.15:8080: connect: connection refused" start-of-body= Dec 08 17:44:20 crc kubenswrapper[5116]: I1208 17:44:20.982116 5116 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-747b44746d-4msk8" podUID="e71c8014-5266-4483-8037-e8d9e7995c1b" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.15:8080/\": dial tcp 10.217.0.15:8080: connect: connection refused" Dec 08 17:44:21 crc kubenswrapper[5116]: I1208 17:44:21.673397 5116 generic.go:358] "Generic (PLEG): container finished" podID="e71c8014-5266-4483-8037-e8d9e7995c1b" containerID="41d5d16f8c49b62e4ed97c27dad7f98ffd3b757b2e08de3d43ba27ba87f20c52" exitCode=0 Dec 08 17:44:21 crc kubenswrapper[5116]: I1208 17:44:21.673481 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-747b44746d-4msk8" event={"ID":"e71c8014-5266-4483-8037-e8d9e7995c1b","Type":"ContainerDied","Data":"41d5d16f8c49b62e4ed97c27dad7f98ffd3b757b2e08de3d43ba27ba87f20c52"} Dec 08 17:44:22 crc kubenswrapper[5116]: I1208 17:44:22.639688 5116 ???:1] "http: 
TLS handshake error from 192.168.126.11:60454: no serving certificate available for the kubelet" Dec 08 17:44:22 crc kubenswrapper[5116]: E1208 17:44:22.773813 5116 cadvisor_stats_provider.go:525] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd7968a24_caaf_4115_992d_3678c03e895a.slice/crio-8e54c54fae30e2e41368ec97f94608548b57555392e3ca341ed0d1620fa34a36.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd7968a24_caaf_4115_992d_3678c03e895a.slice/crio-conmon-8e54c54fae30e2e41368ec97f94608548b57555392e3ca341ed0d1620fa34a36.scope\": RecentStats: unable to find data in memory cache]" Dec 08 17:44:24 crc kubenswrapper[5116]: I1208 17:44:24.255747 5116 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 08 17:44:26 crc kubenswrapper[5116]: E1208 17:44:26.673712 5116 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="7478cafb951735df5624cf4c67a2f9fe7cecc5f7456b3ba6f18b26ee5f91e985" cmd=["/bin/bash","-c","test -f /ready/ready"] Dec 08 17:44:26 crc kubenswrapper[5116]: E1208 17:44:26.676622 5116 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="7478cafb951735df5624cf4c67a2f9fe7cecc5f7456b3ba6f18b26ee5f91e985" cmd=["/bin/bash","-c","test -f /ready/ready"] Dec 08 17:44:26 crc kubenswrapper[5116]: E1208 17:44:26.678389 5116 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" 
containerID="7478cafb951735df5624cf4c67a2f9fe7cecc5f7456b3ba6f18b26ee5f91e985" cmd=["/bin/bash","-c","test -f /ready/ready"] Dec 08 17:44:26 crc kubenswrapper[5116]: E1208 17:44:26.678536 5116 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openshift-multus/cni-sysctl-allowlist-ds-qsgps" podUID="d73c8661-d51c-4d6e-a981-e186a3fc1964" containerName="kube-multus-additional-cni-plugins" probeResult="unknown" Dec 08 17:44:30 crc kubenswrapper[5116]: I1208 17:44:30.982312 5116 patch_prober.go:28] interesting pod/downloads-747b44746d-4msk8 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.15:8080/\": dial tcp 10.217.0.15:8080: connect: connection refused" start-of-body= Dec 08 17:44:30 crc kubenswrapper[5116]: I1208 17:44:30.983291 5116 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-747b44746d-4msk8" podUID="e71c8014-5266-4483-8037-e8d9e7995c1b" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.15:8080/\": dial tcp 10.217.0.15:8080: connect: connection refused" Dec 08 17:44:31 crc kubenswrapper[5116]: I1208 17:44:31.955719 5116 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_cni-sysctl-allowlist-ds-qsgps_d73c8661-d51c-4d6e-a981-e186a3fc1964/kube-multus-additional-cni-plugins/0.log" Dec 08 17:44:31 crc kubenswrapper[5116]: I1208 17:44:31.955863 5116 generic.go:358] "Generic (PLEG): container finished" podID="d73c8661-d51c-4d6e-a981-e186a3fc1964" containerID="7478cafb951735df5624cf4c67a2f9fe7cecc5f7456b3ba6f18b26ee5f91e985" exitCode=137 Dec 08 17:44:31 crc kubenswrapper[5116]: I1208 17:44:31.956140 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/cni-sysctl-allowlist-ds-qsgps" 
event={"ID":"d73c8661-d51c-4d6e-a981-e186a3fc1964","Type":"ContainerDied","Data":"7478cafb951735df5624cf4c67a2f9fe7cecc5f7456b3ba6f18b26ee5f91e985"} Dec 08 17:44:32 crc kubenswrapper[5116]: E1208 17:44:32.929555 5116 cadvisor_stats_provider.go:525] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd7968a24_caaf_4115_992d_3678c03e895a.slice/crio-conmon-8e54c54fae30e2e41368ec97f94608548b57555392e3ca341ed0d1620fa34a36.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd7968a24_caaf_4115_992d_3678c03e895a.slice/crio-8e54c54fae30e2e41368ec97f94608548b57555392e3ca341ed0d1620fa34a36.scope\": RecentStats: unable to find data in memory cache]" Dec 08 17:44:32 crc kubenswrapper[5116]: I1208 17:44:32.973436 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-78fb99b7f7-d4qxm" event={"ID":"81a38bd9-d4e6-4f81-802e-9be60cfff94e","Type":"ContainerStarted","Data":"7eba3d04e24d538799b674b28062445ad3d76dc881e2c505dc6695806aaaa73b"} Dec 08 17:44:33 crc kubenswrapper[5116]: I1208 17:44:33.346525 5116 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-hzpcx" Dec 08 17:44:36 crc kubenswrapper[5116]: E1208 17:44:36.731775 5116 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 7478cafb951735df5624cf4c67a2f9fe7cecc5f7456b3ba6f18b26ee5f91e985 is running failed: container process not found" containerID="7478cafb951735df5624cf4c67a2f9fe7cecc5f7456b3ba6f18b26ee5f91e985" cmd=["/bin/bash","-c","test -f /ready/ready"] Dec 08 17:44:36 crc kubenswrapper[5116]: E1208 17:44:36.732670 5116 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is 
not created or running: checking if PID of 7478cafb951735df5624cf4c67a2f9fe7cecc5f7456b3ba6f18b26ee5f91e985 is running failed: container process not found" containerID="7478cafb951735df5624cf4c67a2f9fe7cecc5f7456b3ba6f18b26ee5f91e985" cmd=["/bin/bash","-c","test -f /ready/ready"] Dec 08 17:44:36 crc kubenswrapper[5116]: E1208 17:44:36.733233 5116 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 7478cafb951735df5624cf4c67a2f9fe7cecc5f7456b3ba6f18b26ee5f91e985 is running failed: container process not found" containerID="7478cafb951735df5624cf4c67a2f9fe7cecc5f7456b3ba6f18b26ee5f91e985" cmd=["/bin/bash","-c","test -f /ready/ready"] Dec 08 17:44:36 crc kubenswrapper[5116]: E1208 17:44:36.733327 5116 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 7478cafb951735df5624cf4c67a2f9fe7cecc5f7456b3ba6f18b26ee5f91e985 is running failed: container process not found" probeType="Readiness" pod="openshift-multus/cni-sysctl-allowlist-ds-qsgps" podUID="d73c8661-d51c-4d6e-a981-e186a3fc1964" containerName="kube-multus-additional-cni-plugins" probeResult="unknown" Dec 08 17:44:37 crc kubenswrapper[5116]: I1208 17:44:37.019589 5116 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-78fb99b7f7-d4qxm"] Dec 08 17:44:37 crc kubenswrapper[5116]: I1208 17:44:37.055134 5116 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-58fcd699cc-88894"] Dec 08 17:44:40 crc kubenswrapper[5116]: I1208 17:44:40.983106 5116 patch_prober.go:28] interesting pod/downloads-747b44746d-4msk8 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.15:8080/\": dial tcp 10.217.0.15:8080: connect: connection refused" start-of-body= Dec 08 17:44:40 crc kubenswrapper[5116]: 
I1208 17:44:40.983776 5116 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-747b44746d-4msk8" podUID="e71c8014-5266-4483-8037-e8d9e7995c1b" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.15:8080/\": dial tcp 10.217.0.15:8080: connect: connection refused" Dec 08 17:44:41 crc kubenswrapper[5116]: I1208 17:44:41.157128 5116 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_cni-sysctl-allowlist-ds-qsgps_d73c8661-d51c-4d6e-a981-e186a3fc1964/kube-multus-additional-cni-plugins/0.log" Dec 08 17:44:41 crc kubenswrapper[5116]: I1208 17:44:41.157707 5116 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-multus/cni-sysctl-allowlist-ds-qsgps" Dec 08 17:44:41 crc kubenswrapper[5116]: I1208 17:44:41.364954 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/d73c8661-d51c-4d6e-a981-e186a3fc1964-cni-sysctl-allowlist\") pod \"d73c8661-d51c-4d6e-a981-e186a3fc1964\" (UID: \"d73c8661-d51c-4d6e-a981-e186a3fc1964\") " Dec 08 17:44:41 crc kubenswrapper[5116]: I1208 17:44:41.365071 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ready\" (UniqueName: \"kubernetes.io/empty-dir/d73c8661-d51c-4d6e-a981-e186a3fc1964-ready\") pod \"d73c8661-d51c-4d6e-a981-e186a3fc1964\" (UID: \"d73c8661-d51c-4d6e-a981-e186a3fc1964\") " Dec 08 17:44:41 crc kubenswrapper[5116]: I1208 17:44:41.365099 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/d73c8661-d51c-4d6e-a981-e186a3fc1964-tuning-conf-dir\") pod \"d73c8661-d51c-4d6e-a981-e186a3fc1964\" (UID: \"d73c8661-d51c-4d6e-a981-e186a3fc1964\") " Dec 08 17:44:41 crc kubenswrapper[5116]: I1208 17:44:41.365151 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume 
\"kube-api-access-d8hkm\" (UniqueName: \"kubernetes.io/projected/d73c8661-d51c-4d6e-a981-e186a3fc1964-kube-api-access-d8hkm\") pod \"d73c8661-d51c-4d6e-a981-e186a3fc1964\" (UID: \"d73c8661-d51c-4d6e-a981-e186a3fc1964\") " Dec 08 17:44:41 crc kubenswrapper[5116]: I1208 17:44:41.365613 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d73c8661-d51c-4d6e-a981-e186a3fc1964-tuning-conf-dir" (OuterVolumeSpecName: "tuning-conf-dir") pod "d73c8661-d51c-4d6e-a981-e186a3fc1964" (UID: "d73c8661-d51c-4d6e-a981-e186a3fc1964"). InnerVolumeSpecName "tuning-conf-dir". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Dec 08 17:44:41 crc kubenswrapper[5116]: I1208 17:44:41.366719 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d73c8661-d51c-4d6e-a981-e186a3fc1964-cni-sysctl-allowlist" (OuterVolumeSpecName: "cni-sysctl-allowlist") pod "d73c8661-d51c-4d6e-a981-e186a3fc1964" (UID: "d73c8661-d51c-4d6e-a981-e186a3fc1964"). InnerVolumeSpecName "cni-sysctl-allowlist". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 08 17:44:41 crc kubenswrapper[5116]: I1208 17:44:41.368586 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d73c8661-d51c-4d6e-a981-e186a3fc1964-ready" (OuterVolumeSpecName: "ready") pod "d73c8661-d51c-4d6e-a981-e186a3fc1964" (UID: "d73c8661-d51c-4d6e-a981-e186a3fc1964"). InnerVolumeSpecName "ready". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 08 17:44:41 crc kubenswrapper[5116]: I1208 17:44:41.397261 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d73c8661-d51c-4d6e-a981-e186a3fc1964-kube-api-access-d8hkm" (OuterVolumeSpecName: "kube-api-access-d8hkm") pod "d73c8661-d51c-4d6e-a981-e186a3fc1964" (UID: "d73c8661-d51c-4d6e-a981-e186a3fc1964"). InnerVolumeSpecName "kube-api-access-d8hkm". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 08 17:44:41 crc kubenswrapper[5116]: I1208 17:44:41.510818 5116 reconciler_common.go:299] "Volume detached for volume \"ready\" (UniqueName: \"kubernetes.io/empty-dir/d73c8661-d51c-4d6e-a981-e186a3fc1964-ready\") on node \"crc\" DevicePath \"\"" Dec 08 17:44:41 crc kubenswrapper[5116]: I1208 17:44:41.510861 5116 reconciler_common.go:299] "Volume detached for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/d73c8661-d51c-4d6e-a981-e186a3fc1964-tuning-conf-dir\") on node \"crc\" DevicePath \"\"" Dec 08 17:44:41 crc kubenswrapper[5116]: I1208 17:44:41.510876 5116 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-d8hkm\" (UniqueName: \"kubernetes.io/projected/d73c8661-d51c-4d6e-a981-e186a3fc1964-kube-api-access-d8hkm\") on node \"crc\" DevicePath \"\"" Dec 08 17:44:41 crc kubenswrapper[5116]: I1208 17:44:41.510888 5116 reconciler_common.go:299] "Volume detached for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/d73c8661-d51c-4d6e-a981-e186a3fc1964-cni-sysctl-allowlist\") on node \"crc\" DevicePath \"\"" Dec 08 17:44:41 crc kubenswrapper[5116]: I1208 17:44:41.699969 5116 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-12-crc"] Dec 08 17:44:41 crc kubenswrapper[5116]: I1208 17:44:41.701216 5116 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="d73c8661-d51c-4d6e-a981-e186a3fc1964" containerName="kube-multus-additional-cni-plugins" Dec 08 17:44:41 crc kubenswrapper[5116]: I1208 17:44:41.701263 5116 state_mem.go:107] "Deleted CPUSet assignment" podUID="d73c8661-d51c-4d6e-a981-e186a3fc1964" containerName="kube-multus-additional-cni-plugins" Dec 08 17:44:41 crc kubenswrapper[5116]: I1208 17:44:41.701402 5116 memory_manager.go:356] "RemoveStaleState removing state" podUID="d73c8661-d51c-4d6e-a981-e186a3fc1964" containerName="kube-multus-additional-cni-plugins" Dec 08 17:44:41 crc 
kubenswrapper[5116]: I1208 17:44:41.707513 5116 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-12-crc" Dec 08 17:44:41 crc kubenswrapper[5116]: I1208 17:44:41.710454 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-apiserver\"/\"installer-sa-dockercfg-bqqnb\"" Dec 08 17:44:41 crc kubenswrapper[5116]: I1208 17:44:41.902216 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-apiserver\"/\"kube-root-ca.crt\"" Dec 08 17:44:41 crc kubenswrapper[5116]: I1208 17:44:41.904099 5116 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-66587d64c8-kt94l" Dec 08 17:44:41 crc kubenswrapper[5116]: I1208 17:44:41.922871 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/536aba6a-5306-4082-a15f-1cedcb79625b-kubelet-dir\") pod \"revision-pruner-12-crc\" (UID: \"536aba6a-5306-4082-a15f-1cedcb79625b\") " pod="openshift-kube-apiserver/revision-pruner-12-crc" Dec 08 17:44:41 crc kubenswrapper[5116]: I1208 17:44:41.922997 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/536aba6a-5306-4082-a15f-1cedcb79625b-kube-api-access\") pod \"revision-pruner-12-crc\" (UID: \"536aba6a-5306-4082-a15f-1cedcb79625b\") " pod="openshift-kube-apiserver/revision-pruner-12-crc" Dec 08 17:44:41 crc kubenswrapper[5116]: I1208 17:44:41.937415 5116 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-12-crc"] Dec 08 17:44:42 crc kubenswrapper[5116]: I1208 17:44:42.134432 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: 
\"kubernetes.io/projected/536aba6a-5306-4082-a15f-1cedcb79625b-kube-api-access\") pod \"revision-pruner-12-crc\" (UID: \"536aba6a-5306-4082-a15f-1cedcb79625b\") " pod="openshift-kube-apiserver/revision-pruner-12-crc" Dec 08 17:44:42 crc kubenswrapper[5116]: I1208 17:44:42.135139 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/536aba6a-5306-4082-a15f-1cedcb79625b-kubelet-dir\") pod \"revision-pruner-12-crc\" (UID: \"536aba6a-5306-4082-a15f-1cedcb79625b\") " pod="openshift-kube-apiserver/revision-pruner-12-crc" Dec 08 17:44:42 crc kubenswrapper[5116]: I1208 17:44:42.135370 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/536aba6a-5306-4082-a15f-1cedcb79625b-kubelet-dir\") pod \"revision-pruner-12-crc\" (UID: \"536aba6a-5306-4082-a15f-1cedcb79625b\") " pod="openshift-kube-apiserver/revision-pruner-12-crc" Dec 08 17:44:42 crc kubenswrapper[5116]: I1208 17:44:42.167353 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-58fcd699cc-88894" event={"ID":"20cb252f-d1e2-47a3-8655-c85d0ba4378e","Type":"ContainerStarted","Data":"5f5d27d585f3b24c468fb5a08140f0cc3ae2343255ce80ab73146929f20d61fa"} Dec 08 17:44:42 crc kubenswrapper[5116]: I1208 17:44:42.167407 5116 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-58fcd699cc-88894" podUID="20cb252f-d1e2-47a3-8655-c85d0ba4378e" containerName="route-controller-manager" containerID="cri-o://5f5d27d585f3b24c468fb5a08140f0cc3ae2343255ce80ab73146929f20d61fa" gracePeriod=30 Dec 08 17:44:42 crc kubenswrapper[5116]: I1208 17:44:42.168341 5116 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-route-controller-manager/route-controller-manager-58fcd699cc-88894" Dec 08 17:44:42 crc 
kubenswrapper[5116]: I1208 17:44:42.184806 5116 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_cni-sysctl-allowlist-ds-qsgps_d73c8661-d51c-4d6e-a981-e186a3fc1964/kube-multus-additional-cni-plugins/0.log" Dec 08 17:44:42 crc kubenswrapper[5116]: I1208 17:44:42.184970 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/cni-sysctl-allowlist-ds-qsgps" event={"ID":"d73c8661-d51c-4d6e-a981-e186a3fc1964","Type":"ContainerDied","Data":"253ca5314a7879fb108fb86e20db89f6820fff3ec17fb312830683327ef57eec"} Dec 08 17:44:42 crc kubenswrapper[5116]: I1208 17:44:42.185057 5116 scope.go:117] "RemoveContainer" containerID="7478cafb951735df5624cf4c67a2f9fe7cecc5f7456b3ba6f18b26ee5f91e985" Dec 08 17:44:42 crc kubenswrapper[5116]: I1208 17:44:42.185225 5116 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-multus/cni-sysctl-allowlist-ds-qsgps" Dec 08 17:44:42 crc kubenswrapper[5116]: I1208 17:44:42.197178 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/536aba6a-5306-4082-a15f-1cedcb79625b-kube-api-access\") pod \"revision-pruner-12-crc\" (UID: \"536aba6a-5306-4082-a15f-1cedcb79625b\") " pod="openshift-kube-apiserver/revision-pruner-12-crc" Dec 08 17:44:42 crc kubenswrapper[5116]: I1208 17:44:42.203860 5116 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-58fcd699cc-88894" podStartSLOduration=25.203831169 podStartE2EDuration="25.203831169s" podCreationTimestamp="2025-12-08 17:44:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-08 17:44:42.196764161 +0000 UTC m=+151.993887405" watchObservedRunningTime="2025-12-08 17:44:42.203831169 +0000 UTC m=+152.000954403" Dec 08 17:44:42 crc kubenswrapper[5116]: I1208 17:44:42.225418 5116 
patch_prober.go:28] interesting pod/route-controller-manager-58fcd699cc-88894 container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.54:8443/healthz\": read tcp 10.217.0.2:45808->10.217.0.54:8443: read: connection reset by peer" start-of-body= Dec 08 17:44:42 crc kubenswrapper[5116]: I1208 17:44:42.225526 5116 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-58fcd699cc-88894" podUID="20cb252f-d1e2-47a3-8655-c85d0ba4378e" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.54:8443/healthz\": read tcp 10.217.0.2:45808->10.217.0.54:8443: read: connection reset by peer" Dec 08 17:44:42 crc kubenswrapper[5116]: I1208 17:44:42.231811 5116 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-78fb99b7f7-d4qxm" podUID="81a38bd9-d4e6-4f81-802e-9be60cfff94e" containerName="controller-manager" containerID="cri-o://cca6018c2d852a7073eebb6ac7153f8e5ae717084342cdcabd2e78e9b08e81c5" gracePeriod=30 Dec 08 17:44:42 crc kubenswrapper[5116]: I1208 17:44:42.374462 5116 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-controller-manager/controller-manager-78fb99b7f7-d4qxm" Dec 08 17:44:42 crc kubenswrapper[5116]: I1208 17:44:42.374504 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-78fb99b7f7-d4qxm" event={"ID":"81a38bd9-d4e6-4f81-802e-9be60cfff94e","Type":"ContainerStarted","Data":"cca6018c2d852a7073eebb6ac7153f8e5ae717084342cdcabd2e78e9b08e81c5"} Dec 08 17:44:42 crc kubenswrapper[5116]: I1208 17:44:42.413970 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-747b44746d-4msk8" 
event={"ID":"e71c8014-5266-4483-8037-e8d9e7995c1b","Type":"ContainerStarted","Data":"8879ad31a0331953d4556bde0ad889da7d3248e3f7934b3b6e80b612c6e85e5d"} Dec 08 17:44:42 crc kubenswrapper[5116]: I1208 17:44:42.415117 5116 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-console/downloads-747b44746d-4msk8" Dec 08 17:44:42 crc kubenswrapper[5116]: I1208 17:44:42.415352 5116 patch_prober.go:28] interesting pod/downloads-747b44746d-4msk8 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.15:8080/\": dial tcp 10.217.0.15:8080: connect: connection refused" start-of-body= Dec 08 17:44:42 crc kubenswrapper[5116]: I1208 17:44:42.415481 5116 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-747b44746d-4msk8" podUID="e71c8014-5266-4483-8037-e8d9e7995c1b" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.15:8080/\": dial tcp 10.217.0.15:8080: connect: connection refused" Dec 08 17:44:42 crc kubenswrapper[5116]: I1208 17:44:42.440417 5116 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-78fb99b7f7-d4qxm" podStartSLOduration=25.440392338 podStartE2EDuration="25.440392338s" podCreationTimestamp="2025-12-08 17:44:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-08 17:44:42.410916321 +0000 UTC m=+152.208039565" watchObservedRunningTime="2025-12-08 17:44:42.440392338 +0000 UTC m=+152.237515572" Dec 08 17:44:42 crc kubenswrapper[5116]: I1208 17:44:42.485034 5116 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-12-crc" Dec 08 17:44:42 crc kubenswrapper[5116]: I1208 17:44:42.512348 5116 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-multus/cni-sysctl-allowlist-ds-qsgps"] Dec 08 17:44:42 crc kubenswrapper[5116]: I1208 17:44:42.519445 5116 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-multus/cni-sysctl-allowlist-ds-qsgps"] Dec 08 17:44:42 crc kubenswrapper[5116]: I1208 17:44:42.728268 5116 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d73c8661-d51c-4d6e-a981-e186a3fc1964" path="/var/lib/kubelet/pods/d73c8661-d51c-4d6e-a981-e186a3fc1964/volumes" Dec 08 17:44:43 crc kubenswrapper[5116]: I1208 17:44:43.192947 5116 ???:1] "http: TLS handshake error from 192.168.126.11:42776: no serving certificate available for the kubelet" Dec 08 17:44:43 crc kubenswrapper[5116]: I1208 17:44:43.199119 5116 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-network-diagnostics/network-check-target-fhkjl" Dec 08 17:44:43 crc kubenswrapper[5116]: I1208 17:44:43.232345 5116 patch_prober.go:28] interesting pod/controller-manager-78fb99b7f7-d4qxm container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.55:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Dec 08 17:44:43 crc kubenswrapper[5116]: I1208 17:44:43.232523 5116 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-78fb99b7f7-d4qxm" podUID="81a38bd9-d4e6-4f81-802e-9be60cfff94e" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.55:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Dec 08 17:44:43 crc kubenswrapper[5116]: I1208 17:44:43.233732 5116 kubelet.go:2544] 
"SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-12-crc"] Dec 08 17:44:43 crc kubenswrapper[5116]: I1208 17:44:43.537269 5116 generic.go:358] "Generic (PLEG): container finished" podID="20cb252f-d1e2-47a3-8655-c85d0ba4378e" containerID="5f5d27d585f3b24c468fb5a08140f0cc3ae2343255ce80ab73146929f20d61fa" exitCode=0 Dec 08 17:44:43 crc kubenswrapper[5116]: I1208 17:44:43.537509 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-58fcd699cc-88894" event={"ID":"20cb252f-d1e2-47a3-8655-c85d0ba4378e","Type":"ContainerDied","Data":"5f5d27d585f3b24c468fb5a08140f0cc3ae2343255ce80ab73146929f20d61fa"} Dec 08 17:44:43 crc kubenswrapper[5116]: I1208 17:44:43.538007 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-58fcd699cc-88894" event={"ID":"20cb252f-d1e2-47a3-8655-c85d0ba4378e","Type":"ContainerDied","Data":"67c8f688b9230269173a342e51db953c47727d74ba8387e1ff10796b6174577b"} Dec 08 17:44:43 crc kubenswrapper[5116]: I1208 17:44:43.538072 5116 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="67c8f688b9230269173a342e51db953c47727d74ba8387e1ff10796b6174577b" Dec 08 17:44:43 crc kubenswrapper[5116]: I1208 17:44:43.558224 5116 generic.go:358] "Generic (PLEG): container finished" podID="7d3964d8-860a-448a-ba5c-309e5343333e" containerID="045cb1725f3317df0938aa5154239ba1a7815130327cbee579eed677125a8080" exitCode=0 Dec 08 17:44:43 crc kubenswrapper[5116]: I1208 17:44:43.558327 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-f2qkz" event={"ID":"7d3964d8-860a-448a-ba5c-309e5343333e","Type":"ContainerDied","Data":"045cb1725f3317df0938aa5154239ba1a7815130327cbee579eed677125a8080"} Dec 08 17:44:43 crc kubenswrapper[5116]: I1208 17:44:43.562093 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/redhat-operators-7j2rd" event={"ID":"088af58f-5679-42e6-9595-945ee162f862","Type":"ContainerStarted","Data":"7d5bf1c127c46e8a507bd1cd59bee2653a692d969941694d3226047952bca532"} Dec 08 17:44:43 crc kubenswrapper[5116]: I1208 17:44:43.573699 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4pqqp" event={"ID":"ab873de1-8a57-4411-a552-1567537bdc67","Type":"ContainerStarted","Data":"bd22c53a9ce9fad43fbafe56982ecbce82d2148ce1afc9ce88b2b7d772ef52e0"} Dec 08 17:44:43 crc kubenswrapper[5116]: I1208 17:44:43.611445 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7rqg8" event={"ID":"0687a333-2a42-4237-9673-e0210c45dc22","Type":"ContainerStarted","Data":"346b15c51c9a64df05e190a7a1842fdb528612e1a7c1f65abdcc8e4a14c2ca8a"} Dec 08 17:44:43 crc kubenswrapper[5116]: I1208 17:44:43.642779 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-12-crc" event={"ID":"536aba6a-5306-4082-a15f-1cedcb79625b","Type":"ContainerStarted","Data":"15606af95be2bff65eecfaeb6c8ce4a348722f59b3ccb62aff6776dc2ba8590a"} Dec 08 17:44:43 crc kubenswrapper[5116]: I1208 17:44:43.650289 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-mvhzm" event={"ID":"d7968a24-caaf-4115-992d-3678c03e895a","Type":"ContainerStarted","Data":"2b4ce0c83cac6feef34082b5ad23675bdbe8b223d309d648f87e98cc4eed460b"} Dec 08 17:44:43 crc kubenswrapper[5116]: I1208 17:44:43.668749 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-nc4fk" event={"ID":"b15bd0e2-4143-436c-8dc2-0fc2e33cef62","Type":"ContainerStarted","Data":"547e19fbcdf0661c69c1a969bea40e171a346f5c072bb931aa3fbd809cba12d0"} Dec 08 17:44:43 crc kubenswrapper[5116]: I1208 17:44:43.879936 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-cdgp8" 
event={"ID":"5bc57600-20de-4fda-ba78-b05d745b08d6","Type":"ContainerStarted","Data":"bd5c78f9040b7dfef67c8489dee5cf56092c08b5e6aa74bad580a022e6420962"} Dec 08 17:44:43 crc kubenswrapper[5116]: I1208 17:44:43.880689 5116 patch_prober.go:28] interesting pod/downloads-747b44746d-4msk8 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.15:8080/\": dial tcp 10.217.0.15:8080: connect: connection refused" start-of-body= Dec 08 17:44:43 crc kubenswrapper[5116]: I1208 17:44:43.880809 5116 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-747b44746d-4msk8" podUID="e71c8014-5266-4483-8037-e8d9e7995c1b" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.15:8080/\": dial tcp 10.217.0.15:8080: connect: connection refused" Dec 08 17:44:44 crc kubenswrapper[5116]: E1208 17:44:44.335308 5116 cadvisor_stats_provider.go:525] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd7968a24_caaf_4115_992d_3678c03e895a.slice/crio-conmon-8e54c54fae30e2e41368ec97f94608548b57555392e3ca341ed0d1620fa34a36.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd7968a24_caaf_4115_992d_3678c03e895a.slice/crio-8e54c54fae30e2e41368ec97f94608548b57555392e3ca341ed0d1620fa34a36.scope\": RecentStats: unable to find data in memory cache]" Dec 08 17:44:44 crc kubenswrapper[5116]: I1208 17:44:44.946598 5116 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-controller-manager_controller-manager-78fb99b7f7-d4qxm_81a38bd9-d4e6-4f81-802e-9be60cfff94e/controller-manager/0.log" Dec 08 17:44:44 crc kubenswrapper[5116]: I1208 17:44:44.947005 5116 generic.go:358] "Generic (PLEG): container finished" podID="81a38bd9-d4e6-4f81-802e-9be60cfff94e" 
containerID="cca6018c2d852a7073eebb6ac7153f8e5ae717084342cdcabd2e78e9b08e81c5" exitCode=255 Dec 08 17:44:44 crc kubenswrapper[5116]: I1208 17:44:44.947112 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-78fb99b7f7-d4qxm" event={"ID":"81a38bd9-d4e6-4f81-802e-9be60cfff94e","Type":"ContainerDied","Data":"cca6018c2d852a7073eebb6ac7153f8e5ae717084342cdcabd2e78e9b08e81c5"} Dec 08 17:44:44 crc kubenswrapper[5116]: I1208 17:44:44.950235 5116 generic.go:358] "Generic (PLEG): container finished" podID="ab873de1-8a57-4411-a552-1567537bdc67" containerID="bd22c53a9ce9fad43fbafe56982ecbce82d2148ce1afc9ce88b2b7d772ef52e0" exitCode=0 Dec 08 17:44:44 crc kubenswrapper[5116]: I1208 17:44:44.953397 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4pqqp" event={"ID":"ab873de1-8a57-4411-a552-1567537bdc67","Type":"ContainerDied","Data":"bd22c53a9ce9fad43fbafe56982ecbce82d2148ce1afc9ce88b2b7d772ef52e0"} Dec 08 17:44:45 crc kubenswrapper[5116]: I1208 17:44:45.987049 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bnq4b" event={"ID":"46bc2c2e-fed2-4cf1-afc1-2fb750553bc5","Type":"ContainerStarted","Data":"b252ad98e404697abd089451f00b97b67a8626fe90380d36b5d7f40ffcc146b9"} Dec 08 17:44:45 crc kubenswrapper[5116]: I1208 17:44:45.989386 5116 generic.go:358] "Generic (PLEG): container finished" podID="0687a333-2a42-4237-9673-e0210c45dc22" containerID="346b15c51c9a64df05e190a7a1842fdb528612e1a7c1f65abdcc8e4a14c2ca8a" exitCode=0 Dec 08 17:44:45 crc kubenswrapper[5116]: I1208 17:44:45.989516 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7rqg8" event={"ID":"0687a333-2a42-4237-9673-e0210c45dc22","Type":"ContainerDied","Data":"346b15c51c9a64df05e190a7a1842fdb528612e1a7c1f65abdcc8e4a14c2ca8a"} Dec 08 17:44:45 crc kubenswrapper[5116]: I1208 17:44:45.991967 5116 generic.go:358] "Generic 
(PLEG): container finished" podID="d7968a24-caaf-4115-992d-3678c03e895a" containerID="2b4ce0c83cac6feef34082b5ad23675bdbe8b223d309d648f87e98cc4eed460b" exitCode=0 Dec 08 17:44:45 crc kubenswrapper[5116]: I1208 17:44:45.992021 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-mvhzm" event={"ID":"d7968a24-caaf-4115-992d-3678c03e895a","Type":"ContainerDied","Data":"2b4ce0c83cac6feef34082b5ad23675bdbe8b223d309d648f87e98cc4eed460b"} Dec 08 17:44:46 crc kubenswrapper[5116]: I1208 17:44:45.999090 5116 generic.go:358] "Generic (PLEG): container finished" podID="b15bd0e2-4143-436c-8dc2-0fc2e33cef62" containerID="547e19fbcdf0661c69c1a969bea40e171a346f5c072bb931aa3fbd809cba12d0" exitCode=0 Dec 08 17:44:46 crc kubenswrapper[5116]: I1208 17:44:45.999298 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-nc4fk" event={"ID":"b15bd0e2-4143-436c-8dc2-0fc2e33cef62","Type":"ContainerDied","Data":"547e19fbcdf0661c69c1a969bea40e171a346f5c072bb931aa3fbd809cba12d0"} Dec 08 17:44:47 crc kubenswrapper[5116]: I1208 17:44:47.017899 5116 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-controller-manager_controller-manager-78fb99b7f7-d4qxm_81a38bd9-d4e6-4f81-802e-9be60cfff94e/controller-manager/0.log" Dec 08 17:44:47 crc kubenswrapper[5116]: I1208 17:44:47.019536 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-78fb99b7f7-d4qxm" event={"ID":"81a38bd9-d4e6-4f81-802e-9be60cfff94e","Type":"ContainerDied","Data":"7eba3d04e24d538799b674b28062445ad3d76dc881e2c505dc6695806aaaa73b"} Dec 08 17:44:47 crc kubenswrapper[5116]: I1208 17:44:47.019588 5116 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7eba3d04e24d538799b674b28062445ad3d76dc881e2c505dc6695806aaaa73b" Dec 08 17:44:47 crc kubenswrapper[5116]: I1208 17:44:47.260787 5116 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-58fcd699cc-88894" Dec 08 17:44:47 crc kubenswrapper[5116]: I1208 17:44:47.467179 5116 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-786b5d84c9-x6jpq"] Dec 08 17:44:47 crc kubenswrapper[5116]: I1208 17:44:47.467881 5116 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="20cb252f-d1e2-47a3-8655-c85d0ba4378e" containerName="route-controller-manager" Dec 08 17:44:47 crc kubenswrapper[5116]: I1208 17:44:47.467896 5116 state_mem.go:107] "Deleted CPUSet assignment" podUID="20cb252f-d1e2-47a3-8655-c85d0ba4378e" containerName="route-controller-manager" Dec 08 17:44:47 crc kubenswrapper[5116]: I1208 17:44:47.468029 5116 memory_manager.go:356] "RemoveStaleState removing state" podUID="20cb252f-d1e2-47a3-8655-c85d0ba4378e" containerName="route-controller-manager" Dec 08 17:44:47 crc kubenswrapper[5116]: I1208 17:44:47.478652 5116 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-786b5d84c9-x6jpq" Dec 08 17:44:47 crc kubenswrapper[5116]: I1208 17:44:47.482470 5116 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-controller-manager_controller-manager-78fb99b7f7-d4qxm_81a38bd9-d4e6-4f81-802e-9be60cfff94e/controller-manager/0.log" Dec 08 17:44:47 crc kubenswrapper[5116]: I1208 17:44:47.482569 5116 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-78fb99b7f7-d4qxm" Dec 08 17:44:47 crc kubenswrapper[5116]: I1208 17:44:47.548958 5116 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-786b5d84c9-x6jpq"] Dec 08 17:44:47 crc kubenswrapper[5116]: I1208 17:44:47.570556 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/20cb252f-d1e2-47a3-8655-c85d0ba4378e-client-ca\") pod \"20cb252f-d1e2-47a3-8655-c85d0ba4378e\" (UID: \"20cb252f-d1e2-47a3-8655-c85d0ba4378e\") " Dec 08 17:44:47 crc kubenswrapper[5116]: I1208 17:44:47.570613 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/20cb252f-d1e2-47a3-8655-c85d0ba4378e-config\") pod \"20cb252f-d1e2-47a3-8655-c85d0ba4378e\" (UID: \"20cb252f-d1e2-47a3-8655-c85d0ba4378e\") " Dec 08 17:44:47 crc kubenswrapper[5116]: I1208 17:44:47.570636 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qdlm9\" (UniqueName: \"kubernetes.io/projected/20cb252f-d1e2-47a3-8655-c85d0ba4378e-kube-api-access-qdlm9\") pod \"20cb252f-d1e2-47a3-8655-c85d0ba4378e\" (UID: \"20cb252f-d1e2-47a3-8655-c85d0ba4378e\") " Dec 08 17:44:47 crc kubenswrapper[5116]: I1208 17:44:47.570747 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/20cb252f-d1e2-47a3-8655-c85d0ba4378e-tmp\") pod \"20cb252f-d1e2-47a3-8655-c85d0ba4378e\" (UID: \"20cb252f-d1e2-47a3-8655-c85d0ba4378e\") " Dec 08 17:44:47 crc kubenswrapper[5116]: I1208 17:44:47.570784 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/20cb252f-d1e2-47a3-8655-c85d0ba4378e-serving-cert\") pod \"20cb252f-d1e2-47a3-8655-c85d0ba4378e\" (UID: 
\"20cb252f-d1e2-47a3-8655-c85d0ba4378e\") " Dec 08 17:44:47 crc kubenswrapper[5116]: I1208 17:44:47.578928 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/20cb252f-d1e2-47a3-8655-c85d0ba4378e-tmp" (OuterVolumeSpecName: "tmp") pod "20cb252f-d1e2-47a3-8655-c85d0ba4378e" (UID: "20cb252f-d1e2-47a3-8655-c85d0ba4378e"). InnerVolumeSpecName "tmp". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 08 17:44:47 crc kubenswrapper[5116]: I1208 17:44:47.579022 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/20cb252f-d1e2-47a3-8655-c85d0ba4378e-client-ca" (OuterVolumeSpecName: "client-ca") pod "20cb252f-d1e2-47a3-8655-c85d0ba4378e" (UID: "20cb252f-d1e2-47a3-8655-c85d0ba4378e"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 08 17:44:47 crc kubenswrapper[5116]: I1208 17:44:47.579323 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/20cb252f-d1e2-47a3-8655-c85d0ba4378e-config" (OuterVolumeSpecName: "config") pod "20cb252f-d1e2-47a3-8655-c85d0ba4378e" (UID: "20cb252f-d1e2-47a3-8655-c85d0ba4378e"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 08 17:44:47 crc kubenswrapper[5116]: I1208 17:44:47.609704 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20cb252f-d1e2-47a3-8655-c85d0ba4378e-kube-api-access-qdlm9" (OuterVolumeSpecName: "kube-api-access-qdlm9") pod "20cb252f-d1e2-47a3-8655-c85d0ba4378e" (UID: "20cb252f-d1e2-47a3-8655-c85d0ba4378e"). InnerVolumeSpecName "kube-api-access-qdlm9". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 08 17:44:47 crc kubenswrapper[5116]: I1208 17:44:47.630169 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20cb252f-d1e2-47a3-8655-c85d0ba4378e-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "20cb252f-d1e2-47a3-8655-c85d0ba4378e" (UID: "20cb252f-d1e2-47a3-8655-c85d0ba4378e"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 08 17:44:47 crc kubenswrapper[5116]: I1208 17:44:47.643623 5116 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-6f64f4648c-tg69j"] Dec 08 17:44:47 crc kubenswrapper[5116]: I1208 17:44:47.651878 5116 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="81a38bd9-d4e6-4f81-802e-9be60cfff94e" containerName="controller-manager" Dec 08 17:44:47 crc kubenswrapper[5116]: I1208 17:44:47.651936 5116 state_mem.go:107] "Deleted CPUSet assignment" podUID="81a38bd9-d4e6-4f81-802e-9be60cfff94e" containerName="controller-manager" Dec 08 17:44:47 crc kubenswrapper[5116]: I1208 17:44:47.652219 5116 memory_manager.go:356] "RemoveStaleState removing state" podUID="81a38bd9-d4e6-4f81-802e-9be60cfff94e" containerName="controller-manager" Dec 08 17:44:47 crc kubenswrapper[5116]: I1208 17:44:47.658786 5116 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-6f64f4648c-tg69j"] Dec 08 17:44:47 crc kubenswrapper[5116]: I1208 17:44:47.659053 5116 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-6f64f4648c-tg69j" Dec 08 17:44:47 crc kubenswrapper[5116]: I1208 17:44:47.673487 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/81a38bd9-d4e6-4f81-802e-9be60cfff94e-client-ca\") pod \"81a38bd9-d4e6-4f81-802e-9be60cfff94e\" (UID: \"81a38bd9-d4e6-4f81-802e-9be60cfff94e\") " Dec 08 17:44:47 crc kubenswrapper[5116]: I1208 17:44:47.673673 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/81a38bd9-d4e6-4f81-802e-9be60cfff94e-proxy-ca-bundles\") pod \"81a38bd9-d4e6-4f81-802e-9be60cfff94e\" (UID: \"81a38bd9-d4e6-4f81-802e-9be60cfff94e\") " Dec 08 17:44:47 crc kubenswrapper[5116]: I1208 17:44:47.673765 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gx5l5\" (UniqueName: \"kubernetes.io/projected/81a38bd9-d4e6-4f81-802e-9be60cfff94e-kube-api-access-gx5l5\") pod \"81a38bd9-d4e6-4f81-802e-9be60cfff94e\" (UID: \"81a38bd9-d4e6-4f81-802e-9be60cfff94e\") " Dec 08 17:44:47 crc kubenswrapper[5116]: I1208 17:44:47.673831 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/81a38bd9-d4e6-4f81-802e-9be60cfff94e-tmp\") pod \"81a38bd9-d4e6-4f81-802e-9be60cfff94e\" (UID: \"81a38bd9-d4e6-4f81-802e-9be60cfff94e\") " Dec 08 17:44:47 crc kubenswrapper[5116]: I1208 17:44:47.673896 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/81a38bd9-d4e6-4f81-802e-9be60cfff94e-serving-cert\") pod \"81a38bd9-d4e6-4f81-802e-9be60cfff94e\" (UID: \"81a38bd9-d4e6-4f81-802e-9be60cfff94e\") " Dec 08 17:44:47 crc kubenswrapper[5116]: I1208 17:44:47.673918 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume 
\"config\" (UniqueName: \"kubernetes.io/configmap/81a38bd9-d4e6-4f81-802e-9be60cfff94e-config\") pod \"81a38bd9-d4e6-4f81-802e-9be60cfff94e\" (UID: \"81a38bd9-d4e6-4f81-802e-9be60cfff94e\") " Dec 08 17:44:47 crc kubenswrapper[5116]: I1208 17:44:47.674219 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vw82g\" (UniqueName: \"kubernetes.io/projected/afdd6854-7534-47d5-86ac-08648aec89c3-kube-api-access-vw82g\") pod \"route-controller-manager-786b5d84c9-x6jpq\" (UID: \"afdd6854-7534-47d5-86ac-08648aec89c3\") " pod="openshift-route-controller-manager/route-controller-manager-786b5d84c9-x6jpq" Dec 08 17:44:47 crc kubenswrapper[5116]: I1208 17:44:47.674311 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/afdd6854-7534-47d5-86ac-08648aec89c3-serving-cert\") pod \"route-controller-manager-786b5d84c9-x6jpq\" (UID: \"afdd6854-7534-47d5-86ac-08648aec89c3\") " pod="openshift-route-controller-manager/route-controller-manager-786b5d84c9-x6jpq" Dec 08 17:44:47 crc kubenswrapper[5116]: I1208 17:44:47.674368 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/afdd6854-7534-47d5-86ac-08648aec89c3-config\") pod \"route-controller-manager-786b5d84c9-x6jpq\" (UID: \"afdd6854-7534-47d5-86ac-08648aec89c3\") " pod="openshift-route-controller-manager/route-controller-manager-786b5d84c9-x6jpq" Dec 08 17:44:47 crc kubenswrapper[5116]: I1208 17:44:47.674387 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/afdd6854-7534-47d5-86ac-08648aec89c3-tmp\") pod \"route-controller-manager-786b5d84c9-x6jpq\" (UID: \"afdd6854-7534-47d5-86ac-08648aec89c3\") " pod="openshift-route-controller-manager/route-controller-manager-786b5d84c9-x6jpq" 
Dec 08 17:44:47 crc kubenswrapper[5116]: I1208 17:44:47.674445 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/afdd6854-7534-47d5-86ac-08648aec89c3-client-ca\") pod \"route-controller-manager-786b5d84c9-x6jpq\" (UID: \"afdd6854-7534-47d5-86ac-08648aec89c3\") " pod="openshift-route-controller-manager/route-controller-manager-786b5d84c9-x6jpq" Dec 08 17:44:47 crc kubenswrapper[5116]: I1208 17:44:47.674527 5116 reconciler_common.go:299] "Volume detached for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/20cb252f-d1e2-47a3-8655-c85d0ba4378e-tmp\") on node \"crc\" DevicePath \"\"" Dec 08 17:44:47 crc kubenswrapper[5116]: I1208 17:44:47.674543 5116 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/20cb252f-d1e2-47a3-8655-c85d0ba4378e-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 08 17:44:47 crc kubenswrapper[5116]: I1208 17:44:47.674554 5116 reconciler_common.go:299] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/20cb252f-d1e2-47a3-8655-c85d0ba4378e-client-ca\") on node \"crc\" DevicePath \"\"" Dec 08 17:44:47 crc kubenswrapper[5116]: I1208 17:44:47.674564 5116 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/20cb252f-d1e2-47a3-8655-c85d0ba4378e-config\") on node \"crc\" DevicePath \"\"" Dec 08 17:44:47 crc kubenswrapper[5116]: I1208 17:44:47.674574 5116 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-qdlm9\" (UniqueName: \"kubernetes.io/projected/20cb252f-d1e2-47a3-8655-c85d0ba4378e-kube-api-access-qdlm9\") on node \"crc\" DevicePath \"\"" Dec 08 17:44:47 crc kubenswrapper[5116]: I1208 17:44:47.676286 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/81a38bd9-d4e6-4f81-802e-9be60cfff94e-client-ca" (OuterVolumeSpecName: "client-ca") pod 
"81a38bd9-d4e6-4f81-802e-9be60cfff94e" (UID: "81a38bd9-d4e6-4f81-802e-9be60cfff94e"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 08 17:44:47 crc kubenswrapper[5116]: I1208 17:44:47.676359 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/81a38bd9-d4e6-4f81-802e-9be60cfff94e-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "81a38bd9-d4e6-4f81-802e-9be60cfff94e" (UID: "81a38bd9-d4e6-4f81-802e-9be60cfff94e"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 08 17:44:47 crc kubenswrapper[5116]: I1208 17:44:47.677696 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/81a38bd9-d4e6-4f81-802e-9be60cfff94e-tmp" (OuterVolumeSpecName: "tmp") pod "81a38bd9-d4e6-4f81-802e-9be60cfff94e" (UID: "81a38bd9-d4e6-4f81-802e-9be60cfff94e"). InnerVolumeSpecName "tmp". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 08 17:44:47 crc kubenswrapper[5116]: I1208 17:44:47.677953 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/81a38bd9-d4e6-4f81-802e-9be60cfff94e-config" (OuterVolumeSpecName: "config") pod "81a38bd9-d4e6-4f81-802e-9be60cfff94e" (UID: "81a38bd9-d4e6-4f81-802e-9be60cfff94e"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 08 17:44:47 crc kubenswrapper[5116]: I1208 17:44:47.736208 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/81a38bd9-d4e6-4f81-802e-9be60cfff94e-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "81a38bd9-d4e6-4f81-802e-9be60cfff94e" (UID: "81a38bd9-d4e6-4f81-802e-9be60cfff94e"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 08 17:44:47 crc kubenswrapper[5116]: I1208 17:44:47.737516 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/81a38bd9-d4e6-4f81-802e-9be60cfff94e-kube-api-access-gx5l5" (OuterVolumeSpecName: "kube-api-access-gx5l5") pod "81a38bd9-d4e6-4f81-802e-9be60cfff94e" (UID: "81a38bd9-d4e6-4f81-802e-9be60cfff94e"). InnerVolumeSpecName "kube-api-access-gx5l5". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 08 17:44:47 crc kubenswrapper[5116]: I1208 17:44:47.789554 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/afdd6854-7534-47d5-86ac-08648aec89c3-client-ca\") pod \"route-controller-manager-786b5d84c9-x6jpq\" (UID: \"afdd6854-7534-47d5-86ac-08648aec89c3\") " pod="openshift-route-controller-manager/route-controller-manager-786b5d84c9-x6jpq" Dec 08 17:44:47 crc kubenswrapper[5116]: I1208 17:44:47.789613 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/e4bba62a-9467-4ab0-a05c-84c516e7313a-tmp\") pod \"controller-manager-6f64f4648c-tg69j\" (UID: \"e4bba62a-9467-4ab0-a05c-84c516e7313a\") " pod="openshift-controller-manager/controller-manager-6f64f4648c-tg69j" Dec 08 17:44:47 crc kubenswrapper[5116]: I1208 17:44:47.789704 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/e4bba62a-9467-4ab0-a05c-84c516e7313a-client-ca\") pod \"controller-manager-6f64f4648c-tg69j\" (UID: \"e4bba62a-9467-4ab0-a05c-84c516e7313a\") " pod="openshift-controller-manager/controller-manager-6f64f4648c-tg69j" Dec 08 17:44:47 crc kubenswrapper[5116]: I1208 17:44:47.789733 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-vw82g\" (UniqueName: 
\"kubernetes.io/projected/afdd6854-7534-47d5-86ac-08648aec89c3-kube-api-access-vw82g\") pod \"route-controller-manager-786b5d84c9-x6jpq\" (UID: \"afdd6854-7534-47d5-86ac-08648aec89c3\") " pod="openshift-route-controller-manager/route-controller-manager-786b5d84c9-x6jpq" Dec 08 17:44:47 crc kubenswrapper[5116]: I1208 17:44:47.789767 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-79g2g\" (UniqueName: \"kubernetes.io/projected/e4bba62a-9467-4ab0-a05c-84c516e7313a-kube-api-access-79g2g\") pod \"controller-manager-6f64f4648c-tg69j\" (UID: \"e4bba62a-9467-4ab0-a05c-84c516e7313a\") " pod="openshift-controller-manager/controller-manager-6f64f4648c-tg69j" Dec 08 17:44:47 crc kubenswrapper[5116]: I1208 17:44:47.789817 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/afdd6854-7534-47d5-86ac-08648aec89c3-serving-cert\") pod \"route-controller-manager-786b5d84c9-x6jpq\" (UID: \"afdd6854-7534-47d5-86ac-08648aec89c3\") " pod="openshift-route-controller-manager/route-controller-manager-786b5d84c9-x6jpq" Dec 08 17:44:47 crc kubenswrapper[5116]: I1208 17:44:47.789843 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/e4bba62a-9467-4ab0-a05c-84c516e7313a-proxy-ca-bundles\") pod \"controller-manager-6f64f4648c-tg69j\" (UID: \"e4bba62a-9467-4ab0-a05c-84c516e7313a\") " pod="openshift-controller-manager/controller-manager-6f64f4648c-tg69j" Dec 08 17:44:47 crc kubenswrapper[5116]: I1208 17:44:47.789878 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e4bba62a-9467-4ab0-a05c-84c516e7313a-config\") pod \"controller-manager-6f64f4648c-tg69j\" (UID: \"e4bba62a-9467-4ab0-a05c-84c516e7313a\") " 
pod="openshift-controller-manager/controller-manager-6f64f4648c-tg69j"
Dec 08 17:44:47 crc kubenswrapper[5116]: I1208 17:44:47.789931 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e4bba62a-9467-4ab0-a05c-84c516e7313a-serving-cert\") pod \"controller-manager-6f64f4648c-tg69j\" (UID: \"e4bba62a-9467-4ab0-a05c-84c516e7313a\") " pod="openshift-controller-manager/controller-manager-6f64f4648c-tg69j"
Dec 08 17:44:47 crc kubenswrapper[5116]: I1208 17:44:47.789987 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/afdd6854-7534-47d5-86ac-08648aec89c3-config\") pod \"route-controller-manager-786b5d84c9-x6jpq\" (UID: \"afdd6854-7534-47d5-86ac-08648aec89c3\") " pod="openshift-route-controller-manager/route-controller-manager-786b5d84c9-x6jpq"
Dec 08 17:44:47 crc kubenswrapper[5116]: I1208 17:44:47.790014 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/afdd6854-7534-47d5-86ac-08648aec89c3-tmp\") pod \"route-controller-manager-786b5d84c9-x6jpq\" (UID: \"afdd6854-7534-47d5-86ac-08648aec89c3\") " pod="openshift-route-controller-manager/route-controller-manager-786b5d84c9-x6jpq"
Dec 08 17:44:47 crc kubenswrapper[5116]: I1208 17:44:47.790065 5116 reconciler_common.go:299] "Volume detached for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/81a38bd9-d4e6-4f81-802e-9be60cfff94e-tmp\") on node \"crc\" DevicePath \"\""
Dec 08 17:44:47 crc kubenswrapper[5116]: I1208 17:44:47.790081 5116 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/81a38bd9-d4e6-4f81-802e-9be60cfff94e-serving-cert\") on node \"crc\" DevicePath \"\""
Dec 08 17:44:47 crc kubenswrapper[5116]: I1208 17:44:47.790094 5116 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/81a38bd9-d4e6-4f81-802e-9be60cfff94e-config\") on node \"crc\" DevicePath \"\""
Dec 08 17:44:47 crc kubenswrapper[5116]: I1208 17:44:47.790106 5116 reconciler_common.go:299] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/81a38bd9-d4e6-4f81-802e-9be60cfff94e-client-ca\") on node \"crc\" DevicePath \"\""
Dec 08 17:44:47 crc kubenswrapper[5116]: I1208 17:44:47.790119 5116 reconciler_common.go:299] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/81a38bd9-d4e6-4f81-802e-9be60cfff94e-proxy-ca-bundles\") on node \"crc\" DevicePath \"\""
Dec 08 17:44:47 crc kubenswrapper[5116]: I1208 17:44:47.790132 5116 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-gx5l5\" (UniqueName: \"kubernetes.io/projected/81a38bd9-d4e6-4f81-802e-9be60cfff94e-kube-api-access-gx5l5\") on node \"crc\" DevicePath \"\""
Dec 08 17:44:47 crc kubenswrapper[5116]: I1208 17:44:47.790728 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/afdd6854-7534-47d5-86ac-08648aec89c3-tmp\") pod \"route-controller-manager-786b5d84c9-x6jpq\" (UID: \"afdd6854-7534-47d5-86ac-08648aec89c3\") " pod="openshift-route-controller-manager/route-controller-manager-786b5d84c9-x6jpq"
Dec 08 17:44:47 crc kubenswrapper[5116]: I1208 17:44:47.792645 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/afdd6854-7534-47d5-86ac-08648aec89c3-client-ca\") pod \"route-controller-manager-786b5d84c9-x6jpq\" (UID: \"afdd6854-7534-47d5-86ac-08648aec89c3\") " pod="openshift-route-controller-manager/route-controller-manager-786b5d84c9-x6jpq"
Dec 08 17:44:47 crc kubenswrapper[5116]: I1208 17:44:47.794080 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/afdd6854-7534-47d5-86ac-08648aec89c3-config\") pod \"route-controller-manager-786b5d84c9-x6jpq\" (UID: \"afdd6854-7534-47d5-86ac-08648aec89c3\") " pod="openshift-route-controller-manager/route-controller-manager-786b5d84c9-x6jpq"
Dec 08 17:44:47 crc kubenswrapper[5116]: I1208 17:44:47.802186 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/afdd6854-7534-47d5-86ac-08648aec89c3-serving-cert\") pod \"route-controller-manager-786b5d84c9-x6jpq\" (UID: \"afdd6854-7534-47d5-86ac-08648aec89c3\") " pod="openshift-route-controller-manager/route-controller-manager-786b5d84c9-x6jpq"
Dec 08 17:44:47 crc kubenswrapper[5116]: I1208 17:44:47.820819 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-vw82g\" (UniqueName: \"kubernetes.io/projected/afdd6854-7534-47d5-86ac-08648aec89c3-kube-api-access-vw82g\") pod \"route-controller-manager-786b5d84c9-x6jpq\" (UID: \"afdd6854-7534-47d5-86ac-08648aec89c3\") " pod="openshift-route-controller-manager/route-controller-manager-786b5d84c9-x6jpq"
Dec 08 17:44:47 crc kubenswrapper[5116]: I1208 17:44:47.890534 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/e4bba62a-9467-4ab0-a05c-84c516e7313a-client-ca\") pod \"controller-manager-6f64f4648c-tg69j\" (UID: \"e4bba62a-9467-4ab0-a05c-84c516e7313a\") " pod="openshift-controller-manager/controller-manager-6f64f4648c-tg69j"
Dec 08 17:44:47 crc kubenswrapper[5116]: I1208 17:44:47.890606 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-79g2g\" (UniqueName: \"kubernetes.io/projected/e4bba62a-9467-4ab0-a05c-84c516e7313a-kube-api-access-79g2g\") pod \"controller-manager-6f64f4648c-tg69j\" (UID: \"e4bba62a-9467-4ab0-a05c-84c516e7313a\") " pod="openshift-controller-manager/controller-manager-6f64f4648c-tg69j"
Dec 08 17:44:47 crc kubenswrapper[5116]: I1208 17:44:47.891044 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/e4bba62a-9467-4ab0-a05c-84c516e7313a-proxy-ca-bundles\") pod \"controller-manager-6f64f4648c-tg69j\" (UID: \"e4bba62a-9467-4ab0-a05c-84c516e7313a\") " pod="openshift-controller-manager/controller-manager-6f64f4648c-tg69j"
Dec 08 17:44:47 crc kubenswrapper[5116]: I1208 17:44:47.891135 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e4bba62a-9467-4ab0-a05c-84c516e7313a-config\") pod \"controller-manager-6f64f4648c-tg69j\" (UID: \"e4bba62a-9467-4ab0-a05c-84c516e7313a\") " pod="openshift-controller-manager/controller-manager-6f64f4648c-tg69j"
Dec 08 17:44:47 crc kubenswrapper[5116]: I1208 17:44:47.891195 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e4bba62a-9467-4ab0-a05c-84c516e7313a-serving-cert\") pod \"controller-manager-6f64f4648c-tg69j\" (UID: \"e4bba62a-9467-4ab0-a05c-84c516e7313a\") " pod="openshift-controller-manager/controller-manager-6f64f4648c-tg69j"
Dec 08 17:44:47 crc kubenswrapper[5116]: I1208 17:44:47.891408 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/e4bba62a-9467-4ab0-a05c-84c516e7313a-tmp\") pod \"controller-manager-6f64f4648c-tg69j\" (UID: \"e4bba62a-9467-4ab0-a05c-84c516e7313a\") " pod="openshift-controller-manager/controller-manager-6f64f4648c-tg69j"
Dec 08 17:44:47 crc kubenswrapper[5116]: I1208 17:44:47.892182 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/e4bba62a-9467-4ab0-a05c-84c516e7313a-tmp\") pod \"controller-manager-6f64f4648c-tg69j\" (UID: \"e4bba62a-9467-4ab0-a05c-84c516e7313a\") " pod="openshift-controller-manager/controller-manager-6f64f4648c-tg69j"
Dec 08 17:44:47 crc kubenswrapper[5116]: I1208 17:44:47.892421 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/e4bba62a-9467-4ab0-a05c-84c516e7313a-client-ca\") pod \"controller-manager-6f64f4648c-tg69j\" (UID: \"e4bba62a-9467-4ab0-a05c-84c516e7313a\") " pod="openshift-controller-manager/controller-manager-6f64f4648c-tg69j"
Dec 08 17:44:47 crc kubenswrapper[5116]: I1208 17:44:47.893949 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e4bba62a-9467-4ab0-a05c-84c516e7313a-config\") pod \"controller-manager-6f64f4648c-tg69j\" (UID: \"e4bba62a-9467-4ab0-a05c-84c516e7313a\") " pod="openshift-controller-manager/controller-manager-6f64f4648c-tg69j"
Dec 08 17:44:47 crc kubenswrapper[5116]: I1208 17:44:47.895669 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e4bba62a-9467-4ab0-a05c-84c516e7313a-serving-cert\") pod \"controller-manager-6f64f4648c-tg69j\" (UID: \"e4bba62a-9467-4ab0-a05c-84c516e7313a\") " pod="openshift-controller-manager/controller-manager-6f64f4648c-tg69j"
Dec 08 17:44:47 crc kubenswrapper[5116]: I1208 17:44:47.913743 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-79g2g\" (UniqueName: \"kubernetes.io/projected/e4bba62a-9467-4ab0-a05c-84c516e7313a-kube-api-access-79g2g\") pod \"controller-manager-6f64f4648c-tg69j\" (UID: \"e4bba62a-9467-4ab0-a05c-84c516e7313a\") " pod="openshift-controller-manager/controller-manager-6f64f4648c-tg69j"
Dec 08 17:44:47 crc kubenswrapper[5116]: I1208 17:44:47.939669 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/e4bba62a-9467-4ab0-a05c-84c516e7313a-proxy-ca-bundles\") pod \"controller-manager-6f64f4648c-tg69j\" (UID: \"e4bba62a-9467-4ab0-a05c-84c516e7313a\") " pod="openshift-controller-manager/controller-manager-6f64f4648c-tg69j"
Dec 08 17:44:48 crc kubenswrapper[5116]: I1208 17:44:48.047149 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-f2qkz" event={"ID":"7d3964d8-860a-448a-ba5c-309e5343333e","Type":"ContainerStarted","Data":"208b1276c760bcf7429742d0f17cbd7ab119b7b789f9630446eca88427720a21"}
Dec 08 17:44:48 crc kubenswrapper[5116]: I1208 17:44:48.052873 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4pqqp" event={"ID":"ab873de1-8a57-4411-a552-1567537bdc67","Type":"ContainerStarted","Data":"1d87b3dfeb1c61b3b1e8332b268402ea1366d39a04a2d3c5c79986b2a82844d7"}
Dec 08 17:44:48 crc kubenswrapper[5116]: I1208 17:44:48.057045 5116 generic.go:358] "Generic (PLEG): container finished" podID="46bc2c2e-fed2-4cf1-afc1-2fb750553bc5" containerID="b252ad98e404697abd089451f00b97b67a8626fe90380d36b5d7f40ffcc146b9" exitCode=0
Dec 08 17:44:48 crc kubenswrapper[5116]: I1208 17:44:48.057123 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bnq4b" event={"ID":"46bc2c2e-fed2-4cf1-afc1-2fb750553bc5","Type":"ContainerDied","Data":"b252ad98e404697abd089451f00b97b67a8626fe90380d36b5d7f40ffcc146b9"}
Dec 08 17:44:48 crc kubenswrapper[5116]: I1208 17:44:48.067652 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7rqg8" event={"ID":"0687a333-2a42-4237-9673-e0210c45dc22","Type":"ContainerStarted","Data":"499e6ca5edb8c4d5175a2a9602e68db6a6ace9bd9a386e6ee1f6a1899481b83b"}
Dec 08 17:44:48 crc kubenswrapper[5116]: I1208 17:44:48.079496 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-mvhzm" event={"ID":"d7968a24-caaf-4115-992d-3678c03e895a","Type":"ContainerStarted","Data":"2a3175be59ba111f8928d63fbef1cbf6bf1d9204be367f6ae8c56d710b262a1a"}
Dec 08 17:44:48 crc kubenswrapper[5116]: I1208 17:44:48.082934 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-nc4fk" event={"ID":"b15bd0e2-4143-436c-8dc2-0fc2e33cef62","Type":"ContainerStarted","Data":"583a5e3fce9b33b3dc2f2446f5b35c3e3da2fad48429f93c4fa340739dcc6f82"}
Dec 08 17:44:48 crc kubenswrapper[5116]: I1208 17:44:48.083019 5116 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-58fcd699cc-88894"
Dec 08 17:44:48 crc kubenswrapper[5116]: I1208 17:44:48.083344 5116 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-78fb99b7f7-d4qxm"
Dec 08 17:44:48 crc kubenswrapper[5116]: I1208 17:44:48.092488 5116 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-786b5d84c9-x6jpq"
Dec 08 17:44:48 crc kubenswrapper[5116]: I1208 17:44:48.096764 5116 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-f2qkz" podStartSLOduration=13.103223519 podStartE2EDuration="41.096742943s" podCreationTimestamp="2025-12-08 17:44:07 +0000 UTC" firstStartedPulling="2025-12-08 17:44:13.14119828 +0000 UTC m=+122.938321514" lastFinishedPulling="2025-12-08 17:44:41.134717704 +0000 UTC m=+150.931840938" observedRunningTime="2025-12-08 17:44:48.093187249 +0000 UTC m=+157.890310483" watchObservedRunningTime="2025-12-08 17:44:48.096742943 +0000 UTC m=+157.893866177"
Dec 08 17:44:48 crc kubenswrapper[5116]: I1208 17:44:48.123492 5116 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6f64f4648c-tg69j"
Dec 08 17:44:48 crc kubenswrapper[5116]: I1208 17:44:48.158667 5116 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-7rqg8" podStartSLOduration=13.425754973 podStartE2EDuration="43.158652908s" podCreationTimestamp="2025-12-08 17:44:05 +0000 UTC" firstStartedPulling="2025-12-08 17:44:11.713843073 +0000 UTC m=+121.510966297" lastFinishedPulling="2025-12-08 17:44:41.446740998 +0000 UTC m=+151.243864232" observedRunningTime="2025-12-08 17:44:48.115224468 +0000 UTC m=+157.912347702" watchObservedRunningTime="2025-12-08 17:44:48.158652908 +0000 UTC m=+157.955776142"
Dec 08 17:44:48 crc kubenswrapper[5116]: I1208 17:44:48.160270 5116 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-mvhzm" podStartSLOduration=13.592443331 podStartE2EDuration="43.16026441s" podCreationTimestamp="2025-12-08 17:44:05 +0000 UTC" firstStartedPulling="2025-12-08 17:44:11.857906907 +0000 UTC m=+121.655030141" lastFinishedPulling="2025-12-08 17:44:41.425727986 +0000 UTC m=+151.222851220" observedRunningTime="2025-12-08 17:44:48.1583829 +0000 UTC m=+157.955506154" watchObservedRunningTime="2025-12-08 17:44:48.16026441 +0000 UTC m=+157.957387644"
Dec 08 17:44:48 crc kubenswrapper[5116]: I1208 17:44:48.171850 5116 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-4pqqp"
Dec 08 17:44:48 crc kubenswrapper[5116]: I1208 17:44:48.171905 5116 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/redhat-marketplace-4pqqp"
Dec 08 17:44:48 crc kubenswrapper[5116]: I1208 17:44:48.408535 5116 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/redhat-marketplace-f2qkz"
Dec 08 17:44:48 crc kubenswrapper[5116]: I1208 17:44:48.408746 5116 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-f2qkz"
Dec 08 17:44:48 crc kubenswrapper[5116]: I1208 17:44:48.426502 5116 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-4pqqp" podStartSLOduration=13.138336996 podStartE2EDuration="41.426483411s" podCreationTimestamp="2025-12-08 17:44:07 +0000 UTC" firstStartedPulling="2025-12-08 17:44:13.158538421 +0000 UTC m=+122.955661655" lastFinishedPulling="2025-12-08 17:44:41.446684836 +0000 UTC m=+151.243808070" observedRunningTime="2025-12-08 17:44:48.421873078 +0000 UTC m=+158.218996312" watchObservedRunningTime="2025-12-08 17:44:48.426483411 +0000 UTC m=+158.223606645"
Dec 08 17:44:48 crc kubenswrapper[5116]: I1208 17:44:48.462881 5116 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/revision-pruner-12-crc" podStartSLOduration=7.462844792 podStartE2EDuration="7.462844792s" podCreationTimestamp="2025-12-08 17:44:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-08 17:44:48.447353768 +0000 UTC m=+158.244477012" watchObservedRunningTime="2025-12-08 17:44:48.462844792 +0000 UTC m=+158.259968026"
Dec 08 17:44:48 crc kubenswrapper[5116]: I1208 17:44:48.496917 5116 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-58fcd699cc-88894"]
Dec 08 17:44:48 crc kubenswrapper[5116]: I1208 17:44:48.501179 5116 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-58fcd699cc-88894"]
Dec 08 17:44:48 crc kubenswrapper[5116]: I1208 17:44:48.518009 5116 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-78fb99b7f7-d4qxm"]
Dec 08 17:44:48 crc kubenswrapper[5116]: I1208 17:44:48.520156 5116 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-78fb99b7f7-d4qxm"]
Dec 08 17:44:48 crc kubenswrapper[5116]: I1208 17:44:48.540996 5116 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-nc4fk" podStartSLOduration=14.973797664 podStartE2EDuration="44.540968899s" podCreationTimestamp="2025-12-08 17:44:04 +0000 UTC" firstStartedPulling="2025-12-08 17:44:11.862681902 +0000 UTC m=+121.659805136" lastFinishedPulling="2025-12-08 17:44:41.429853137 +0000 UTC m=+151.226976371" observedRunningTime="2025-12-08 17:44:48.536003216 +0000 UTC m=+158.333126450" watchObservedRunningTime="2025-12-08 17:44:48.540968899 +0000 UTC m=+158.338092133"
Dec 08 17:44:48 crc kubenswrapper[5116]: I1208 17:44:48.710604 5116 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="20cb252f-d1e2-47a3-8655-c85d0ba4378e" path="/var/lib/kubelet/pods/20cb252f-d1e2-47a3-8655-c85d0ba4378e/volumes"
Dec 08 17:44:48 crc kubenswrapper[5116]: I1208 17:44:48.712018 5116 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="81a38bd9-d4e6-4f81-802e-9be60cfff94e" path="/var/lib/kubelet/pods/81a38bd9-d4e6-4f81-802e-9be60cfff94e/volumes"
Dec 08 17:44:49 crc kubenswrapper[5116]: I1208 17:44:49.151946 5116 generic.go:358] "Generic (PLEG): container finished" podID="088af58f-5679-42e6-9595-945ee162f862" containerID="7d5bf1c127c46e8a507bd1cd59bee2653a692d969941694d3226047952bca532" exitCode=0
Dec 08 17:44:49 crc kubenswrapper[5116]: I1208 17:44:49.152165 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7j2rd" event={"ID":"088af58f-5679-42e6-9595-945ee162f862","Type":"ContainerDied","Data":"7d5bf1c127c46e8a507bd1cd59bee2653a692d969941694d3226047952bca532"}
Dec 08 17:44:49 crc kubenswrapper[5116]: I1208 17:44:49.173381 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-12-crc" event={"ID":"536aba6a-5306-4082-a15f-1cedcb79625b","Type":"ContainerStarted","Data":"d3127f937d3db0a2eaac946001fcfc04e8d6ce55391f9e32906ba41ea753d652"}
Dec 08 17:44:49 crc kubenswrapper[5116]: I1208 17:44:49.262367 5116 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-786b5d84c9-x6jpq"]
Dec 08 17:44:49 crc kubenswrapper[5116]: W1208 17:44:49.282997 5116 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podafdd6854_7534_47d5_86ac_08648aec89c3.slice/crio-0426f33505a3c1e0f20ddc0f20fe6b55cd0990ed779fb6a074003ae4363ec36a WatchSource:0}: Error finding container 0426f33505a3c1e0f20ddc0f20fe6b55cd0990ed779fb6a074003ae4363ec36a: Status 404 returned error can't find the container with id 0426f33505a3c1e0f20ddc0f20fe6b55cd0990ed779fb6a074003ae4363ec36a
Dec 08 17:44:49 crc kubenswrapper[5116]: I1208 17:44:49.326712 5116 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-6f64f4648c-tg69j"]
Dec 08 17:44:49 crc kubenswrapper[5116]: W1208 17:44:49.334048 5116 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode4bba62a_9467_4ab0_a05c_84c516e7313a.slice/crio-d5d178a70dd9d8460c9055516c41fc81f9fe039d804c471bd0a04d022bd1a0b9 WatchSource:0}: Error finding container d5d178a70dd9d8460c9055516c41fc81f9fe039d804c471bd0a04d022bd1a0b9: Status 404 returned error can't find the container with id d5d178a70dd9d8460c9055516c41fc81f9fe039d804c471bd0a04d022bd1a0b9
Dec 08 17:44:49 crc kubenswrapper[5116]: I1208 17:44:49.542986 5116 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-marketplace-4pqqp" podUID="ab873de1-8a57-4411-a552-1567537bdc67" containerName="registry-server" probeResult="failure" output=<
Dec 08 17:44:49 crc kubenswrapper[5116]: timeout: failed to connect service ":50051" within 1s
Dec 08 17:44:49 crc kubenswrapper[5116]: >
Dec 08 17:44:49 crc kubenswrapper[5116]: I1208 17:44:49.568496 5116 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-marketplace-f2qkz" podUID="7d3964d8-860a-448a-ba5c-309e5343333e" containerName="registry-server" probeResult="failure" output=<
Dec 08 17:44:49 crc kubenswrapper[5116]: timeout: failed to connect service ":50051" within 1s
Dec 08 17:44:49 crc kubenswrapper[5116]: >
Dec 08 17:44:50 crc kubenswrapper[5116]: I1208 17:44:50.351958 5116 generic.go:358] "Generic (PLEG): container finished" podID="536aba6a-5306-4082-a15f-1cedcb79625b" containerID="d3127f937d3db0a2eaac946001fcfc04e8d6ce55391f9e32906ba41ea753d652" exitCode=0
Dec 08 17:44:50 crc kubenswrapper[5116]: I1208 17:44:50.353135 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-12-crc" event={"ID":"536aba6a-5306-4082-a15f-1cedcb79625b","Type":"ContainerDied","Data":"d3127f937d3db0a2eaac946001fcfc04e8d6ce55391f9e32906ba41ea753d652"}
Dec 08 17:44:50 crc kubenswrapper[5116]: I1208 17:44:50.356299 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-6f64f4648c-tg69j" event={"ID":"e4bba62a-9467-4ab0-a05c-84c516e7313a","Type":"ContainerStarted","Data":"d5d178a70dd9d8460c9055516c41fc81f9fe039d804c471bd0a04d022bd1a0b9"}
Dec 08 17:44:50 crc kubenswrapper[5116]: I1208 17:44:50.357634 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-786b5d84c9-x6jpq" event={"ID":"afdd6854-7534-47d5-86ac-08648aec89c3","Type":"ContainerStarted","Data":"0426f33505a3c1e0f20ddc0f20fe6b55cd0990ed779fb6a074003ae4363ec36a"}
Dec 08 17:44:50 crc kubenswrapper[5116]: I1208 17:44:50.361751 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bnq4b" event={"ID":"46bc2c2e-fed2-4cf1-afc1-2fb750553bc5","Type":"ContainerStarted","Data":"215fefac8458ae8270ff609404abd09f83a6bbb6ff9566788cc0593c35b8a6a5"}
Dec 08 17:44:50 crc kubenswrapper[5116]: I1208 17:44:50.439053 5116 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-bnq4b" podStartSLOduration=16.451974106 podStartE2EDuration="46.438984153s" podCreationTimestamp="2025-12-08 17:44:04 +0000 UTC" firstStartedPulling="2025-12-08 17:44:11.613809976 +0000 UTC m=+121.410933210" lastFinishedPulling="2025-12-08 17:44:41.600820033 +0000 UTC m=+151.397943257" observedRunningTime="2025-12-08 17:44:50.435847149 +0000 UTC m=+160.232970383" watchObservedRunningTime="2025-12-08 17:44:50.438984153 +0000 UTC m=+160.236107387"
Dec 08 17:44:50 crc kubenswrapper[5116]: I1208 17:44:50.982073 5116 patch_prober.go:28] interesting pod/downloads-747b44746d-4msk8 container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.15:8080/\": dial tcp 10.217.0.15:8080: connect: connection refused" start-of-body=
Dec 08 17:44:50 crc kubenswrapper[5116]: I1208 17:44:50.982881 5116 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-747b44746d-4msk8" podUID="e71c8014-5266-4483-8037-e8d9e7995c1b" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.15:8080/\": dial tcp 10.217.0.15:8080: connect: connection refused"
Dec 08 17:44:51 crc kubenswrapper[5116]: I1208 17:44:51.396390 5116 generic.go:358] "Generic (PLEG): container finished" podID="5bc57600-20de-4fda-ba78-b05d745b08d6" containerID="bd5c78f9040b7dfef67c8489dee5cf56092c08b5e6aa74bad580a022e6420962" exitCode=0
Dec 08 17:44:51 crc kubenswrapper[5116]: I1208 17:44:51.396495 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-cdgp8" event={"ID":"5bc57600-20de-4fda-ba78-b05d745b08d6","Type":"ContainerDied","Data":"bd5c78f9040b7dfef67c8489dee5cf56092c08b5e6aa74bad580a022e6420962"}
Dec 08 17:44:51 crc kubenswrapper[5116]: I1208 17:44:51.406973 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-6f64f4648c-tg69j" event={"ID":"e4bba62a-9467-4ab0-a05c-84c516e7313a","Type":"ContainerStarted","Data":"5a19966bbd7c12775e914eedfc9c7e29fe065b2e1ea56dd7441c743968727ddb"}
Dec 08 17:44:51 crc kubenswrapper[5116]: I1208 17:44:51.408435 5116 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-controller-manager/controller-manager-6f64f4648c-tg69j"
Dec 08 17:44:51 crc kubenswrapper[5116]: I1208 17:44:51.430344 5116 patch_prober.go:28] interesting pod/controller-manager-6f64f4648c-tg69j container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.60:8443/healthz\": dial tcp 10.217.0.60:8443: connect: connection refused" start-of-body=
Dec 08 17:44:51 crc kubenswrapper[5116]: I1208 17:44:51.430417 5116 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-6f64f4648c-tg69j" podUID="e4bba62a-9467-4ab0-a05c-84c516e7313a" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.60:8443/healthz\": dial tcp 10.217.0.60:8443: connect: connection refused"
Dec 08 17:44:51 crc kubenswrapper[5116]: I1208 17:44:51.447474 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-786b5d84c9-x6jpq" event={"ID":"afdd6854-7534-47d5-86ac-08648aec89c3","Type":"ContainerStarted","Data":"99fc5b8b099b68f85f4d6c4241f2eecb5dfc45401178951390cd7537df63870e"}
Dec 08 17:44:51 crc kubenswrapper[5116]: I1208 17:44:51.502070 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7j2rd" event={"ID":"088af58f-5679-42e6-9595-945ee162f862","Type":"ContainerStarted","Data":"d1d9eaee655c81adb5ab40ccf7e4a7aaa9a5293ffd7d5cdfbdf7da45c738cdf1"}
Dec 08 17:44:52 crc kubenswrapper[5116]: I1208 17:44:52.127988 5116 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-7j2rd" podStartSLOduration=16.825015884 podStartE2EDuration="45.127968635s" podCreationTimestamp="2025-12-08 17:44:07 +0000 UTC" firstStartedPulling="2025-12-08 17:44:13.144414683 +0000 UTC m=+122.941537917" lastFinishedPulling="2025-12-08 17:44:41.447367434 +0000 UTC m=+151.244490668" observedRunningTime="2025-12-08 17:44:52.126200857 +0000 UTC m=+161.923324111" watchObservedRunningTime="2025-12-08 17:44:52.127968635 +0000 UTC m=+161.925091869"
Dec 08 17:44:52 crc kubenswrapper[5116]: I1208 17:44:52.131404 5116 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-6f64f4648c-tg69j" podStartSLOduration=15.131387166 podStartE2EDuration="15.131387166s" podCreationTimestamp="2025-12-08 17:44:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-08 17:44:51.48868871 +0000 UTC m=+161.285811974" watchObservedRunningTime="2025-12-08 17:44:52.131387166 +0000 UTC m=+161.928510400"
Dec 08 17:44:52 crc kubenswrapper[5116]: I1208 17:44:52.158167 5116 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-12-crc"
Dec 08 17:44:52 crc kubenswrapper[5116]: I1208 17:44:52.163934 5116 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-786b5d84c9-x6jpq" podStartSLOduration=15.163891194 podStartE2EDuration="15.163891194s" podCreationTimestamp="2025-12-08 17:44:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-08 17:44:52.161263623 +0000 UTC m=+161.958386877" watchObservedRunningTime="2025-12-08 17:44:52.163891194 +0000 UTC m=+161.961014428"
Dec 08 17:44:52 crc kubenswrapper[5116]: I1208 17:44:52.284277 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/536aba6a-5306-4082-a15f-1cedcb79625b-kube-api-access\") pod \"536aba6a-5306-4082-a15f-1cedcb79625b\" (UID: \"536aba6a-5306-4082-a15f-1cedcb79625b\") "
Dec 08 17:44:52 crc kubenswrapper[5116]: I1208 17:44:52.284379 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/536aba6a-5306-4082-a15f-1cedcb79625b-kubelet-dir\") pod \"536aba6a-5306-4082-a15f-1cedcb79625b\" (UID: \"536aba6a-5306-4082-a15f-1cedcb79625b\") "
Dec 08 17:44:52 crc kubenswrapper[5116]: I1208 17:44:52.284863 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/536aba6a-5306-4082-a15f-1cedcb79625b-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "536aba6a-5306-4082-a15f-1cedcb79625b" (UID: "536aba6a-5306-4082-a15f-1cedcb79625b"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Dec 08 17:44:52 crc kubenswrapper[5116]: I1208 17:44:52.301975 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/536aba6a-5306-4082-a15f-1cedcb79625b-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "536aba6a-5306-4082-a15f-1cedcb79625b" (UID: "536aba6a-5306-4082-a15f-1cedcb79625b"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Dec 08 17:44:52 crc kubenswrapper[5116]: I1208 17:44:52.414680 5116 reconciler_common.go:299] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/536aba6a-5306-4082-a15f-1cedcb79625b-kube-api-access\") on node \"crc\" DevicePath \"\""
Dec 08 17:44:52 crc kubenswrapper[5116]: I1208 17:44:52.416187 5116 reconciler_common.go:299] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/536aba6a-5306-4082-a15f-1cedcb79625b-kubelet-dir\") on node \"crc\" DevicePath \"\""
Dec 08 17:44:52 crc kubenswrapper[5116]: I1208 17:44:52.724064 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-12-crc" event={"ID":"536aba6a-5306-4082-a15f-1cedcb79625b","Type":"ContainerDied","Data":"15606af95be2bff65eecfaeb6c8ce4a348722f59b3ccb62aff6776dc2ba8590a"}
Dec 08 17:44:52 crc kubenswrapper[5116]: I1208 17:44:52.724162 5116 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="15606af95be2bff65eecfaeb6c8ce4a348722f59b3ccb62aff6776dc2ba8590a"
Dec 08 17:44:52 crc kubenswrapper[5116]: I1208 17:44:52.724214 5116 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-12-crc"
Dec 08 17:44:52 crc kubenswrapper[5116]: I1208 17:44:52.724767 5116 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-route-controller-manager/route-controller-manager-786b5d84c9-x6jpq"
Dec 08 17:44:52 crc kubenswrapper[5116]: I1208 17:44:52.734136 5116 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-786b5d84c9-x6jpq"
Dec 08 17:44:52 crc kubenswrapper[5116]: I1208 17:44:52.803382 5116 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-6f64f4648c-tg69j"
Dec 08 17:44:53 crc kubenswrapper[5116]: I1208 17:44:53.735757 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-cdgp8" event={"ID":"5bc57600-20de-4fda-ba78-b05d745b08d6","Type":"ContainerStarted","Data":"498a9d0ca704dfab3a9b9a84bb9ccac1c08d3e14b6308df6ac39be09a080047b"}
Dec 08 17:44:53 crc kubenswrapper[5116]: I1208 17:44:53.880840 5116 patch_prober.go:28] interesting pod/downloads-747b44746d-4msk8 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.15:8080/\": dial tcp 10.217.0.15:8080: connect: connection refused" start-of-body=
Dec 08 17:44:53 crc kubenswrapper[5116]: I1208 17:44:53.880964 5116 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-747b44746d-4msk8" podUID="e71c8014-5266-4483-8037-e8d9e7995c1b" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.15:8080/\": dial tcp 10.217.0.15:8080: connect: connection refused"
Dec 08 17:44:54 crc kubenswrapper[5116]: E1208 17:44:54.590904 5116 cadvisor_stats_provider.go:525] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd7968a24_caaf_4115_992d_3678c03e895a.slice/crio-conmon-8e54c54fae30e2e41368ec97f94608548b57555392e3ca341ed0d1620fa34a36.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd7968a24_caaf_4115_992d_3678c03e895a.slice/crio-8e54c54fae30e2e41368ec97f94608548b57555392e3ca341ed0d1620fa34a36.scope\": RecentStats: unable to find data in memory cache]"
Dec 08 17:44:54 crc kubenswrapper[5116]: I1208 17:44:54.767752 5116 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-cdgp8" podStartSLOduration=19.492643866 podStartE2EDuration="47.76773102s" podCreationTimestamp="2025-12-08 17:44:07 +0000 UTC" firstStartedPulling="2025-12-08 17:44:13.172367392 +0000 UTC m=+122.969490626" lastFinishedPulling="2025-12-08 17:44:41.447454546 +0000 UTC m=+151.244577780" observedRunningTime="2025-12-08 17:44:54.764358801 +0000 UTC m=+164.561482035" watchObservedRunningTime="2025-12-08 17:44:54.76773102 +0000 UTC m=+164.564854244"
Dec 08 17:44:55 crc kubenswrapper[5116]: I1208 17:44:55.014328 5116 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/installer-12-crc"]
Dec 08 17:44:55 crc kubenswrapper[5116]: I1208 17:44:55.015580 5116 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="536aba6a-5306-4082-a15f-1cedcb79625b" containerName="pruner"
Dec 08 17:44:55 crc kubenswrapper[5116]: I1208 17:44:55.015723 5116 state_mem.go:107] "Deleted CPUSet assignment" podUID="536aba6a-5306-4082-a15f-1cedcb79625b" containerName="pruner"
Dec 08 17:44:55 crc kubenswrapper[5116]: I1208 17:44:55.015940 5116 memory_manager.go:356] "RemoveStaleState removing state" podUID="536aba6a-5306-4082-a15f-1cedcb79625b" containerName="pruner"
Dec 08 17:44:56 crc kubenswrapper[5116]: I1208 17:44:56.330216 5116 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-12-crc"]
Dec 08 17:44:56 crc kubenswrapper[5116]: I1208 17:44:56.330631 5116 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-12-crc"
Dec 08 17:44:56 crc kubenswrapper[5116]: I1208 17:44:56.335127 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-apiserver\"/\"kube-root-ca.crt\""
Dec 08 17:44:56 crc kubenswrapper[5116]: I1208 17:44:56.336634 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-apiserver\"/\"installer-sa-dockercfg-bqqnb\""
Dec 08 17:44:56 crc kubenswrapper[5116]: I1208 17:44:56.463583 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/e28dd1b1-e2ad-4e02-a1cc-90f8e0f93a6c-kubelet-dir\") pod \"installer-12-crc\" (UID: \"e28dd1b1-e2ad-4e02-a1cc-90f8e0f93a6c\") " pod="openshift-kube-apiserver/installer-12-crc"
Dec 08 17:44:56 crc kubenswrapper[5116]: I1208 17:44:56.463950 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/e28dd1b1-e2ad-4e02-a1cc-90f8e0f93a6c-var-lock\") pod \"installer-12-crc\" (UID: \"e28dd1b1-e2ad-4e02-a1cc-90f8e0f93a6c\") " pod="openshift-kube-apiserver/installer-12-crc"
Dec 08 17:44:56 crc kubenswrapper[5116]: I1208 17:44:56.464077 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e28dd1b1-e2ad-4e02-a1cc-90f8e0f93a6c-kube-api-access\") pod \"installer-12-crc\" (UID: \"e28dd1b1-e2ad-4e02-a1cc-90f8e0f93a6c\") " pod="openshift-kube-apiserver/installer-12-crc"
Dec 08 17:44:56 crc kubenswrapper[5116]: I1208 17:44:56.565845 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName:
\"kubernetes.io/host-path/e28dd1b1-e2ad-4e02-a1cc-90f8e0f93a6c-var-lock\") pod \"installer-12-crc\" (UID: \"e28dd1b1-e2ad-4e02-a1cc-90f8e0f93a6c\") " pod="openshift-kube-apiserver/installer-12-crc" Dec 08 17:44:56 crc kubenswrapper[5116]: I1208 17:44:56.565997 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e28dd1b1-e2ad-4e02-a1cc-90f8e0f93a6c-kube-api-access\") pod \"installer-12-crc\" (UID: \"e28dd1b1-e2ad-4e02-a1cc-90f8e0f93a6c\") " pod="openshift-kube-apiserver/installer-12-crc" Dec 08 17:44:56 crc kubenswrapper[5116]: I1208 17:44:56.566025 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/e28dd1b1-e2ad-4e02-a1cc-90f8e0f93a6c-var-lock\") pod \"installer-12-crc\" (UID: \"e28dd1b1-e2ad-4e02-a1cc-90f8e0f93a6c\") " pod="openshift-kube-apiserver/installer-12-crc" Dec 08 17:44:56 crc kubenswrapper[5116]: I1208 17:44:56.566136 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/e28dd1b1-e2ad-4e02-a1cc-90f8e0f93a6c-kubelet-dir\") pod \"installer-12-crc\" (UID: \"e28dd1b1-e2ad-4e02-a1cc-90f8e0f93a6c\") " pod="openshift-kube-apiserver/installer-12-crc" Dec 08 17:44:56 crc kubenswrapper[5116]: I1208 17:44:56.566256 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/e28dd1b1-e2ad-4e02-a1cc-90f8e0f93a6c-kubelet-dir\") pod \"installer-12-crc\" (UID: \"e28dd1b1-e2ad-4e02-a1cc-90f8e0f93a6c\") " pod="openshift-kube-apiserver/installer-12-crc" Dec 08 17:44:56 crc kubenswrapper[5116]: I1208 17:44:56.589499 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e28dd1b1-e2ad-4e02-a1cc-90f8e0f93a6c-kube-api-access\") pod \"installer-12-crc\" (UID: 
\"e28dd1b1-e2ad-4e02-a1cc-90f8e0f93a6c\") " pod="openshift-kube-apiserver/installer-12-crc" Dec 08 17:44:56 crc kubenswrapper[5116]: I1208 17:44:56.596773 5116 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/community-operators-nc4fk" Dec 08 17:44:56 crc kubenswrapper[5116]: I1208 17:44:56.597687 5116 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-nc4fk" Dec 08 17:44:56 crc kubenswrapper[5116]: I1208 17:44:56.661821 5116 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-12-crc" Dec 08 17:44:57 crc kubenswrapper[5116]: I1208 17:44:57.105859 5116 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-nc4fk" Dec 08 17:44:57 crc kubenswrapper[5116]: I1208 17:44:57.109880 5116 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/community-operators-mvhzm" Dec 08 17:44:57 crc kubenswrapper[5116]: I1208 17:44:57.110909 5116 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-mvhzm" Dec 08 17:44:57 crc kubenswrapper[5116]: I1208 17:44:57.113511 5116 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-bnq4b" Dec 08 17:44:57 crc kubenswrapper[5116]: I1208 17:44:57.113563 5116 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-7rqg8" Dec 08 17:44:57 crc kubenswrapper[5116]: I1208 17:44:57.113578 5116 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/certified-operators-bnq4b" Dec 08 17:44:57 crc kubenswrapper[5116]: I1208 17:44:57.113589 5116 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" 
pod="openshift-marketplace/certified-operators-7rqg8" Dec 08 17:44:57 crc kubenswrapper[5116]: I1208 17:44:57.491144 5116 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-mvhzm" Dec 08 17:44:57 crc kubenswrapper[5116]: I1208 17:44:57.529475 5116 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-nc4fk" Dec 08 17:44:57 crc kubenswrapper[5116]: I1208 17:44:57.554184 5116 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-bnq4b" Dec 08 17:44:57 crc kubenswrapper[5116]: I1208 17:44:57.609666 5116 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-7rqg8" Dec 08 17:44:58 crc kubenswrapper[5116]: I1208 17:44:58.014683 5116 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-12-crc"] Dec 08 17:44:58 crc kubenswrapper[5116]: I1208 17:44:58.163150 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-12-crc" event={"ID":"e28dd1b1-e2ad-4e02-a1cc-90f8e0f93a6c","Type":"ContainerStarted","Data":"269b3e03945127752b3b619e6ec29d010e6a342e12529a49d1e4d161d5be0853"} Dec 08 17:44:58 crc kubenswrapper[5116]: I1208 17:44:58.280630 5116 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-4pqqp" Dec 08 17:44:58 crc kubenswrapper[5116]: I1208 17:44:58.295562 5116 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/redhat-operators-7j2rd" Dec 08 17:44:58 crc kubenswrapper[5116]: I1208 17:44:58.295879 5116 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-7j2rd" Dec 08 17:44:58 crc kubenswrapper[5116]: I1208 17:44:58.393157 5116 kubelet.go:2658] "SyncLoop (probe)" probe="startup" 
status="started" pod="openshift-marketplace/redhat-marketplace-f2qkz" Dec 08 17:44:58 crc kubenswrapper[5116]: I1208 17:44:58.497502 5116 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/redhat-operators-cdgp8" Dec 08 17:44:58 crc kubenswrapper[5116]: I1208 17:44:58.497575 5116 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-cdgp8" Dec 08 17:44:59 crc kubenswrapper[5116]: I1208 17:44:59.371214 5116 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-7j2rd" podUID="088af58f-5679-42e6-9595-945ee162f862" containerName="registry-server" probeResult="failure" output=< Dec 08 17:44:59 crc kubenswrapper[5116]: timeout: failed to connect service ":50051" within 1s Dec 08 17:44:59 crc kubenswrapper[5116]: > Dec 08 17:44:59 crc kubenswrapper[5116]: I1208 17:44:59.552530 5116 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-cdgp8" podUID="5bc57600-20de-4fda-ba78-b05d745b08d6" containerName="registry-server" probeResult="failure" output=< Dec 08 17:44:59 crc kubenswrapper[5116]: timeout: failed to connect service ":50051" within 1s Dec 08 17:44:59 crc kubenswrapper[5116]: > Dec 08 17:45:00 crc kubenswrapper[5116]: I1208 17:45:00.139832 5116 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29420265-j8t2x"] Dec 08 17:45:00 crc kubenswrapper[5116]: I1208 17:45:00.981019 5116 patch_prober.go:28] interesting pod/downloads-747b44746d-4msk8 container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.15:8080/\": dial tcp 10.217.0.15:8080: connect: connection refused" start-of-body= Dec 08 17:45:00 crc kubenswrapper[5116]: I1208 17:45:00.981109 5116 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-747b44746d-4msk8" 
podUID="e71c8014-5266-4483-8037-e8d9e7995c1b" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.15:8080/\": dial tcp 10.217.0.15:8080: connect: connection refused" Dec 08 17:45:01 crc kubenswrapper[5116]: I1208 17:45:01.244147 5116 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29420265-j8t2x"] Dec 08 17:45:01 crc kubenswrapper[5116]: I1208 17:45:01.244407 5116 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29420265-j8t2x" Dec 08 17:45:01 crc kubenswrapper[5116]: I1208 17:45:01.249511 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"collect-profiles-dockercfg-vfqp6\"" Dec 08 17:45:01 crc kubenswrapper[5116]: I1208 17:45:01.251517 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-operator-lifecycle-manager\"/\"collect-profiles-config\"" Dec 08 17:45:01 crc kubenswrapper[5116]: I1208 17:45:01.282638 5116 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-4pqqp" Dec 08 17:45:01 crc kubenswrapper[5116]: I1208 17:45:01.293655 5116 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-mvhzm" Dec 08 17:45:01 crc kubenswrapper[5116]: I1208 17:45:01.301533 5116 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-bnq4b" Dec 08 17:45:01 crc kubenswrapper[5116]: I1208 17:45:01.398466 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/7cbc3e18-9ce3-4639-926e-510888ffb3f5-secret-volume\") pod \"collect-profiles-29420265-j8t2x\" (UID: \"7cbc3e18-9ce3-4639-926e-510888ffb3f5\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29420265-j8t2x" Dec 08 17:45:01 crc kubenswrapper[5116]: I1208 17:45:01.398783 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7cbc3e18-9ce3-4639-926e-510888ffb3f5-config-volume\") pod \"collect-profiles-29420265-j8t2x\" (UID: \"7cbc3e18-9ce3-4639-926e-510888ffb3f5\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29420265-j8t2x" Dec 08 17:45:01 crc kubenswrapper[5116]: I1208 17:45:01.398981 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-995xm\" (UniqueName: \"kubernetes.io/projected/7cbc3e18-9ce3-4639-926e-510888ffb3f5-kube-api-access-995xm\") pod \"collect-profiles-29420265-j8t2x\" (UID: \"7cbc3e18-9ce3-4639-926e-510888ffb3f5\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29420265-j8t2x" Dec 08 17:45:01 crc kubenswrapper[5116]: I1208 17:45:01.434127 5116 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-7rqg8" Dec 08 17:45:01 crc kubenswrapper[5116]: I1208 17:45:01.499471 5116 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-f2qkz" Dec 08 17:45:01 crc kubenswrapper[5116]: I1208 17:45:01.503937 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/7cbc3e18-9ce3-4639-926e-510888ffb3f5-secret-volume\") pod \"collect-profiles-29420265-j8t2x\" (UID: \"7cbc3e18-9ce3-4639-926e-510888ffb3f5\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29420265-j8t2x" Dec 08 17:45:01 crc kubenswrapper[5116]: I1208 17:45:01.504088 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: 
\"kubernetes.io/configmap/7cbc3e18-9ce3-4639-926e-510888ffb3f5-config-volume\") pod \"collect-profiles-29420265-j8t2x\" (UID: \"7cbc3e18-9ce3-4639-926e-510888ffb3f5\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29420265-j8t2x" Dec 08 17:45:01 crc kubenswrapper[5116]: I1208 17:45:01.504216 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-995xm\" (UniqueName: \"kubernetes.io/projected/7cbc3e18-9ce3-4639-926e-510888ffb3f5-kube-api-access-995xm\") pod \"collect-profiles-29420265-j8t2x\" (UID: \"7cbc3e18-9ce3-4639-926e-510888ffb3f5\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29420265-j8t2x" Dec 08 17:45:01 crc kubenswrapper[5116]: I1208 17:45:01.506475 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7cbc3e18-9ce3-4639-926e-510888ffb3f5-config-volume\") pod \"collect-profiles-29420265-j8t2x\" (UID: \"7cbc3e18-9ce3-4639-926e-510888ffb3f5\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29420265-j8t2x" Dec 08 17:45:01 crc kubenswrapper[5116]: I1208 17:45:01.517495 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/7cbc3e18-9ce3-4639-926e-510888ffb3f5-secret-volume\") pod \"collect-profiles-29420265-j8t2x\" (UID: \"7cbc3e18-9ce3-4639-926e-510888ffb3f5\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29420265-j8t2x" Dec 08 17:45:01 crc kubenswrapper[5116]: I1208 17:45:01.536509 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-995xm\" (UniqueName: \"kubernetes.io/projected/7cbc3e18-9ce3-4639-926e-510888ffb3f5-kube-api-access-995xm\") pod \"collect-profiles-29420265-j8t2x\" (UID: \"7cbc3e18-9ce3-4639-926e-510888ffb3f5\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29420265-j8t2x" Dec 08 17:45:01 crc kubenswrapper[5116]: I1208 17:45:01.620609 5116 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29420265-j8t2x" Dec 08 17:45:02 crc kubenswrapper[5116]: I1208 17:45:02.023496 5116 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-7rqg8"] Dec 08 17:45:02 crc kubenswrapper[5116]: I1208 17:45:02.368007 5116 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29420265-j8t2x"] Dec 08 17:45:02 crc kubenswrapper[5116]: W1208 17:45:02.374760 5116 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7cbc3e18_9ce3_4639_926e_510888ffb3f5.slice/crio-480b32f04c38804b53124c9425ebf657b8ea706537be1a9216b11dcb7ec14d05 WatchSource:0}: Error finding container 480b32f04c38804b53124c9425ebf657b8ea706537be1a9216b11dcb7ec14d05: Status 404 returned error can't find the container with id 480b32f04c38804b53124c9425ebf657b8ea706537be1a9216b11dcb7ec14d05 Dec 08 17:45:02 crc kubenswrapper[5116]: I1208 17:45:02.827538 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29420265-j8t2x" event={"ID":"7cbc3e18-9ce3-4639-926e-510888ffb3f5","Type":"ContainerStarted","Data":"480b32f04c38804b53124c9425ebf657b8ea706537be1a9216b11dcb7ec14d05"} Dec 08 17:45:03 crc kubenswrapper[5116]: I1208 17:45:03.395307 5116 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-7rqg8" podUID="0687a333-2a42-4237-9673-e0210c45dc22" containerName="registry-server" containerID="cri-o://499e6ca5edb8c4d5175a2a9602e68db6a6ace9bd9a386e6ee1f6a1899481b83b" gracePeriod=2 Dec 08 17:45:03 crc kubenswrapper[5116]: I1208 17:45:03.822490 5116 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-mvhzm"] Dec 08 17:45:03 crc kubenswrapper[5116]: I1208 
17:45:03.856228 5116 generic.go:358] "Generic (PLEG): container finished" podID="0687a333-2a42-4237-9673-e0210c45dc22" containerID="499e6ca5edb8c4d5175a2a9602e68db6a6ace9bd9a386e6ee1f6a1899481b83b" exitCode=0 Dec 08 17:45:03 crc kubenswrapper[5116]: I1208 17:45:03.856513 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7rqg8" event={"ID":"0687a333-2a42-4237-9673-e0210c45dc22","Type":"ContainerDied","Data":"499e6ca5edb8c4d5175a2a9602e68db6a6ace9bd9a386e6ee1f6a1899481b83b"} Dec 08 17:45:03 crc kubenswrapper[5116]: I1208 17:45:03.862198 5116 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/community-operators-mvhzm" podUID="d7968a24-caaf-4115-992d-3678c03e895a" containerName="registry-server" containerID="cri-o://2a3175be59ba111f8928d63fbef1cbf6bf1d9204be367f6ae8c56d710b262a1a" gracePeriod=2 Dec 08 17:45:03 crc kubenswrapper[5116]: I1208 17:45:03.863197 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-12-crc" event={"ID":"e28dd1b1-e2ad-4e02-a1cc-90f8e0f93a6c","Type":"ContainerStarted","Data":"7772951a45e6ef04b031e33a70588eec0efca0038f1943e22bed0c8349f3de95"} Dec 08 17:45:03 crc kubenswrapper[5116]: I1208 17:45:03.880382 5116 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/installer-12-crc" podStartSLOduration=8.88036193 podStartE2EDuration="8.88036193s" podCreationTimestamp="2025-12-08 17:44:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-08 17:45:03.879072126 +0000 UTC m=+173.676195380" watchObservedRunningTime="2025-12-08 17:45:03.88036193 +0000 UTC m=+173.677485174" Dec 08 17:45:03 crc kubenswrapper[5116]: I1208 17:45:03.882894 5116 patch_prober.go:28] interesting pod/downloads-747b44746d-4msk8 container/download-server namespace/openshift-console: Readiness probe 
status=failure output="Get \"http://10.217.0.15:8080/\": dial tcp 10.217.0.15:8080: connect: connection refused" start-of-body= Dec 08 17:45:03 crc kubenswrapper[5116]: I1208 17:45:03.882968 5116 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-747b44746d-4msk8" podUID="e71c8014-5266-4483-8037-e8d9e7995c1b" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.15:8080/\": dial tcp 10.217.0.15:8080: connect: connection refused" Dec 08 17:45:04 crc kubenswrapper[5116]: I1208 17:45:04.412576 5116 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-f2qkz"] Dec 08 17:45:04 crc kubenswrapper[5116]: I1208 17:45:04.412858 5116 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-f2qkz" podUID="7d3964d8-860a-448a-ba5c-309e5343333e" containerName="registry-server" containerID="cri-o://208b1276c760bcf7429742d0f17cbd7ab119b7b789f9630446eca88427720a21" gracePeriod=2 Dec 08 17:45:04 crc kubenswrapper[5116]: I1208 17:45:04.622836 5116 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-7rqg8" Dec 08 17:45:04 crc kubenswrapper[5116]: I1208 17:45:04.752438 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0687a333-2a42-4237-9673-e0210c45dc22-catalog-content\") pod \"0687a333-2a42-4237-9673-e0210c45dc22\" (UID: \"0687a333-2a42-4237-9673-e0210c45dc22\") " Dec 08 17:45:04 crc kubenswrapper[5116]: I1208 17:45:04.752666 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wm24p\" (UniqueName: \"kubernetes.io/projected/0687a333-2a42-4237-9673-e0210c45dc22-kube-api-access-wm24p\") pod \"0687a333-2a42-4237-9673-e0210c45dc22\" (UID: \"0687a333-2a42-4237-9673-e0210c45dc22\") " Dec 08 17:45:04 crc kubenswrapper[5116]: I1208 17:45:04.752766 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0687a333-2a42-4237-9673-e0210c45dc22-utilities\") pod \"0687a333-2a42-4237-9673-e0210c45dc22\" (UID: \"0687a333-2a42-4237-9673-e0210c45dc22\") " Dec 08 17:45:04 crc kubenswrapper[5116]: I1208 17:45:04.755293 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0687a333-2a42-4237-9673-e0210c45dc22-utilities" (OuterVolumeSpecName: "utilities") pod "0687a333-2a42-4237-9673-e0210c45dc22" (UID: "0687a333-2a42-4237-9673-e0210c45dc22"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 08 17:45:04 crc kubenswrapper[5116]: I1208 17:45:04.768486 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0687a333-2a42-4237-9673-e0210c45dc22-kube-api-access-wm24p" (OuterVolumeSpecName: "kube-api-access-wm24p") pod "0687a333-2a42-4237-9673-e0210c45dc22" (UID: "0687a333-2a42-4237-9673-e0210c45dc22"). InnerVolumeSpecName "kube-api-access-wm24p". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 08 17:45:04 crc kubenswrapper[5116]: E1208 17:45:04.777522 5116 cadvisor_stats_provider.go:525] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd7968a24_caaf_4115_992d_3678c03e895a.slice/crio-conmon-8e54c54fae30e2e41368ec97f94608548b57555392e3ca341ed0d1620fa34a36.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd7968a24_caaf_4115_992d_3678c03e895a.slice/crio-8e54c54fae30e2e41368ec97f94608548b57555392e3ca341ed0d1620fa34a36.scope\": RecentStats: unable to find data in memory cache]" Dec 08 17:45:04 crc kubenswrapper[5116]: I1208 17:45:04.788368 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0687a333-2a42-4237-9673-e0210c45dc22-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "0687a333-2a42-4237-9673-e0210c45dc22" (UID: "0687a333-2a42-4237-9673-e0210c45dc22"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 08 17:45:04 crc kubenswrapper[5116]: I1208 17:45:04.854477 5116 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0687a333-2a42-4237-9673-e0210c45dc22-utilities\") on node \"crc\" DevicePath \"\"" Dec 08 17:45:04 crc kubenswrapper[5116]: I1208 17:45:04.854754 5116 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0687a333-2a42-4237-9673-e0210c45dc22-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 08 17:45:04 crc kubenswrapper[5116]: I1208 17:45:04.854781 5116 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-wm24p\" (UniqueName: \"kubernetes.io/projected/0687a333-2a42-4237-9673-e0210c45dc22-kube-api-access-wm24p\") on node \"crc\" DevicePath \"\"" Dec 08 17:45:04 crc kubenswrapper[5116]: I1208 17:45:04.872715 5116 generic.go:358] "Generic (PLEG): container finished" podID="7cbc3e18-9ce3-4639-926e-510888ffb3f5" containerID="2252499152e7500112a6b6bbfea80c53a47c1cc42796e91c8f2334e309fde433" exitCode=0 Dec 08 17:45:04 crc kubenswrapper[5116]: I1208 17:45:04.872839 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29420265-j8t2x" event={"ID":"7cbc3e18-9ce3-4639-926e-510888ffb3f5","Type":"ContainerDied","Data":"2252499152e7500112a6b6bbfea80c53a47c1cc42796e91c8f2334e309fde433"} Dec 08 17:45:04 crc kubenswrapper[5116]: I1208 17:45:04.877326 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7rqg8" event={"ID":"0687a333-2a42-4237-9673-e0210c45dc22","Type":"ContainerDied","Data":"26a25175f5f53eb64e5f23fece348d2f198029597137ba3a2c04f5277b8f9d78"} Dec 08 17:45:04 crc kubenswrapper[5116]: I1208 17:45:04.877602 5116 scope.go:117] "RemoveContainer" containerID="499e6ca5edb8c4d5175a2a9602e68db6a6ace9bd9a386e6ee1f6a1899481b83b" Dec 08 17:45:04 crc 
kubenswrapper[5116]: I1208 17:45:04.877394 5116 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7rqg8" Dec 08 17:45:04 crc kubenswrapper[5116]: I1208 17:45:04.881581 5116 generic.go:358] "Generic (PLEG): container finished" podID="d7968a24-caaf-4115-992d-3678c03e895a" containerID="2a3175be59ba111f8928d63fbef1cbf6bf1d9204be367f6ae8c56d710b262a1a" exitCode=0 Dec 08 17:45:04 crc kubenswrapper[5116]: I1208 17:45:04.882804 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-mvhzm" event={"ID":"d7968a24-caaf-4115-992d-3678c03e895a","Type":"ContainerDied","Data":"2a3175be59ba111f8928d63fbef1cbf6bf1d9204be367f6ae8c56d710b262a1a"} Dec 08 17:45:04 crc kubenswrapper[5116]: I1208 17:45:04.904358 5116 scope.go:117] "RemoveContainer" containerID="346b15c51c9a64df05e190a7a1842fdb528612e1a7c1f65abdcc8e4a14c2ca8a" Dec 08 17:45:04 crc kubenswrapper[5116]: I1208 17:45:04.921676 5116 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-7rqg8"] Dec 08 17:45:04 crc kubenswrapper[5116]: I1208 17:45:04.925236 5116 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-7rqg8"] Dec 08 17:45:04 crc kubenswrapper[5116]: I1208 17:45:04.975656 5116 scope.go:117] "RemoveContainer" containerID="6e6e759ebee28d8375df5c08ea9996eaeda9345621733989d197eda3ee7bb30a" Dec 08 17:45:05 crc kubenswrapper[5116]: I1208 17:45:05.198627 5116 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-mvhzm"
Dec 08 17:45:05 crc kubenswrapper[5116]: I1208 17:45:05.261688 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hzd4j\" (UniqueName: \"kubernetes.io/projected/d7968a24-caaf-4115-992d-3678c03e895a-kube-api-access-hzd4j\") pod \"d7968a24-caaf-4115-992d-3678c03e895a\" (UID: \"d7968a24-caaf-4115-992d-3678c03e895a\") "
Dec 08 17:45:05 crc kubenswrapper[5116]: I1208 17:45:05.261798 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d7968a24-caaf-4115-992d-3678c03e895a-utilities\") pod \"d7968a24-caaf-4115-992d-3678c03e895a\" (UID: \"d7968a24-caaf-4115-992d-3678c03e895a\") "
Dec 08 17:45:05 crc kubenswrapper[5116]: I1208 17:45:05.261999 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d7968a24-caaf-4115-992d-3678c03e895a-catalog-content\") pod \"d7968a24-caaf-4115-992d-3678c03e895a\" (UID: \"d7968a24-caaf-4115-992d-3678c03e895a\") "
Dec 08 17:45:05 crc kubenswrapper[5116]: I1208 17:45:05.263217 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d7968a24-caaf-4115-992d-3678c03e895a-utilities" (OuterVolumeSpecName: "utilities") pod "d7968a24-caaf-4115-992d-3678c03e895a" (UID: "d7968a24-caaf-4115-992d-3678c03e895a"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Dec 08 17:45:05 crc kubenswrapper[5116]: I1208 17:45:05.268509 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d7968a24-caaf-4115-992d-3678c03e895a-kube-api-access-hzd4j" (OuterVolumeSpecName: "kube-api-access-hzd4j") pod "d7968a24-caaf-4115-992d-3678c03e895a" (UID: "d7968a24-caaf-4115-992d-3678c03e895a"). InnerVolumeSpecName "kube-api-access-hzd4j". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Dec 08 17:45:05 crc kubenswrapper[5116]: I1208 17:45:05.323781 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d7968a24-caaf-4115-992d-3678c03e895a-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "d7968a24-caaf-4115-992d-3678c03e895a" (UID: "d7968a24-caaf-4115-992d-3678c03e895a"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Dec 08 17:45:05 crc kubenswrapper[5116]: I1208 17:45:05.364441 5116 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-hzd4j\" (UniqueName: \"kubernetes.io/projected/d7968a24-caaf-4115-992d-3678c03e895a-kube-api-access-hzd4j\") on node \"crc\" DevicePath \"\""
Dec 08 17:45:05 crc kubenswrapper[5116]: I1208 17:45:05.364540 5116 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d7968a24-caaf-4115-992d-3678c03e895a-utilities\") on node \"crc\" DevicePath \"\""
Dec 08 17:45:05 crc kubenswrapper[5116]: I1208 17:45:05.364553 5116 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d7968a24-caaf-4115-992d-3678c03e895a-catalog-content\") on node \"crc\" DevicePath \"\""
Dec 08 17:45:05 crc kubenswrapper[5116]: I1208 17:45:05.894342 5116 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-mvhzm"
Dec 08 17:45:05 crc kubenswrapper[5116]: I1208 17:45:05.894712 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-mvhzm" event={"ID":"d7968a24-caaf-4115-992d-3678c03e895a","Type":"ContainerDied","Data":"49dd49f4ee7254fbe0e0b7b71be5f9f3d4049b346e4f9cc1cb72ca25fae0d548"}
Dec 08 17:45:05 crc kubenswrapper[5116]: I1208 17:45:05.896427 5116 scope.go:117] "RemoveContainer" containerID="2a3175be59ba111f8928d63fbef1cbf6bf1d9204be367f6ae8c56d710b262a1a"
Dec 08 17:45:05 crc kubenswrapper[5116]: I1208 17:45:05.898687 5116 generic.go:358] "Generic (PLEG): container finished" podID="7d3964d8-860a-448a-ba5c-309e5343333e" containerID="208b1276c760bcf7429742d0f17cbd7ab119b7b789f9630446eca88427720a21" exitCode=0
Dec 08 17:45:05 crc kubenswrapper[5116]: I1208 17:45:05.898976 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-f2qkz" event={"ID":"7d3964d8-860a-448a-ba5c-309e5343333e","Type":"ContainerDied","Data":"208b1276c760bcf7429742d0f17cbd7ab119b7b789f9630446eca88427720a21"}
Dec 08 17:45:05 crc kubenswrapper[5116]: I1208 17:45:05.929327 5116 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-mvhzm"]
Dec 08 17:45:05 crc kubenswrapper[5116]: I1208 17:45:05.932762 5116 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-mvhzm"]
Dec 08 17:45:05 crc kubenswrapper[5116]: I1208 17:45:05.938260 5116 scope.go:117] "RemoveContainer" containerID="2b4ce0c83cac6feef34082b5ad23675bdbe8b223d309d648f87e98cc4eed460b"
Dec 08 17:45:05 crc kubenswrapper[5116]: I1208 17:45:05.967178 5116 scope.go:117] "RemoveContainer" containerID="8e54c54fae30e2e41368ec97f94608548b57555392e3ca341ed0d1620fa34a36"
Dec 08 17:45:06 crc kubenswrapper[5116]: I1208 17:45:06.080503 5116 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-f2qkz"
Dec 08 17:45:06 crc kubenswrapper[5116]: I1208 17:45:06.177720 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7d3964d8-860a-448a-ba5c-309e5343333e-catalog-content\") pod \"7d3964d8-860a-448a-ba5c-309e5343333e\" (UID: \"7d3964d8-860a-448a-ba5c-309e5343333e\") "
Dec 08 17:45:06 crc kubenswrapper[5116]: I1208 17:45:06.177793 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7d3964d8-860a-448a-ba5c-309e5343333e-utilities\") pod \"7d3964d8-860a-448a-ba5c-309e5343333e\" (UID: \"7d3964d8-860a-448a-ba5c-309e5343333e\") "
Dec 08 17:45:06 crc kubenswrapper[5116]: I1208 17:45:06.178046 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wbccj\" (UniqueName: \"kubernetes.io/projected/7d3964d8-860a-448a-ba5c-309e5343333e-kube-api-access-wbccj\") pod \"7d3964d8-860a-448a-ba5c-309e5343333e\" (UID: \"7d3964d8-860a-448a-ba5c-309e5343333e\") "
Dec 08 17:45:06 crc kubenswrapper[5116]: I1208 17:45:06.178992 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7d3964d8-860a-448a-ba5c-309e5343333e-utilities" (OuterVolumeSpecName: "utilities") pod "7d3964d8-860a-448a-ba5c-309e5343333e" (UID: "7d3964d8-860a-448a-ba5c-309e5343333e"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Dec 08 17:45:06 crc kubenswrapper[5116]: I1208 17:45:06.191960 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7d3964d8-860a-448a-ba5c-309e5343333e-kube-api-access-wbccj" (OuterVolumeSpecName: "kube-api-access-wbccj") pod "7d3964d8-860a-448a-ba5c-309e5343333e" (UID: "7d3964d8-860a-448a-ba5c-309e5343333e"). InnerVolumeSpecName "kube-api-access-wbccj". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Dec 08 17:45:06 crc kubenswrapper[5116]: I1208 17:45:06.198573 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7d3964d8-860a-448a-ba5c-309e5343333e-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "7d3964d8-860a-448a-ba5c-309e5343333e" (UID: "7d3964d8-860a-448a-ba5c-309e5343333e"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Dec 08 17:45:06 crc kubenswrapper[5116]: I1208 17:45:06.200980 5116 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29420265-j8t2x"
Dec 08 17:45:06 crc kubenswrapper[5116]: I1208 17:45:06.280303 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-995xm\" (UniqueName: \"kubernetes.io/projected/7cbc3e18-9ce3-4639-926e-510888ffb3f5-kube-api-access-995xm\") pod \"7cbc3e18-9ce3-4639-926e-510888ffb3f5\" (UID: \"7cbc3e18-9ce3-4639-926e-510888ffb3f5\") "
Dec 08 17:45:06 crc kubenswrapper[5116]: I1208 17:45:06.280464 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/7cbc3e18-9ce3-4639-926e-510888ffb3f5-secret-volume\") pod \"7cbc3e18-9ce3-4639-926e-510888ffb3f5\" (UID: \"7cbc3e18-9ce3-4639-926e-510888ffb3f5\") "
Dec 08 17:45:06 crc kubenswrapper[5116]: I1208 17:45:06.280609 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7cbc3e18-9ce3-4639-926e-510888ffb3f5-config-volume\") pod \"7cbc3e18-9ce3-4639-926e-510888ffb3f5\" (UID: \"7cbc3e18-9ce3-4639-926e-510888ffb3f5\") "
Dec 08 17:45:06 crc kubenswrapper[5116]: I1208 17:45:06.280991 5116 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7d3964d8-860a-448a-ba5c-309e5343333e-catalog-content\") on node \"crc\" DevicePath \"\""
Dec 08 17:45:06 crc kubenswrapper[5116]: I1208 17:45:06.281087 5116 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7d3964d8-860a-448a-ba5c-309e5343333e-utilities\") on node \"crc\" DevicePath \"\""
Dec 08 17:45:06 crc kubenswrapper[5116]: I1208 17:45:06.281110 5116 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-wbccj\" (UniqueName: \"kubernetes.io/projected/7d3964d8-860a-448a-ba5c-309e5343333e-kube-api-access-wbccj\") on node \"crc\" DevicePath \"\""
Dec 08 17:45:06 crc kubenswrapper[5116]: I1208 17:45:06.281893 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7cbc3e18-9ce3-4639-926e-510888ffb3f5-config-volume" (OuterVolumeSpecName: "config-volume") pod "7cbc3e18-9ce3-4639-926e-510888ffb3f5" (UID: "7cbc3e18-9ce3-4639-926e-510888ffb3f5"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Dec 08 17:45:06 crc kubenswrapper[5116]: I1208 17:45:06.286379 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7cbc3e18-9ce3-4639-926e-510888ffb3f5-kube-api-access-995xm" (OuterVolumeSpecName: "kube-api-access-995xm") pod "7cbc3e18-9ce3-4639-926e-510888ffb3f5" (UID: "7cbc3e18-9ce3-4639-926e-510888ffb3f5"). InnerVolumeSpecName "kube-api-access-995xm". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Dec 08 17:45:06 crc kubenswrapper[5116]: I1208 17:45:06.289094 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7cbc3e18-9ce3-4639-926e-510888ffb3f5-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "7cbc3e18-9ce3-4639-926e-510888ffb3f5" (UID: "7cbc3e18-9ce3-4639-926e-510888ffb3f5"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Dec 08 17:45:06 crc kubenswrapper[5116]: I1208 17:45:06.383153 5116 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-995xm\" (UniqueName: \"kubernetes.io/projected/7cbc3e18-9ce3-4639-926e-510888ffb3f5-kube-api-access-995xm\") on node \"crc\" DevicePath \"\""
Dec 08 17:45:06 crc kubenswrapper[5116]: I1208 17:45:06.383222 5116 reconciler_common.go:299] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/7cbc3e18-9ce3-4639-926e-510888ffb3f5-secret-volume\") on node \"crc\" DevicePath \"\""
Dec 08 17:45:06 crc kubenswrapper[5116]: I1208 17:45:06.383262 5116 reconciler_common.go:299] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7cbc3e18-9ce3-4639-926e-510888ffb3f5-config-volume\") on node \"crc\" DevicePath \"\""
Dec 08 17:45:06 crc kubenswrapper[5116]: I1208 17:45:06.699894 5116 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0687a333-2a42-4237-9673-e0210c45dc22" path="/var/lib/kubelet/pods/0687a333-2a42-4237-9673-e0210c45dc22/volumes"
Dec 08 17:45:06 crc kubenswrapper[5116]: I1208 17:45:06.702697 5116 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d7968a24-caaf-4115-992d-3678c03e895a" path="/var/lib/kubelet/pods/d7968a24-caaf-4115-992d-3678c03e895a/volumes"
Dec 08 17:45:07 crc kubenswrapper[5116]: I1208 17:45:07.033207 5116 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29420265-j8t2x"
Dec 08 17:45:07 crc kubenswrapper[5116]: I1208 17:45:07.033262 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29420265-j8t2x" event={"ID":"7cbc3e18-9ce3-4639-926e-510888ffb3f5","Type":"ContainerDied","Data":"480b32f04c38804b53124c9425ebf657b8ea706537be1a9216b11dcb7ec14d05"}
Dec 08 17:45:07 crc kubenswrapper[5116]: I1208 17:45:07.033314 5116 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="480b32f04c38804b53124c9425ebf657b8ea706537be1a9216b11dcb7ec14d05"
Dec 08 17:45:07 crc kubenswrapper[5116]: I1208 17:45:07.036142 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-f2qkz" event={"ID":"7d3964d8-860a-448a-ba5c-309e5343333e","Type":"ContainerDied","Data":"c917b2328811530035ef9e3feb724f33def27d3634d935d71ed9324dbbfa9046"}
Dec 08 17:45:07 crc kubenswrapper[5116]: I1208 17:45:07.036215 5116 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-f2qkz"
Dec 08 17:45:07 crc kubenswrapper[5116]: I1208 17:45:07.036258 5116 scope.go:117] "RemoveContainer" containerID="208b1276c760bcf7429742d0f17cbd7ab119b7b789f9630446eca88427720a21"
Dec 08 17:45:07 crc kubenswrapper[5116]: I1208 17:45:07.065572 5116 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-f2qkz"]
Dec 08 17:45:07 crc kubenswrapper[5116]: I1208 17:45:07.067627 5116 scope.go:117] "RemoveContainer" containerID="045cb1725f3317df0938aa5154239ba1a7815130327cbee579eed677125a8080"
Dec 08 17:45:07 crc kubenswrapper[5116]: I1208 17:45:07.068413 5116 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-f2qkz"]
Dec 08 17:45:07 crc kubenswrapper[5116]: I1208 17:45:07.085300 5116 scope.go:117] "RemoveContainer" containerID="70a963949e4a2721cd854d96d8a57b2a91decac06bd83549306df12fa3372a32"
Dec 08 17:45:08 crc kubenswrapper[5116]: I1208 17:45:08.348493 5116 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-7j2rd"
Dec 08 17:45:08 crc kubenswrapper[5116]: I1208 17:45:08.412279 5116 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-7j2rd"
Dec 08 17:45:08 crc kubenswrapper[5116]: I1208 17:45:08.774328 5116 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7d3964d8-860a-448a-ba5c-309e5343333e" path="/var/lib/kubelet/pods/7d3964d8-860a-448a-ba5c-309e5343333e/volumes"
Dec 08 17:45:08 crc kubenswrapper[5116]: I1208 17:45:08.779075 5116 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-cdgp8"
Dec 08 17:45:08 crc kubenswrapper[5116]: I1208 17:45:08.829287 5116 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-cdgp8"
Dec 08 17:45:10 crc kubenswrapper[5116]: I1208 17:45:10.813906 5116 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-cdgp8"]
Dec 08 17:45:10 crc kubenswrapper[5116]: I1208 17:45:10.814608 5116 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-cdgp8" podUID="5bc57600-20de-4fda-ba78-b05d745b08d6" containerName="registry-server" containerID="cri-o://498a9d0ca704dfab3a9b9a84bb9ccac1c08d3e14b6308df6ac39be09a080047b" gracePeriod=2
Dec 08 17:45:11 crc kubenswrapper[5116]: I1208 17:45:11.663898 5116 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-66458b6674-qnwj9"]
Dec 08 17:45:12 crc kubenswrapper[5116]: I1208 17:45:12.095818 5116 generic.go:358] "Generic (PLEG): container finished" podID="5bc57600-20de-4fda-ba78-b05d745b08d6" containerID="498a9d0ca704dfab3a9b9a84bb9ccac1c08d3e14b6308df6ac39be09a080047b" exitCode=0
Dec 08 17:45:12 crc kubenswrapper[5116]: I1208 17:45:12.095896 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-cdgp8" event={"ID":"5bc57600-20de-4fda-ba78-b05d745b08d6","Type":"ContainerDied","Data":"498a9d0ca704dfab3a9b9a84bb9ccac1c08d3e14b6308df6ac39be09a080047b"}
Dec 08 17:45:12 crc kubenswrapper[5116]: I1208 17:45:12.600301 5116 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-cdgp8"
Dec 08 17:45:12 crc kubenswrapper[5116]: I1208 17:45:12.754820 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5bc57600-20de-4fda-ba78-b05d745b08d6-utilities\") pod \"5bc57600-20de-4fda-ba78-b05d745b08d6\" (UID: \"5bc57600-20de-4fda-ba78-b05d745b08d6\") "
Dec 08 17:45:12 crc kubenswrapper[5116]: I1208 17:45:12.754861 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5bc57600-20de-4fda-ba78-b05d745b08d6-catalog-content\") pod \"5bc57600-20de-4fda-ba78-b05d745b08d6\" (UID: \"5bc57600-20de-4fda-ba78-b05d745b08d6\") "
Dec 08 17:45:12 crc kubenswrapper[5116]: I1208 17:45:12.755010 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hdgss\" (UniqueName: \"kubernetes.io/projected/5bc57600-20de-4fda-ba78-b05d745b08d6-kube-api-access-hdgss\") pod \"5bc57600-20de-4fda-ba78-b05d745b08d6\" (UID: \"5bc57600-20de-4fda-ba78-b05d745b08d6\") "
Dec 08 17:45:12 crc kubenswrapper[5116]: I1208 17:45:12.757192 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5bc57600-20de-4fda-ba78-b05d745b08d6-utilities" (OuterVolumeSpecName: "utilities") pod "5bc57600-20de-4fda-ba78-b05d745b08d6" (UID: "5bc57600-20de-4fda-ba78-b05d745b08d6"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Dec 08 17:45:12 crc kubenswrapper[5116]: I1208 17:45:12.767589 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5bc57600-20de-4fda-ba78-b05d745b08d6-kube-api-access-hdgss" (OuterVolumeSpecName: "kube-api-access-hdgss") pod "5bc57600-20de-4fda-ba78-b05d745b08d6" (UID: "5bc57600-20de-4fda-ba78-b05d745b08d6"). InnerVolumeSpecName "kube-api-access-hdgss". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Dec 08 17:45:12 crc kubenswrapper[5116]: I1208 17:45:12.844572 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5bc57600-20de-4fda-ba78-b05d745b08d6-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5bc57600-20de-4fda-ba78-b05d745b08d6" (UID: "5bc57600-20de-4fda-ba78-b05d745b08d6"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Dec 08 17:45:12 crc kubenswrapper[5116]: I1208 17:45:12.856317 5116 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-hdgss\" (UniqueName: \"kubernetes.io/projected/5bc57600-20de-4fda-ba78-b05d745b08d6-kube-api-access-hdgss\") on node \"crc\" DevicePath \"\""
Dec 08 17:45:12 crc kubenswrapper[5116]: I1208 17:45:12.856356 5116 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5bc57600-20de-4fda-ba78-b05d745b08d6-utilities\") on node \"crc\" DevicePath \"\""
Dec 08 17:45:12 crc kubenswrapper[5116]: I1208 17:45:12.856371 5116 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5bc57600-20de-4fda-ba78-b05d745b08d6-catalog-content\") on node \"crc\" DevicePath \"\""
Dec 08 17:45:13 crc kubenswrapper[5116]: I1208 17:45:13.104031 5116 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-cdgp8"
Dec 08 17:45:13 crc kubenswrapper[5116]: I1208 17:45:13.104023 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-cdgp8" event={"ID":"5bc57600-20de-4fda-ba78-b05d745b08d6","Type":"ContainerDied","Data":"f9f64e7e995c3bbb4a27a33cd239473cd09b9b79a4b47d1c2ea3f14ab93d1671"}
Dec 08 17:45:13 crc kubenswrapper[5116]: I1208 17:45:13.104288 5116 scope.go:117] "RemoveContainer" containerID="498a9d0ca704dfab3a9b9a84bb9ccac1c08d3e14b6308df6ac39be09a080047b"
Dec 08 17:45:13 crc kubenswrapper[5116]: I1208 17:45:13.193286 5116 scope.go:117] "RemoveContainer" containerID="bd5c78f9040b7dfef67c8489dee5cf56092c08b5e6aa74bad580a022e6420962"
Dec 08 17:45:13 crc kubenswrapper[5116]: I1208 17:45:13.210069 5116 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-cdgp8"]
Dec 08 17:45:13 crc kubenswrapper[5116]: I1208 17:45:13.216074 5116 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-cdgp8"]
Dec 08 17:45:13 crc kubenswrapper[5116]: I1208 17:45:13.231466 5116 scope.go:117] "RemoveContainer" containerID="894f71698a4e44732cabc082a8c1db687144093c55d14026e625e6006ab64b2e"
Dec 08 17:45:13 crc kubenswrapper[5116]: I1208 17:45:13.883950 5116 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/downloads-747b44746d-4msk8"
Dec 08 17:45:14 crc kubenswrapper[5116]: I1208 17:45:14.688661 5116 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5bc57600-20de-4fda-ba78-b05d745b08d6" path="/var/lib/kubelet/pods/5bc57600-20de-4fda-ba78-b05d745b08d6/volumes"
Dec 08 17:45:17 crc kubenswrapper[5116]: I1208 17:45:17.024802 5116 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-6f64f4648c-tg69j"]
Dec 08 17:45:17 crc kubenswrapper[5116]: I1208 17:45:17.026987 5116 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-6f64f4648c-tg69j" podUID="e4bba62a-9467-4ab0-a05c-84c516e7313a" containerName="controller-manager" containerID="cri-o://5a19966bbd7c12775e914eedfc9c7e29fe065b2e1ea56dd7441c743968727ddb" gracePeriod=30
Dec 08 17:45:17 crc kubenswrapper[5116]: I1208 17:45:17.063307 5116 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-786b5d84c9-x6jpq"]
Dec 08 17:45:17 crc kubenswrapper[5116]: I1208 17:45:17.063673 5116 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-786b5d84c9-x6jpq" podUID="afdd6854-7534-47d5-86ac-08648aec89c3" containerName="route-controller-manager" containerID="cri-o://99fc5b8b099b68f85f4d6c4241f2eecb5dfc45401178951390cd7537df63870e" gracePeriod=30
Dec 08 17:45:19 crc kubenswrapper[5116]: I1208 17:45:19.155357 5116 generic.go:358] "Generic (PLEG): container finished" podID="e4bba62a-9467-4ab0-a05c-84c516e7313a" containerID="5a19966bbd7c12775e914eedfc9c7e29fe065b2e1ea56dd7441c743968727ddb" exitCode=0
Dec 08 17:45:19 crc kubenswrapper[5116]: I1208 17:45:19.155415 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-6f64f4648c-tg69j" event={"ID":"e4bba62a-9467-4ab0-a05c-84c516e7313a","Type":"ContainerDied","Data":"5a19966bbd7c12775e914eedfc9c7e29fe065b2e1ea56dd7441c743968727ddb"}
Dec 08 17:45:19 crc kubenswrapper[5116]: I1208 17:45:19.158034 5116 generic.go:358] "Generic (PLEG): container finished" podID="afdd6854-7534-47d5-86ac-08648aec89c3" containerID="99fc5b8b099b68f85f4d6c4241f2eecb5dfc45401178951390cd7537df63870e" exitCode=0
Dec 08 17:45:19 crc kubenswrapper[5116]: I1208 17:45:19.158128 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-786b5d84c9-x6jpq" event={"ID":"afdd6854-7534-47d5-86ac-08648aec89c3","Type":"ContainerDied","Data":"99fc5b8b099b68f85f4d6c4241f2eecb5dfc45401178951390cd7537df63870e"}
Dec 08 17:45:19 crc kubenswrapper[5116]: I1208 17:45:19.367839 5116 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-786b5d84c9-x6jpq"
Dec 08 17:45:19 crc kubenswrapper[5116]: I1208 17:45:19.413936 5116 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5f94686f44-6f5wc"]
Dec 08 17:45:19 crc kubenswrapper[5116]: I1208 17:45:19.414735 5116 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="d7968a24-caaf-4115-992d-3678c03e895a" containerName="registry-server"
Dec 08 17:45:19 crc kubenswrapper[5116]: I1208 17:45:19.414760 5116 state_mem.go:107] "Deleted CPUSet assignment" podUID="d7968a24-caaf-4115-992d-3678c03e895a" containerName="registry-server"
Dec 08 17:45:19 crc kubenswrapper[5116]: I1208 17:45:19.414773 5116 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="5bc57600-20de-4fda-ba78-b05d745b08d6" containerName="extract-content"
Dec 08 17:45:19 crc kubenswrapper[5116]: I1208 17:45:19.414779 5116 state_mem.go:107] "Deleted CPUSet assignment" podUID="5bc57600-20de-4fda-ba78-b05d745b08d6" containerName="extract-content"
Dec 08 17:45:19 crc kubenswrapper[5116]: I1208 17:45:19.414788 5116 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="d7968a24-caaf-4115-992d-3678c03e895a" containerName="extract-content"
Dec 08 17:45:19 crc kubenswrapper[5116]: I1208 17:45:19.414795 5116 state_mem.go:107] "Deleted CPUSet assignment" podUID="d7968a24-caaf-4115-992d-3678c03e895a" containerName="extract-content"
Dec 08 17:45:19 crc kubenswrapper[5116]: I1208 17:45:19.414805 5116 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="5bc57600-20de-4fda-ba78-b05d745b08d6" containerName="registry-server"
Dec 08 17:45:19 crc kubenswrapper[5116]: I1208 17:45:19.414811 5116 state_mem.go:107] "Deleted CPUSet assignment" podUID="5bc57600-20de-4fda-ba78-b05d745b08d6" containerName="registry-server"
Dec 08 17:45:19 crc kubenswrapper[5116]: I1208 17:45:19.414819 5116 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="0687a333-2a42-4237-9673-e0210c45dc22" containerName="extract-utilities"
Dec 08 17:45:19 crc kubenswrapper[5116]: I1208 17:45:19.414824 5116 state_mem.go:107] "Deleted CPUSet assignment" podUID="0687a333-2a42-4237-9673-e0210c45dc22" containerName="extract-utilities"
Dec 08 17:45:19 crc kubenswrapper[5116]: I1208 17:45:19.414833 5116 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="7cbc3e18-9ce3-4639-926e-510888ffb3f5" containerName="collect-profiles"
Dec 08 17:45:19 crc kubenswrapper[5116]: I1208 17:45:19.414838 5116 state_mem.go:107] "Deleted CPUSet assignment" podUID="7cbc3e18-9ce3-4639-926e-510888ffb3f5" containerName="collect-profiles"
Dec 08 17:45:19 crc kubenswrapper[5116]: I1208 17:45:19.414847 5116 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="5bc57600-20de-4fda-ba78-b05d745b08d6" containerName="extract-utilities"
Dec 08 17:45:19 crc kubenswrapper[5116]: I1208 17:45:19.414854 5116 state_mem.go:107] "Deleted CPUSet assignment" podUID="5bc57600-20de-4fda-ba78-b05d745b08d6" containerName="extract-utilities"
Dec 08 17:45:19 crc kubenswrapper[5116]: I1208 17:45:19.414864 5116 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="7d3964d8-860a-448a-ba5c-309e5343333e" containerName="registry-server"
Dec 08 17:45:19 crc kubenswrapper[5116]: I1208 17:45:19.414875 5116 state_mem.go:107] "Deleted CPUSet assignment" podUID="7d3964d8-860a-448a-ba5c-309e5343333e" containerName="registry-server"
Dec 08 17:45:19 crc kubenswrapper[5116]: I1208 17:45:19.414889 5116 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="0687a333-2a42-4237-9673-e0210c45dc22" containerName="registry-server"
Dec 08 17:45:19 crc kubenswrapper[5116]: I1208 17:45:19.414897 5116 state_mem.go:107] "Deleted CPUSet assignment" podUID="0687a333-2a42-4237-9673-e0210c45dc22" containerName="registry-server"
Dec 08 17:45:19 crc kubenswrapper[5116]: I1208 17:45:19.414909 5116 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="7d3964d8-860a-448a-ba5c-309e5343333e" containerName="extract-content"
Dec 08 17:45:19 crc kubenswrapper[5116]: I1208 17:45:19.414915 5116 state_mem.go:107] "Deleted CPUSet assignment" podUID="7d3964d8-860a-448a-ba5c-309e5343333e" containerName="extract-content"
Dec 08 17:45:19 crc kubenswrapper[5116]: I1208 17:45:19.414923 5116 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="0687a333-2a42-4237-9673-e0210c45dc22" containerName="extract-content"
Dec 08 17:45:19 crc kubenswrapper[5116]: I1208 17:45:19.414930 5116 state_mem.go:107] "Deleted CPUSet assignment" podUID="0687a333-2a42-4237-9673-e0210c45dc22" containerName="extract-content"
Dec 08 17:45:19 crc kubenswrapper[5116]: I1208 17:45:19.414942 5116 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="7d3964d8-860a-448a-ba5c-309e5343333e" containerName="extract-utilities"
Dec 08 17:45:19 crc kubenswrapper[5116]: I1208 17:45:19.414949 5116 state_mem.go:107] "Deleted CPUSet assignment" podUID="7d3964d8-860a-448a-ba5c-309e5343333e" containerName="extract-utilities"
Dec 08 17:45:19 crc kubenswrapper[5116]: I1208 17:45:19.414961 5116 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="afdd6854-7534-47d5-86ac-08648aec89c3" containerName="route-controller-manager"
Dec 08 17:45:19 crc kubenswrapper[5116]: I1208 17:45:19.414968 5116 state_mem.go:107] "Deleted CPUSet assignment" podUID="afdd6854-7534-47d5-86ac-08648aec89c3" containerName="route-controller-manager"
Dec 08 17:45:19 crc kubenswrapper[5116]: I1208 17:45:19.414982 5116 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="d7968a24-caaf-4115-992d-3678c03e895a" containerName="extract-utilities"
Dec 08 17:45:19 crc kubenswrapper[5116]: I1208 17:45:19.414988 5116 state_mem.go:107] "Deleted CPUSet assignment" podUID="d7968a24-caaf-4115-992d-3678c03e895a" containerName="extract-utilities"
Dec 08 17:45:19 crc kubenswrapper[5116]: I1208 17:45:19.415078 5116 memory_manager.go:356] "RemoveStaleState removing state" podUID="afdd6854-7534-47d5-86ac-08648aec89c3" containerName="route-controller-manager"
Dec 08 17:45:19 crc kubenswrapper[5116]: I1208 17:45:19.415090 5116 memory_manager.go:356] "RemoveStaleState removing state" podUID="7cbc3e18-9ce3-4639-926e-510888ffb3f5" containerName="collect-profiles"
Dec 08 17:45:19 crc kubenswrapper[5116]: I1208 17:45:19.415102 5116 memory_manager.go:356] "RemoveStaleState removing state" podUID="0687a333-2a42-4237-9673-e0210c45dc22" containerName="registry-server"
Dec 08 17:45:19 crc kubenswrapper[5116]: I1208 17:45:19.415109 5116 memory_manager.go:356] "RemoveStaleState removing state" podUID="7d3964d8-860a-448a-ba5c-309e5343333e" containerName="registry-server"
Dec 08 17:45:19 crc kubenswrapper[5116]: I1208 17:45:19.415119 5116 memory_manager.go:356] "RemoveStaleState removing state" podUID="5bc57600-20de-4fda-ba78-b05d745b08d6" containerName="registry-server"
Dec 08 17:45:19 crc kubenswrapper[5116]: I1208 17:45:19.415127 5116 memory_manager.go:356] "RemoveStaleState removing state" podUID="d7968a24-caaf-4115-992d-3678c03e895a" containerName="registry-server"
Dec 08 17:45:19 crc kubenswrapper[5116]: I1208 17:45:19.474778 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/afdd6854-7534-47d5-86ac-08648aec89c3-tmp\") pod \"afdd6854-7534-47d5-86ac-08648aec89c3\" (UID: \"afdd6854-7534-47d5-86ac-08648aec89c3\") "
Dec 08 17:45:19 crc kubenswrapper[5116]: I1208 17:45:19.475008 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/afdd6854-7534-47d5-86ac-08648aec89c3-serving-cert\") pod \"afdd6854-7534-47d5-86ac-08648aec89c3\" (UID: \"afdd6854-7534-47d5-86ac-08648aec89c3\") "
Dec 08 17:45:19 crc kubenswrapper[5116]: I1208 17:45:19.475078 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/afdd6854-7534-47d5-86ac-08648aec89c3-client-ca\") pod \"afdd6854-7534-47d5-86ac-08648aec89c3\" (UID: \"afdd6854-7534-47d5-86ac-08648aec89c3\") "
Dec 08 17:45:19 crc kubenswrapper[5116]: I1208 17:45:19.475097 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/afdd6854-7534-47d5-86ac-08648aec89c3-config\") pod \"afdd6854-7534-47d5-86ac-08648aec89c3\" (UID: \"afdd6854-7534-47d5-86ac-08648aec89c3\") "
Dec 08 17:45:19 crc kubenswrapper[5116]: I1208 17:45:19.475942 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/afdd6854-7534-47d5-86ac-08648aec89c3-client-ca" (OuterVolumeSpecName: "client-ca") pod "afdd6854-7534-47d5-86ac-08648aec89c3" (UID: "afdd6854-7534-47d5-86ac-08648aec89c3"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Dec 08 17:45:19 crc kubenswrapper[5116]: I1208 17:45:19.476099 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/afdd6854-7534-47d5-86ac-08648aec89c3-config" (OuterVolumeSpecName: "config") pod "afdd6854-7534-47d5-86ac-08648aec89c3" (UID: "afdd6854-7534-47d5-86ac-08648aec89c3"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Dec 08 17:45:19 crc kubenswrapper[5116]: I1208 17:45:19.475204 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vw82g\" (UniqueName: \"kubernetes.io/projected/afdd6854-7534-47d5-86ac-08648aec89c3-kube-api-access-vw82g\") pod \"afdd6854-7534-47d5-86ac-08648aec89c3\" (UID: \"afdd6854-7534-47d5-86ac-08648aec89c3\") "
Dec 08 17:45:19 crc kubenswrapper[5116]: I1208 17:45:19.476398 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/afdd6854-7534-47d5-86ac-08648aec89c3-tmp" (OuterVolumeSpecName: "tmp") pod "afdd6854-7534-47d5-86ac-08648aec89c3" (UID: "afdd6854-7534-47d5-86ac-08648aec89c3"). InnerVolumeSpecName "tmp". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Dec 08 17:45:19 crc kubenswrapper[5116]: I1208 17:45:19.476771 5116 reconciler_common.go:299] "Volume detached for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/afdd6854-7534-47d5-86ac-08648aec89c3-tmp\") on node \"crc\" DevicePath \"\""
Dec 08 17:45:19 crc kubenswrapper[5116]: I1208 17:45:19.476815 5116 reconciler_common.go:299] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/afdd6854-7534-47d5-86ac-08648aec89c3-client-ca\") on node \"crc\" DevicePath \"\""
Dec 08 17:45:19 crc kubenswrapper[5116]: I1208 17:45:19.476825 5116 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/afdd6854-7534-47d5-86ac-08648aec89c3-config\") on node \"crc\" DevicePath \"\""
Dec 08 17:45:19 crc kubenswrapper[5116]: I1208 17:45:19.490403 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/afdd6854-7534-47d5-86ac-08648aec89c3-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "afdd6854-7534-47d5-86ac-08648aec89c3" (UID: "afdd6854-7534-47d5-86ac-08648aec89c3"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Dec 08 17:45:19 crc kubenswrapper[5116]: I1208 17:45:19.491235 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/afdd6854-7534-47d5-86ac-08648aec89c3-kube-api-access-vw82g" (OuterVolumeSpecName: "kube-api-access-vw82g") pod "afdd6854-7534-47d5-86ac-08648aec89c3" (UID: "afdd6854-7534-47d5-86ac-08648aec89c3"). InnerVolumeSpecName "kube-api-access-vw82g". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Dec 08 17:45:19 crc kubenswrapper[5116]: I1208 17:45:19.578121 5116 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-vw82g\" (UniqueName: \"kubernetes.io/projected/afdd6854-7534-47d5-86ac-08648aec89c3-kube-api-access-vw82g\") on node \"crc\" DevicePath \"\""
Dec 08 17:45:19 crc kubenswrapper[5116]: I1208 17:45:19.578155 5116 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/afdd6854-7534-47d5-86ac-08648aec89c3-serving-cert\") on node \"crc\" DevicePath \"\""
Dec 08 17:45:20 crc kubenswrapper[5116]: I1208 17:45:20.206237 5116 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5f94686f44-6f5wc"
Dec 08 17:45:20 crc kubenswrapper[5116]: I1208 17:45:20.207804 5116 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-786b5d84c9-x6jpq"
Dec 08 17:45:20 crc kubenswrapper[5116]: I1208 17:45:20.209082 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-786b5d84c9-x6jpq" event={"ID":"afdd6854-7534-47d5-86ac-08648aec89c3","Type":"ContainerDied","Data":"0426f33505a3c1e0f20ddc0f20fe6b55cd0990ed779fb6a074003ae4363ec36a"}
Dec 08 17:45:20 crc kubenswrapper[5116]: I1208 17:45:20.209131 5116 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5f94686f44-6f5wc"]
Dec 08 17:45:20 crc kubenswrapper[5116]: I1208 17:45:20.209160 5116 scope.go:117] "RemoveContainer" containerID="99fc5b8b099b68f85f4d6c4241f2eecb5dfc45401178951390cd7537df63870e"
Dec 08 17:45:20 crc kubenswrapper[5116]: I1208 17:45:20.260654 5116 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-786b5d84c9-x6jpq"]
Dec 08 17:45:20 crc kubenswrapper[5116]: I1208 17:45:20.263560 5116 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-786b5d84c9-x6jpq"]
Dec 08 17:45:20 crc kubenswrapper[5116]: I1208 17:45:20.287221 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dfd5a096-7e6b-4fdf-9f61-9b08ec34fe2d-config\") pod \"route-controller-manager-5f94686f44-6f5wc\" (UID: \"dfd5a096-7e6b-4fdf-9f61-9b08ec34fe2d\") " pod="openshift-route-controller-manager/route-controller-manager-5f94686f44-6f5wc"
Dec 08 17:45:20 crc kubenswrapper[5116]: I1208 17:45:20.287655 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/dfd5a096-7e6b-4fdf-9f61-9b08ec34fe2d-client-ca\") pod
\"route-controller-manager-5f94686f44-6f5wc\" (UID: \"dfd5a096-7e6b-4fdf-9f61-9b08ec34fe2d\") " pod="openshift-route-controller-manager/route-controller-manager-5f94686f44-6f5wc" Dec 08 17:45:20 crc kubenswrapper[5116]: I1208 17:45:20.288092 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/dfd5a096-7e6b-4fdf-9f61-9b08ec34fe2d-serving-cert\") pod \"route-controller-manager-5f94686f44-6f5wc\" (UID: \"dfd5a096-7e6b-4fdf-9f61-9b08ec34fe2d\") " pod="openshift-route-controller-manager/route-controller-manager-5f94686f44-6f5wc" Dec 08 17:45:20 crc kubenswrapper[5116]: I1208 17:45:20.288188 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kflsb\" (UniqueName: \"kubernetes.io/projected/dfd5a096-7e6b-4fdf-9f61-9b08ec34fe2d-kube-api-access-kflsb\") pod \"route-controller-manager-5f94686f44-6f5wc\" (UID: \"dfd5a096-7e6b-4fdf-9f61-9b08ec34fe2d\") " pod="openshift-route-controller-manager/route-controller-manager-5f94686f44-6f5wc" Dec 08 17:45:20 crc kubenswrapper[5116]: I1208 17:45:20.288435 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/dfd5a096-7e6b-4fdf-9f61-9b08ec34fe2d-tmp\") pod \"route-controller-manager-5f94686f44-6f5wc\" (UID: \"dfd5a096-7e6b-4fdf-9f61-9b08ec34fe2d\") " pod="openshift-route-controller-manager/route-controller-manager-5f94686f44-6f5wc" Dec 08 17:45:20 crc kubenswrapper[5116]: I1208 17:45:20.389784 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dfd5a096-7e6b-4fdf-9f61-9b08ec34fe2d-config\") pod \"route-controller-manager-5f94686f44-6f5wc\" (UID: \"dfd5a096-7e6b-4fdf-9f61-9b08ec34fe2d\") " pod="openshift-route-controller-manager/route-controller-manager-5f94686f44-6f5wc" Dec 08 17:45:20 crc 
kubenswrapper[5116]: I1208 17:45:20.389895 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/dfd5a096-7e6b-4fdf-9f61-9b08ec34fe2d-client-ca\") pod \"route-controller-manager-5f94686f44-6f5wc\" (UID: \"dfd5a096-7e6b-4fdf-9f61-9b08ec34fe2d\") " pod="openshift-route-controller-manager/route-controller-manager-5f94686f44-6f5wc" Dec 08 17:45:20 crc kubenswrapper[5116]: I1208 17:45:20.391304 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/dfd5a096-7e6b-4fdf-9f61-9b08ec34fe2d-client-ca\") pod \"route-controller-manager-5f94686f44-6f5wc\" (UID: \"dfd5a096-7e6b-4fdf-9f61-9b08ec34fe2d\") " pod="openshift-route-controller-manager/route-controller-manager-5f94686f44-6f5wc" Dec 08 17:45:20 crc kubenswrapper[5116]: I1208 17:45:20.391345 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dfd5a096-7e6b-4fdf-9f61-9b08ec34fe2d-config\") pod \"route-controller-manager-5f94686f44-6f5wc\" (UID: \"dfd5a096-7e6b-4fdf-9f61-9b08ec34fe2d\") " pod="openshift-route-controller-manager/route-controller-manager-5f94686f44-6f5wc" Dec 08 17:45:20 crc kubenswrapper[5116]: I1208 17:45:20.391447 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/dfd5a096-7e6b-4fdf-9f61-9b08ec34fe2d-serving-cert\") pod \"route-controller-manager-5f94686f44-6f5wc\" (UID: \"dfd5a096-7e6b-4fdf-9f61-9b08ec34fe2d\") " pod="openshift-route-controller-manager/route-controller-manager-5f94686f44-6f5wc" Dec 08 17:45:20 crc kubenswrapper[5116]: I1208 17:45:20.391595 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-kflsb\" (UniqueName: \"kubernetes.io/projected/dfd5a096-7e6b-4fdf-9f61-9b08ec34fe2d-kube-api-access-kflsb\") pod 
\"route-controller-manager-5f94686f44-6f5wc\" (UID: \"dfd5a096-7e6b-4fdf-9f61-9b08ec34fe2d\") " pod="openshift-route-controller-manager/route-controller-manager-5f94686f44-6f5wc" Dec 08 17:45:20 crc kubenswrapper[5116]: I1208 17:45:20.391797 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/dfd5a096-7e6b-4fdf-9f61-9b08ec34fe2d-tmp\") pod \"route-controller-manager-5f94686f44-6f5wc\" (UID: \"dfd5a096-7e6b-4fdf-9f61-9b08ec34fe2d\") " pod="openshift-route-controller-manager/route-controller-manager-5f94686f44-6f5wc" Dec 08 17:45:20 crc kubenswrapper[5116]: I1208 17:45:20.392550 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/dfd5a096-7e6b-4fdf-9f61-9b08ec34fe2d-tmp\") pod \"route-controller-manager-5f94686f44-6f5wc\" (UID: \"dfd5a096-7e6b-4fdf-9f61-9b08ec34fe2d\") " pod="openshift-route-controller-manager/route-controller-manager-5f94686f44-6f5wc" Dec 08 17:45:20 crc kubenswrapper[5116]: I1208 17:45:20.401679 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/dfd5a096-7e6b-4fdf-9f61-9b08ec34fe2d-serving-cert\") pod \"route-controller-manager-5f94686f44-6f5wc\" (UID: \"dfd5a096-7e6b-4fdf-9f61-9b08ec34fe2d\") " pod="openshift-route-controller-manager/route-controller-manager-5f94686f44-6f5wc" Dec 08 17:45:20 crc kubenswrapper[5116]: I1208 17:45:20.432354 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-kflsb\" (UniqueName: \"kubernetes.io/projected/dfd5a096-7e6b-4fdf-9f61-9b08ec34fe2d-kube-api-access-kflsb\") pod \"route-controller-manager-5f94686f44-6f5wc\" (UID: \"dfd5a096-7e6b-4fdf-9f61-9b08ec34fe2d\") " pod="openshift-route-controller-manager/route-controller-manager-5f94686f44-6f5wc" Dec 08 17:45:20 crc kubenswrapper[5116]: I1208 17:45:20.554118 5116 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5f94686f44-6f5wc" Dec 08 17:45:20 crc kubenswrapper[5116]: I1208 17:45:20.689963 5116 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="afdd6854-7534-47d5-86ac-08648aec89c3" path="/var/lib/kubelet/pods/afdd6854-7534-47d5-86ac-08648aec89c3/volumes" Dec 08 17:45:21 crc kubenswrapper[5116]: I1208 17:45:21.026027 5116 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6f64f4648c-tg69j" Dec 08 17:45:21 crc kubenswrapper[5116]: I1208 17:45:21.062437 5116 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-5d8bcbcb4d-xbd9k"] Dec 08 17:45:21 crc kubenswrapper[5116]: I1208 17:45:21.063007 5116 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="e4bba62a-9467-4ab0-a05c-84c516e7313a" containerName="controller-manager" Dec 08 17:45:21 crc kubenswrapper[5116]: I1208 17:45:21.063024 5116 state_mem.go:107] "Deleted CPUSet assignment" podUID="e4bba62a-9467-4ab0-a05c-84c516e7313a" containerName="controller-manager" Dec 08 17:45:21 crc kubenswrapper[5116]: I1208 17:45:21.063121 5116 memory_manager.go:356] "RemoveStaleState removing state" podUID="e4bba62a-9467-4ab0-a05c-84c516e7313a" containerName="controller-manager" Dec 08 17:45:21 crc kubenswrapper[5116]: I1208 17:45:21.063500 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/e4bba62a-9467-4ab0-a05c-84c516e7313a-client-ca\") pod \"e4bba62a-9467-4ab0-a05c-84c516e7313a\" (UID: \"e4bba62a-9467-4ab0-a05c-84c516e7313a\") " Dec 08 17:45:21 crc kubenswrapper[5116]: I1208 17:45:21.063600 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/e4bba62a-9467-4ab0-a05c-84c516e7313a-proxy-ca-bundles\") pod 
\"e4bba62a-9467-4ab0-a05c-84c516e7313a\" (UID: \"e4bba62a-9467-4ab0-a05c-84c516e7313a\") " Dec 08 17:45:21 crc kubenswrapper[5116]: I1208 17:45:21.063634 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-79g2g\" (UniqueName: \"kubernetes.io/projected/e4bba62a-9467-4ab0-a05c-84c516e7313a-kube-api-access-79g2g\") pod \"e4bba62a-9467-4ab0-a05c-84c516e7313a\" (UID: \"e4bba62a-9467-4ab0-a05c-84c516e7313a\") " Dec 08 17:45:21 crc kubenswrapper[5116]: I1208 17:45:21.063702 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e4bba62a-9467-4ab0-a05c-84c516e7313a-serving-cert\") pod \"e4bba62a-9467-4ab0-a05c-84c516e7313a\" (UID: \"e4bba62a-9467-4ab0-a05c-84c516e7313a\") " Dec 08 17:45:21 crc kubenswrapper[5116]: I1208 17:45:21.063739 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e4bba62a-9467-4ab0-a05c-84c516e7313a-config\") pod \"e4bba62a-9467-4ab0-a05c-84c516e7313a\" (UID: \"e4bba62a-9467-4ab0-a05c-84c516e7313a\") " Dec 08 17:45:21 crc kubenswrapper[5116]: I1208 17:45:21.063758 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/e4bba62a-9467-4ab0-a05c-84c516e7313a-tmp\") pod \"e4bba62a-9467-4ab0-a05c-84c516e7313a\" (UID: \"e4bba62a-9467-4ab0-a05c-84c516e7313a\") " Dec 08 17:45:21 crc kubenswrapper[5116]: I1208 17:45:21.064424 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e4bba62a-9467-4ab0-a05c-84c516e7313a-tmp" (OuterVolumeSpecName: "tmp") pod "e4bba62a-9467-4ab0-a05c-84c516e7313a" (UID: "e4bba62a-9467-4ab0-a05c-84c516e7313a"). InnerVolumeSpecName "tmp". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 08 17:45:21 crc kubenswrapper[5116]: I1208 17:45:21.065539 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e4bba62a-9467-4ab0-a05c-84c516e7313a-client-ca" (OuterVolumeSpecName: "client-ca") pod "e4bba62a-9467-4ab0-a05c-84c516e7313a" (UID: "e4bba62a-9467-4ab0-a05c-84c516e7313a"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 08 17:45:21 crc kubenswrapper[5116]: I1208 17:45:21.066044 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e4bba62a-9467-4ab0-a05c-84c516e7313a-config" (OuterVolumeSpecName: "config") pod "e4bba62a-9467-4ab0-a05c-84c516e7313a" (UID: "e4bba62a-9467-4ab0-a05c-84c516e7313a"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 08 17:45:21 crc kubenswrapper[5116]: I1208 17:45:21.068731 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e4bba62a-9467-4ab0-a05c-84c516e7313a-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "e4bba62a-9467-4ab0-a05c-84c516e7313a" (UID: "e4bba62a-9467-4ab0-a05c-84c516e7313a"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 08 17:45:21 crc kubenswrapper[5116]: I1208 17:45:21.072571 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e4bba62a-9467-4ab0-a05c-84c516e7313a-kube-api-access-79g2g" (OuterVolumeSpecName: "kube-api-access-79g2g") pod "e4bba62a-9467-4ab0-a05c-84c516e7313a" (UID: "e4bba62a-9467-4ab0-a05c-84c516e7313a"). InnerVolumeSpecName "kube-api-access-79g2g". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 08 17:45:21 crc kubenswrapper[5116]: I1208 17:45:21.075825 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e4bba62a-9467-4ab0-a05c-84c516e7313a-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "e4bba62a-9467-4ab0-a05c-84c516e7313a" (UID: "e4bba62a-9467-4ab0-a05c-84c516e7313a"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 08 17:45:21 crc kubenswrapper[5116]: W1208 17:45:21.115037 5116 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poddfd5a096_7e6b_4fdf_9f61_9b08ec34fe2d.slice/crio-91f4f13714a8b624a3cf348b63a1d8371fb691834777be0ac89fc91146080ee4 WatchSource:0}: Error finding container 91f4f13714a8b624a3cf348b63a1d8371fb691834777be0ac89fc91146080ee4: Status 404 returned error can't find the container with id 91f4f13714a8b624a3cf348b63a1d8371fb691834777be0ac89fc91146080ee4 Dec 08 17:45:21 crc kubenswrapper[5116]: I1208 17:45:21.165239 5116 reconciler_common.go:299] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/e4bba62a-9467-4ab0-a05c-84c516e7313a-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Dec 08 17:45:21 crc kubenswrapper[5116]: I1208 17:45:21.165300 5116 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-79g2g\" (UniqueName: \"kubernetes.io/projected/e4bba62a-9467-4ab0-a05c-84c516e7313a-kube-api-access-79g2g\") on node \"crc\" DevicePath \"\"" Dec 08 17:45:21 crc kubenswrapper[5116]: I1208 17:45:21.165319 5116 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e4bba62a-9467-4ab0-a05c-84c516e7313a-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 08 17:45:21 crc kubenswrapper[5116]: I1208 17:45:21.165331 5116 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/e4bba62a-9467-4ab0-a05c-84c516e7313a-config\") on node \"crc\" DevicePath \"\"" Dec 08 17:45:21 crc kubenswrapper[5116]: I1208 17:45:21.165344 5116 reconciler_common.go:299] "Volume detached for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/e4bba62a-9467-4ab0-a05c-84c516e7313a-tmp\") on node \"crc\" DevicePath \"\"" Dec 08 17:45:21 crc kubenswrapper[5116]: I1208 17:45:21.165354 5116 reconciler_common.go:299] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/e4bba62a-9467-4ab0-a05c-84c516e7313a-client-ca\") on node \"crc\" DevicePath \"\"" Dec 08 17:45:21 crc kubenswrapper[5116]: I1208 17:45:21.597308 5116 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-5d8bcbcb4d-xbd9k"] Dec 08 17:45:21 crc kubenswrapper[5116]: I1208 17:45:21.597366 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-5f94686f44-6f5wc" event={"ID":"dfd5a096-7e6b-4fdf-9f61-9b08ec34fe2d","Type":"ContainerStarted","Data":"91f4f13714a8b624a3cf348b63a1d8371fb691834777be0ac89fc91146080ee4"} Dec 08 17:45:21 crc kubenswrapper[5116]: I1208 17:45:21.597401 5116 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5f94686f44-6f5wc"] Dec 08 17:45:21 crc kubenswrapper[5116]: I1208 17:45:21.597427 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-6f64f4648c-tg69j" event={"ID":"e4bba62a-9467-4ab0-a05c-84c516e7313a","Type":"ContainerDied","Data":"d5d178a70dd9d8460c9055516c41fc81f9fe039d804c471bd0a04d022bd1a0b9"} Dec 08 17:45:21 crc kubenswrapper[5116]: I1208 17:45:21.597475 5116 scope.go:117] "RemoveContainer" containerID="5a19966bbd7c12775e914eedfc9c7e29fe065b2e1ea56dd7441c743968727ddb" Dec 08 17:45:21 crc kubenswrapper[5116]: I1208 17:45:21.597544 5116 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-6f64f4648c-tg69j" Dec 08 17:45:21 crc kubenswrapper[5116]: I1208 17:45:21.597931 5116 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-5d8bcbcb4d-xbd9k" Dec 08 17:45:21 crc kubenswrapper[5116]: I1208 17:45:21.642969 5116 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-6f64f4648c-tg69j"] Dec 08 17:45:21 crc kubenswrapper[5116]: I1208 17:45:21.647106 5116 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-6f64f4648c-tg69j"] Dec 08 17:45:21 crc kubenswrapper[5116]: I1208 17:45:21.672134 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/2cc72e05-3c7d-423c-8000-2afea70742d6-proxy-ca-bundles\") pod \"controller-manager-5d8bcbcb4d-xbd9k\" (UID: \"2cc72e05-3c7d-423c-8000-2afea70742d6\") " pod="openshift-controller-manager/controller-manager-5d8bcbcb4d-xbd9k" Dec 08 17:45:21 crc kubenswrapper[5116]: I1208 17:45:21.672252 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2cc72e05-3c7d-423c-8000-2afea70742d6-config\") pod \"controller-manager-5d8bcbcb4d-xbd9k\" (UID: \"2cc72e05-3c7d-423c-8000-2afea70742d6\") " pod="openshift-controller-manager/controller-manager-5d8bcbcb4d-xbd9k" Dec 08 17:45:21 crc kubenswrapper[5116]: I1208 17:45:21.672363 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kwd95\" (UniqueName: \"kubernetes.io/projected/2cc72e05-3c7d-423c-8000-2afea70742d6-kube-api-access-kwd95\") pod \"controller-manager-5d8bcbcb4d-xbd9k\" (UID: \"2cc72e05-3c7d-423c-8000-2afea70742d6\") " 
pod="openshift-controller-manager/controller-manager-5d8bcbcb4d-xbd9k" Dec 08 17:45:21 crc kubenswrapper[5116]: I1208 17:45:21.672439 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2cc72e05-3c7d-423c-8000-2afea70742d6-serving-cert\") pod \"controller-manager-5d8bcbcb4d-xbd9k\" (UID: \"2cc72e05-3c7d-423c-8000-2afea70742d6\") " pod="openshift-controller-manager/controller-manager-5d8bcbcb4d-xbd9k" Dec 08 17:45:21 crc kubenswrapper[5116]: I1208 17:45:21.672640 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/2cc72e05-3c7d-423c-8000-2afea70742d6-tmp\") pod \"controller-manager-5d8bcbcb4d-xbd9k\" (UID: \"2cc72e05-3c7d-423c-8000-2afea70742d6\") " pod="openshift-controller-manager/controller-manager-5d8bcbcb4d-xbd9k" Dec 08 17:45:21 crc kubenswrapper[5116]: I1208 17:45:21.672711 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/2cc72e05-3c7d-423c-8000-2afea70742d6-client-ca\") pod \"controller-manager-5d8bcbcb4d-xbd9k\" (UID: \"2cc72e05-3c7d-423c-8000-2afea70742d6\") " pod="openshift-controller-manager/controller-manager-5d8bcbcb4d-xbd9k" Dec 08 17:45:21 crc kubenswrapper[5116]: I1208 17:45:21.774267 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/2cc72e05-3c7d-423c-8000-2afea70742d6-tmp\") pod \"controller-manager-5d8bcbcb4d-xbd9k\" (UID: \"2cc72e05-3c7d-423c-8000-2afea70742d6\") " pod="openshift-controller-manager/controller-manager-5d8bcbcb4d-xbd9k" Dec 08 17:45:21 crc kubenswrapper[5116]: I1208 17:45:21.774330 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: 
\"kubernetes.io/configmap/2cc72e05-3c7d-423c-8000-2afea70742d6-client-ca\") pod \"controller-manager-5d8bcbcb4d-xbd9k\" (UID: \"2cc72e05-3c7d-423c-8000-2afea70742d6\") " pod="openshift-controller-manager/controller-manager-5d8bcbcb4d-xbd9k" Dec 08 17:45:21 crc kubenswrapper[5116]: I1208 17:45:21.774389 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/2cc72e05-3c7d-423c-8000-2afea70742d6-proxy-ca-bundles\") pod \"controller-manager-5d8bcbcb4d-xbd9k\" (UID: \"2cc72e05-3c7d-423c-8000-2afea70742d6\") " pod="openshift-controller-manager/controller-manager-5d8bcbcb4d-xbd9k" Dec 08 17:45:21 crc kubenswrapper[5116]: I1208 17:45:21.774419 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2cc72e05-3c7d-423c-8000-2afea70742d6-config\") pod \"controller-manager-5d8bcbcb4d-xbd9k\" (UID: \"2cc72e05-3c7d-423c-8000-2afea70742d6\") " pod="openshift-controller-manager/controller-manager-5d8bcbcb4d-xbd9k" Dec 08 17:45:21 crc kubenswrapper[5116]: I1208 17:45:21.774477 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-kwd95\" (UniqueName: \"kubernetes.io/projected/2cc72e05-3c7d-423c-8000-2afea70742d6-kube-api-access-kwd95\") pod \"controller-manager-5d8bcbcb4d-xbd9k\" (UID: \"2cc72e05-3c7d-423c-8000-2afea70742d6\") " pod="openshift-controller-manager/controller-manager-5d8bcbcb4d-xbd9k" Dec 08 17:45:21 crc kubenswrapper[5116]: I1208 17:45:21.774511 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2cc72e05-3c7d-423c-8000-2afea70742d6-serving-cert\") pod \"controller-manager-5d8bcbcb4d-xbd9k\" (UID: \"2cc72e05-3c7d-423c-8000-2afea70742d6\") " pod="openshift-controller-manager/controller-manager-5d8bcbcb4d-xbd9k" Dec 08 17:45:21 crc kubenswrapper[5116]: I1208 17:45:21.774934 5116 
operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/2cc72e05-3c7d-423c-8000-2afea70742d6-tmp\") pod \"controller-manager-5d8bcbcb4d-xbd9k\" (UID: \"2cc72e05-3c7d-423c-8000-2afea70742d6\") " pod="openshift-controller-manager/controller-manager-5d8bcbcb4d-xbd9k" Dec 08 17:45:21 crc kubenswrapper[5116]: I1208 17:45:21.776372 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/2cc72e05-3c7d-423c-8000-2afea70742d6-client-ca\") pod \"controller-manager-5d8bcbcb4d-xbd9k\" (UID: \"2cc72e05-3c7d-423c-8000-2afea70742d6\") " pod="openshift-controller-manager/controller-manager-5d8bcbcb4d-xbd9k" Dec 08 17:45:21 crc kubenswrapper[5116]: I1208 17:45:21.776450 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/2cc72e05-3c7d-423c-8000-2afea70742d6-proxy-ca-bundles\") pod \"controller-manager-5d8bcbcb4d-xbd9k\" (UID: \"2cc72e05-3c7d-423c-8000-2afea70742d6\") " pod="openshift-controller-manager/controller-manager-5d8bcbcb4d-xbd9k" Dec 08 17:45:21 crc kubenswrapper[5116]: I1208 17:45:21.776927 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2cc72e05-3c7d-423c-8000-2afea70742d6-config\") pod \"controller-manager-5d8bcbcb4d-xbd9k\" (UID: \"2cc72e05-3c7d-423c-8000-2afea70742d6\") " pod="openshift-controller-manager/controller-manager-5d8bcbcb4d-xbd9k" Dec 08 17:45:21 crc kubenswrapper[5116]: I1208 17:45:21.783633 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2cc72e05-3c7d-423c-8000-2afea70742d6-serving-cert\") pod \"controller-manager-5d8bcbcb4d-xbd9k\" (UID: \"2cc72e05-3c7d-423c-8000-2afea70742d6\") " pod="openshift-controller-manager/controller-manager-5d8bcbcb4d-xbd9k" Dec 08 17:45:21 crc kubenswrapper[5116]: I1208 
17:45:21.799200 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-kwd95\" (UniqueName: \"kubernetes.io/projected/2cc72e05-3c7d-423c-8000-2afea70742d6-kube-api-access-kwd95\") pod \"controller-manager-5d8bcbcb4d-xbd9k\" (UID: \"2cc72e05-3c7d-423c-8000-2afea70742d6\") " pod="openshift-controller-manager/controller-manager-5d8bcbcb4d-xbd9k" Dec 08 17:45:21 crc kubenswrapper[5116]: I1208 17:45:21.930226 5116 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-5d8bcbcb4d-xbd9k" Dec 08 17:45:22 crc kubenswrapper[5116]: I1208 17:45:22.172154 5116 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-5d8bcbcb4d-xbd9k"] Dec 08 17:45:22 crc kubenswrapper[5116]: I1208 17:45:22.186927 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-5f94686f44-6f5wc" event={"ID":"dfd5a096-7e6b-4fdf-9f61-9b08ec34fe2d","Type":"ContainerStarted","Data":"3333c75c96c7e3ed099b59abf0d6eb658e9fc0fe88e20459aca7c994c184c1a7"} Dec 08 17:45:22 crc kubenswrapper[5116]: I1208 17:45:22.187304 5116 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-route-controller-manager/route-controller-manager-5f94686f44-6f5wc" Dec 08 17:45:22 crc kubenswrapper[5116]: I1208 17:45:22.204896 5116 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-5f94686f44-6f5wc" podStartSLOduration=5.204879843 podStartE2EDuration="5.204879843s" podCreationTimestamp="2025-12-08 17:45:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-08 17:45:22.201335438 +0000 UTC m=+191.998458672" watchObservedRunningTime="2025-12-08 17:45:22.204879843 +0000 UTC m=+192.002003077" Dec 08 17:45:22 crc 
kubenswrapper[5116]: I1208 17:45:22.623341 5116 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-5f94686f44-6f5wc" Dec 08 17:45:22 crc kubenswrapper[5116]: I1208 17:45:22.688433 5116 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e4bba62a-9467-4ab0-a05c-84c516e7313a" path="/var/lib/kubelet/pods/e4bba62a-9467-4ab0-a05c-84c516e7313a/volumes" Dec 08 17:45:23 crc kubenswrapper[5116]: I1208 17:45:23.196049 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5d8bcbcb4d-xbd9k" event={"ID":"2cc72e05-3c7d-423c-8000-2afea70742d6","Type":"ContainerStarted","Data":"e8960cf1c2418e9ebea18984abc84e29334ffd337a033aa320422e9935c19702"} Dec 08 17:45:23 crc kubenswrapper[5116]: I1208 17:45:23.196401 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5d8bcbcb4d-xbd9k" event={"ID":"2cc72e05-3c7d-423c-8000-2afea70742d6","Type":"ContainerStarted","Data":"471cc5047a7725cd04fcf7fe4eec450f38ecbb4d5f9970efebee7f6d9797fe3e"} Dec 08 17:45:23 crc kubenswrapper[5116]: I1208 17:45:23.196499 5116 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-controller-manager/controller-manager-5d8bcbcb4d-xbd9k" Dec 08 17:45:23 crc kubenswrapper[5116]: I1208 17:45:23.235181 5116 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-5d8bcbcb4d-xbd9k" podStartSLOduration=6.235158141 podStartE2EDuration="6.235158141s" podCreationTimestamp="2025-12-08 17:45:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-08 17:45:23.211213522 +0000 UTC m=+193.008336776" watchObservedRunningTime="2025-12-08 17:45:23.235158141 +0000 UTC m=+193.032281375" Dec 08 17:45:23 crc kubenswrapper[5116]: I1208 
17:45:23.611820 5116 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-5d8bcbcb4d-xbd9k" Dec 08 17:45:24 crc kubenswrapper[5116]: I1208 17:45:24.186193 5116 ???:1] "http: TLS handshake error from 192.168.126.11:58786: no serving certificate available for the kubelet" Dec 08 17:45:36 crc kubenswrapper[5116]: I1208 17:45:36.702487 5116 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-authentication/oauth-openshift-66458b6674-qnwj9" podUID="b8dbe374-c944-4fd4-bb80-8dc26c3e5d24" containerName="oauth-openshift" containerID="cri-o://504ef900859cd7f7455e38a4b20a8ba233582c5aae6e377748fe22c4bfb29e90" gracePeriod=15 Dec 08 17:45:37 crc kubenswrapper[5116]: I1208 17:45:37.024046 5116 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-5d8bcbcb4d-xbd9k"] Dec 08 17:45:37 crc kubenswrapper[5116]: I1208 17:45:37.024964 5116 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-5d8bcbcb4d-xbd9k" podUID="2cc72e05-3c7d-423c-8000-2afea70742d6" containerName="controller-manager" containerID="cri-o://e8960cf1c2418e9ebea18984abc84e29334ffd337a033aa320422e9935c19702" gracePeriod=30 Dec 08 17:45:37 crc kubenswrapper[5116]: I1208 17:45:37.051445 5116 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5f94686f44-6f5wc"] Dec 08 17:45:37 crc kubenswrapper[5116]: I1208 17:45:37.051828 5116 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-5f94686f44-6f5wc" podUID="dfd5a096-7e6b-4fdf-9f61-9b08ec34fe2d" containerName="route-controller-manager" containerID="cri-o://3333c75c96c7e3ed099b59abf0d6eb658e9fc0fe88e20459aca7c994c184c1a7" gracePeriod=30 Dec 08 17:45:37 crc kubenswrapper[5116]: I1208 17:45:37.379318 5116 util.go:48] "No 
ready sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-66458b6674-qnwj9" Dec 08 17:45:37 crc kubenswrapper[5116]: I1208 17:45:37.496148 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/b8dbe374-c944-4fd4-bb80-8dc26c3e5d24-audit-policies\") pod \"b8dbe374-c944-4fd4-bb80-8dc26c3e5d24\" (UID: \"b8dbe374-c944-4fd4-bb80-8dc26c3e5d24\") " Dec 08 17:45:37 crc kubenswrapper[5116]: I1208 17:45:37.496280 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/b8dbe374-c944-4fd4-bb80-8dc26c3e5d24-v4-0-config-system-service-ca\") pod \"b8dbe374-c944-4fd4-bb80-8dc26c3e5d24\" (UID: \"b8dbe374-c944-4fd4-bb80-8dc26c3e5d24\") " Dec 08 17:45:37 crc kubenswrapper[5116]: I1208 17:45:37.496363 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/b8dbe374-c944-4fd4-bb80-8dc26c3e5d24-audit-dir\") pod \"b8dbe374-c944-4fd4-bb80-8dc26c3e5d24\" (UID: \"b8dbe374-c944-4fd4-bb80-8dc26c3e5d24\") " Dec 08 17:45:37 crc kubenswrapper[5116]: I1208 17:45:37.496404 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/b8dbe374-c944-4fd4-bb80-8dc26c3e5d24-v4-0-config-system-serving-cert\") pod \"b8dbe374-c944-4fd4-bb80-8dc26c3e5d24\" (UID: \"b8dbe374-c944-4fd4-bb80-8dc26c3e5d24\") " Dec 08 17:45:37 crc kubenswrapper[5116]: I1208 17:45:37.496460 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9gqcp\" (UniqueName: \"kubernetes.io/projected/b8dbe374-c944-4fd4-bb80-8dc26c3e5d24-kube-api-access-9gqcp\") pod \"b8dbe374-c944-4fd4-bb80-8dc26c3e5d24\" (UID: \"b8dbe374-c944-4fd4-bb80-8dc26c3e5d24\") " Dec 08 17:45:37 crc 
kubenswrapper[5116]: I1208 17:45:37.496505 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/b8dbe374-c944-4fd4-bb80-8dc26c3e5d24-v4-0-config-system-ocp-branding-template\") pod \"b8dbe374-c944-4fd4-bb80-8dc26c3e5d24\" (UID: \"b8dbe374-c944-4fd4-bb80-8dc26c3e5d24\") " Dec 08 17:45:37 crc kubenswrapper[5116]: I1208 17:45:37.496581 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/b8dbe374-c944-4fd4-bb80-8dc26c3e5d24-v4-0-config-user-idp-0-file-data\") pod \"b8dbe374-c944-4fd4-bb80-8dc26c3e5d24\" (UID: \"b8dbe374-c944-4fd4-bb80-8dc26c3e5d24\") " Dec 08 17:45:37 crc kubenswrapper[5116]: I1208 17:45:37.496640 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/b8dbe374-c944-4fd4-bb80-8dc26c3e5d24-v4-0-config-user-template-login\") pod \"b8dbe374-c944-4fd4-bb80-8dc26c3e5d24\" (UID: \"b8dbe374-c944-4fd4-bb80-8dc26c3e5d24\") " Dec 08 17:45:37 crc kubenswrapper[5116]: I1208 17:45:37.496721 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/b8dbe374-c944-4fd4-bb80-8dc26c3e5d24-v4-0-config-user-template-provider-selection\") pod \"b8dbe374-c944-4fd4-bb80-8dc26c3e5d24\" (UID: \"b8dbe374-c944-4fd4-bb80-8dc26c3e5d24\") " Dec 08 17:45:37 crc kubenswrapper[5116]: I1208 17:45:37.496804 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b8dbe374-c944-4fd4-bb80-8dc26c3e5d24-v4-0-config-system-trusted-ca-bundle\") pod \"b8dbe374-c944-4fd4-bb80-8dc26c3e5d24\" (UID: \"b8dbe374-c944-4fd4-bb80-8dc26c3e5d24\") " Dec 08 17:45:37 crc 
kubenswrapper[5116]: I1208 17:45:37.496851 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/b8dbe374-c944-4fd4-bb80-8dc26c3e5d24-v4-0-config-system-cliconfig\") pod \"b8dbe374-c944-4fd4-bb80-8dc26c3e5d24\" (UID: \"b8dbe374-c944-4fd4-bb80-8dc26c3e5d24\") " Dec 08 17:45:37 crc kubenswrapper[5116]: I1208 17:45:37.496880 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/b8dbe374-c944-4fd4-bb80-8dc26c3e5d24-v4-0-config-user-template-error\") pod \"b8dbe374-c944-4fd4-bb80-8dc26c3e5d24\" (UID: \"b8dbe374-c944-4fd4-bb80-8dc26c3e5d24\") " Dec 08 17:45:37 crc kubenswrapper[5116]: I1208 17:45:37.496991 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/b8dbe374-c944-4fd4-bb80-8dc26c3e5d24-v4-0-config-system-session\") pod \"b8dbe374-c944-4fd4-bb80-8dc26c3e5d24\" (UID: \"b8dbe374-c944-4fd4-bb80-8dc26c3e5d24\") " Dec 08 17:45:37 crc kubenswrapper[5116]: I1208 17:45:37.497022 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/b8dbe374-c944-4fd4-bb80-8dc26c3e5d24-v4-0-config-system-router-certs\") pod \"b8dbe374-c944-4fd4-bb80-8dc26c3e5d24\" (UID: \"b8dbe374-c944-4fd4-bb80-8dc26c3e5d24\") " Dec 08 17:45:37 crc kubenswrapper[5116]: I1208 17:45:37.498332 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b8dbe374-c944-4fd4-bb80-8dc26c3e5d24-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "b8dbe374-c944-4fd4-bb80-8dc26c3e5d24" (UID: "b8dbe374-c944-4fd4-bb80-8dc26c3e5d24"). InnerVolumeSpecName "v4-0-config-system-service-ca". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 08 17:45:37 crc kubenswrapper[5116]: I1208 17:45:37.499364 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b8dbe374-c944-4fd4-bb80-8dc26c3e5d24-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "b8dbe374-c944-4fd4-bb80-8dc26c3e5d24" (UID: "b8dbe374-c944-4fd4-bb80-8dc26c3e5d24"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 08 17:45:37 crc kubenswrapper[5116]: I1208 17:45:37.500062 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b8dbe374-c944-4fd4-bb80-8dc26c3e5d24-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "b8dbe374-c944-4fd4-bb80-8dc26c3e5d24" (UID: "b8dbe374-c944-4fd4-bb80-8dc26c3e5d24"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Dec 08 17:45:37 crc kubenswrapper[5116]: I1208 17:45:37.501048 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b8dbe374-c944-4fd4-bb80-8dc26c3e5d24-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "b8dbe374-c944-4fd4-bb80-8dc26c3e5d24" (UID: "b8dbe374-c944-4fd4-bb80-8dc26c3e5d24"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 08 17:45:37 crc kubenswrapper[5116]: I1208 17:45:37.501603 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b8dbe374-c944-4fd4-bb80-8dc26c3e5d24-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "b8dbe374-c944-4fd4-bb80-8dc26c3e5d24" (UID: "b8dbe374-c944-4fd4-bb80-8dc26c3e5d24"). InnerVolumeSpecName "audit-policies". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 08 17:45:37 crc kubenswrapper[5116]: I1208 17:45:37.507823 5116 generic.go:358] "Generic (PLEG): container finished" podID="dfd5a096-7e6b-4fdf-9f61-9b08ec34fe2d" containerID="3333c75c96c7e3ed099b59abf0d6eb658e9fc0fe88e20459aca7c994c184c1a7" exitCode=0 Dec 08 17:45:37 crc kubenswrapper[5116]: I1208 17:45:37.508172 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-5f94686f44-6f5wc" event={"ID":"dfd5a096-7e6b-4fdf-9f61-9b08ec34fe2d","Type":"ContainerDied","Data":"3333c75c96c7e3ed099b59abf0d6eb658e9fc0fe88e20459aca7c994c184c1a7"} Dec 08 17:45:37 crc kubenswrapper[5116]: I1208 17:45:37.508645 5116 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-79f9d4b4b6-299vf"] Dec 08 17:45:37 crc kubenswrapper[5116]: I1208 17:45:37.509625 5116 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="b8dbe374-c944-4fd4-bb80-8dc26c3e5d24" containerName="oauth-openshift" Dec 08 17:45:37 crc kubenswrapper[5116]: I1208 17:45:37.514385 5116 state_mem.go:107] "Deleted CPUSet assignment" podUID="b8dbe374-c944-4fd4-bb80-8dc26c3e5d24" containerName="oauth-openshift" Dec 08 17:45:37 crc kubenswrapper[5116]: I1208 17:45:37.513328 5116 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-66458b6674-qnwj9" Dec 08 17:45:37 crc kubenswrapper[5116]: I1208 17:45:37.513184 5116 generic.go:358] "Generic (PLEG): container finished" podID="b8dbe374-c944-4fd4-bb80-8dc26c3e5d24" containerID="504ef900859cd7f7455e38a4b20a8ba233582c5aae6e377748fe22c4bfb29e90" exitCode=0 Dec 08 17:45:37 crc kubenswrapper[5116]: I1208 17:45:37.515175 5116 memory_manager.go:356] "RemoveStaleState removing state" podUID="b8dbe374-c944-4fd4-bb80-8dc26c3e5d24" containerName="oauth-openshift" Dec 08 17:45:37 crc kubenswrapper[5116]: I1208 17:45:37.516799 5116 generic.go:358] "Generic (PLEG): container finished" podID="2cc72e05-3c7d-423c-8000-2afea70742d6" containerID="e8960cf1c2418e9ebea18984abc84e29334ffd337a033aa320422e9935c19702" exitCode=0 Dec 08 17:45:37 crc kubenswrapper[5116]: I1208 17:45:37.522728 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-66458b6674-qnwj9" event={"ID":"b8dbe374-c944-4fd4-bb80-8dc26c3e5d24","Type":"ContainerDied","Data":"504ef900859cd7f7455e38a4b20a8ba233582c5aae6e377748fe22c4bfb29e90"} Dec 08 17:45:37 crc kubenswrapper[5116]: I1208 17:45:37.522917 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-66458b6674-qnwj9" event={"ID":"b8dbe374-c944-4fd4-bb80-8dc26c3e5d24","Type":"ContainerDied","Data":"38830c802e5168f342dea310cd88c936084147f269257e40cd97bc3be840aa81"} Dec 08 17:45:37 crc kubenswrapper[5116]: I1208 17:45:37.523010 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5d8bcbcb4d-xbd9k" event={"ID":"2cc72e05-3c7d-423c-8000-2afea70742d6","Type":"ContainerDied","Data":"e8960cf1c2418e9ebea18984abc84e29334ffd337a033aa320422e9935c19702"} Dec 08 17:45:37 crc kubenswrapper[5116]: I1208 17:45:37.523129 5116 scope.go:117] "RemoveContainer" containerID="504ef900859cd7f7455e38a4b20a8ba233582c5aae6e377748fe22c4bfb29e90" Dec 08 17:45:37 
crc kubenswrapper[5116]: I1208 17:45:37.523507 5116 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-79f9d4b4b6-299vf" Dec 08 17:45:37 crc kubenswrapper[5116]: I1208 17:45:37.524330 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b8dbe374-c944-4fd4-bb80-8dc26c3e5d24-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "b8dbe374-c944-4fd4-bb80-8dc26c3e5d24" (UID: "b8dbe374-c944-4fd4-bb80-8dc26c3e5d24"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 08 17:45:37 crc kubenswrapper[5116]: I1208 17:45:37.534864 5116 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-79f9d4b4b6-299vf"] Dec 08 17:45:37 crc kubenswrapper[5116]: I1208 17:45:37.539962 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b8dbe374-c944-4fd4-bb80-8dc26c3e5d24-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "b8dbe374-c944-4fd4-bb80-8dc26c3e5d24" (UID: "b8dbe374-c944-4fd4-bb80-8dc26c3e5d24"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 08 17:45:37 crc kubenswrapper[5116]: I1208 17:45:37.543834 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b8dbe374-c944-4fd4-bb80-8dc26c3e5d24-kube-api-access-9gqcp" (OuterVolumeSpecName: "kube-api-access-9gqcp") pod "b8dbe374-c944-4fd4-bb80-8dc26c3e5d24" (UID: "b8dbe374-c944-4fd4-bb80-8dc26c3e5d24"). InnerVolumeSpecName "kube-api-access-9gqcp". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 08 17:45:37 crc kubenswrapper[5116]: I1208 17:45:37.545725 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b8dbe374-c944-4fd4-bb80-8dc26c3e5d24-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "b8dbe374-c944-4fd4-bb80-8dc26c3e5d24" (UID: "b8dbe374-c944-4fd4-bb80-8dc26c3e5d24"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 08 17:45:37 crc kubenswrapper[5116]: I1208 17:45:37.547016 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b8dbe374-c944-4fd4-bb80-8dc26c3e5d24-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "b8dbe374-c944-4fd4-bb80-8dc26c3e5d24" (UID: "b8dbe374-c944-4fd4-bb80-8dc26c3e5d24"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 08 17:45:37 crc kubenswrapper[5116]: I1208 17:45:37.547947 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b8dbe374-c944-4fd4-bb80-8dc26c3e5d24-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "b8dbe374-c944-4fd4-bb80-8dc26c3e5d24" (UID: "b8dbe374-c944-4fd4-bb80-8dc26c3e5d24"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 08 17:45:37 crc kubenswrapper[5116]: I1208 17:45:37.549841 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b8dbe374-c944-4fd4-bb80-8dc26c3e5d24-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "b8dbe374-c944-4fd4-bb80-8dc26c3e5d24" (UID: "b8dbe374-c944-4fd4-bb80-8dc26c3e5d24"). InnerVolumeSpecName "v4-0-config-system-router-certs". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 08 17:45:37 crc kubenswrapper[5116]: I1208 17:45:37.550425 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b8dbe374-c944-4fd4-bb80-8dc26c3e5d24-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "b8dbe374-c944-4fd4-bb80-8dc26c3e5d24" (UID: "b8dbe374-c944-4fd4-bb80-8dc26c3e5d24"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 08 17:45:37 crc kubenswrapper[5116]: I1208 17:45:37.554584 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b8dbe374-c944-4fd4-bb80-8dc26c3e5d24-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "b8dbe374-c944-4fd4-bb80-8dc26c3e5d24" (UID: "b8dbe374-c944-4fd4-bb80-8dc26c3e5d24"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 08 17:45:37 crc kubenswrapper[5116]: I1208 17:45:37.557497 5116 scope.go:117] "RemoveContainer" containerID="504ef900859cd7f7455e38a4b20a8ba233582c5aae6e377748fe22c4bfb29e90" Dec 08 17:45:37 crc kubenswrapper[5116]: E1208 17:45:37.558056 5116 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"504ef900859cd7f7455e38a4b20a8ba233582c5aae6e377748fe22c4bfb29e90\": container with ID starting with 504ef900859cd7f7455e38a4b20a8ba233582c5aae6e377748fe22c4bfb29e90 not found: ID does not exist" containerID="504ef900859cd7f7455e38a4b20a8ba233582c5aae6e377748fe22c4bfb29e90" Dec 08 17:45:37 crc kubenswrapper[5116]: I1208 17:45:37.558089 5116 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"504ef900859cd7f7455e38a4b20a8ba233582c5aae6e377748fe22c4bfb29e90"} err="failed to get container status 
\"504ef900859cd7f7455e38a4b20a8ba233582c5aae6e377748fe22c4bfb29e90\": rpc error: code = NotFound desc = could not find container \"504ef900859cd7f7455e38a4b20a8ba233582c5aae6e377748fe22c4bfb29e90\": container with ID starting with 504ef900859cd7f7455e38a4b20a8ba233582c5aae6e377748fe22c4bfb29e90 not found: ID does not exist" Dec 08 17:45:37 crc kubenswrapper[5116]: I1208 17:45:37.599774 5116 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-9gqcp\" (UniqueName: \"kubernetes.io/projected/b8dbe374-c944-4fd4-bb80-8dc26c3e5d24-kube-api-access-9gqcp\") on node \"crc\" DevicePath \"\"" Dec 08 17:45:37 crc kubenswrapper[5116]: I1208 17:45:37.599837 5116 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/b8dbe374-c944-4fd4-bb80-8dc26c3e5d24-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Dec 08 17:45:37 crc kubenswrapper[5116]: I1208 17:45:37.599869 5116 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/b8dbe374-c944-4fd4-bb80-8dc26c3e5d24-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Dec 08 17:45:37 crc kubenswrapper[5116]: I1208 17:45:37.599888 5116 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/b8dbe374-c944-4fd4-bb80-8dc26c3e5d24-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Dec 08 17:45:37 crc kubenswrapper[5116]: I1208 17:45:37.599910 5116 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/b8dbe374-c944-4fd4-bb80-8dc26c3e5d24-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Dec 08 17:45:37 crc kubenswrapper[5116]: I1208 17:45:37.599935 5116 reconciler_common.go:299] "Volume detached for volume 
\"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b8dbe374-c944-4fd4-bb80-8dc26c3e5d24-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 08 17:45:37 crc kubenswrapper[5116]: I1208 17:45:37.599950 5116 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/b8dbe374-c944-4fd4-bb80-8dc26c3e5d24-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Dec 08 17:45:37 crc kubenswrapper[5116]: I1208 17:45:37.599970 5116 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/b8dbe374-c944-4fd4-bb80-8dc26c3e5d24-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Dec 08 17:45:37 crc kubenswrapper[5116]: I1208 17:45:37.599983 5116 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/b8dbe374-c944-4fd4-bb80-8dc26c3e5d24-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Dec 08 17:45:37 crc kubenswrapper[5116]: I1208 17:45:37.599999 5116 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/b8dbe374-c944-4fd4-bb80-8dc26c3e5d24-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Dec 08 17:45:37 crc kubenswrapper[5116]: I1208 17:45:37.600018 5116 reconciler_common.go:299] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/b8dbe374-c944-4fd4-bb80-8dc26c3e5d24-audit-policies\") on node \"crc\" DevicePath \"\"" Dec 08 17:45:37 crc kubenswrapper[5116]: I1208 17:45:37.600030 5116 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/b8dbe374-c944-4fd4-bb80-8dc26c3e5d24-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Dec 08 17:45:37 crc kubenswrapper[5116]: I1208 
17:45:37.600043 5116 reconciler_common.go:299] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/b8dbe374-c944-4fd4-bb80-8dc26c3e5d24-audit-dir\") on node \"crc\" DevicePath \"\"" Dec 08 17:45:37 crc kubenswrapper[5116]: I1208 17:45:37.600054 5116 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/b8dbe374-c944-4fd4-bb80-8dc26c3e5d24-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 08 17:45:37 crc kubenswrapper[5116]: I1208 17:45:37.637937 5116 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5f94686f44-6f5wc" Dec 08 17:45:37 crc kubenswrapper[5116]: I1208 17:45:37.679690 5116 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-58f48579d7-c8857"] Dec 08 17:45:37 crc kubenswrapper[5116]: I1208 17:45:37.680282 5116 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="dfd5a096-7e6b-4fdf-9f61-9b08ec34fe2d" containerName="route-controller-manager" Dec 08 17:45:37 crc kubenswrapper[5116]: I1208 17:45:37.680301 5116 state_mem.go:107] "Deleted CPUSet assignment" podUID="dfd5a096-7e6b-4fdf-9f61-9b08ec34fe2d" containerName="route-controller-manager" Dec 08 17:45:37 crc kubenswrapper[5116]: I1208 17:45:37.680511 5116 memory_manager.go:356] "RemoveStaleState removing state" podUID="dfd5a096-7e6b-4fdf-9f61-9b08ec34fe2d" containerName="route-controller-manager" Dec 08 17:45:37 crc kubenswrapper[5116]: I1208 17:45:37.687337 5116 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-58f48579d7-c8857" Dec 08 17:45:37 crc kubenswrapper[5116]: I1208 17:45:37.701535 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/92228c7c-86d7-4f7e-9bfe-aaee760a472c-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-79f9d4b4b6-299vf\" (UID: \"92228c7c-86d7-4f7e-9bfe-aaee760a472c\") " pod="openshift-authentication/oauth-openshift-79f9d4b4b6-299vf" Dec 08 17:45:37 crc kubenswrapper[5116]: I1208 17:45:37.701604 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/92228c7c-86d7-4f7e-9bfe-aaee760a472c-v4-0-config-user-template-error\") pod \"oauth-openshift-79f9d4b4b6-299vf\" (UID: \"92228c7c-86d7-4f7e-9bfe-aaee760a472c\") " pod="openshift-authentication/oauth-openshift-79f9d4b4b6-299vf" Dec 08 17:45:37 crc kubenswrapper[5116]: I1208 17:45:37.701652 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/92228c7c-86d7-4f7e-9bfe-aaee760a472c-v4-0-config-system-serving-cert\") pod \"oauth-openshift-79f9d4b4b6-299vf\" (UID: \"92228c7c-86d7-4f7e-9bfe-aaee760a472c\") " pod="openshift-authentication/oauth-openshift-79f9d4b4b6-299vf" Dec 08 17:45:37 crc kubenswrapper[5116]: I1208 17:45:37.701688 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/92228c7c-86d7-4f7e-9bfe-aaee760a472c-v4-0-config-system-cliconfig\") pod \"oauth-openshift-79f9d4b4b6-299vf\" (UID: \"92228c7c-86d7-4f7e-9bfe-aaee760a472c\") " pod="openshift-authentication/oauth-openshift-79f9d4b4b6-299vf" Dec 08 17:45:37 crc 
kubenswrapper[5116]: I1208 17:45:37.701715 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/92228c7c-86d7-4f7e-9bfe-aaee760a472c-audit-dir\") pod \"oauth-openshift-79f9d4b4b6-299vf\" (UID: \"92228c7c-86d7-4f7e-9bfe-aaee760a472c\") " pod="openshift-authentication/oauth-openshift-79f9d4b4b6-299vf" Dec 08 17:45:37 crc kubenswrapper[5116]: I1208 17:45:37.701745 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/92228c7c-86d7-4f7e-9bfe-aaee760a472c-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-79f9d4b4b6-299vf\" (UID: \"92228c7c-86d7-4f7e-9bfe-aaee760a472c\") " pod="openshift-authentication/oauth-openshift-79f9d4b4b6-299vf" Dec 08 17:45:37 crc kubenswrapper[5116]: I1208 17:45:37.701781 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/92228c7c-86d7-4f7e-9bfe-aaee760a472c-v4-0-config-system-session\") pod \"oauth-openshift-79f9d4b4b6-299vf\" (UID: \"92228c7c-86d7-4f7e-9bfe-aaee760a472c\") " pod="openshift-authentication/oauth-openshift-79f9d4b4b6-299vf" Dec 08 17:45:37 crc kubenswrapper[5116]: I1208 17:45:37.701805 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/92228c7c-86d7-4f7e-9bfe-aaee760a472c-v4-0-config-system-service-ca\") pod \"oauth-openshift-79f9d4b4b6-299vf\" (UID: \"92228c7c-86d7-4f7e-9bfe-aaee760a472c\") " pod="openshift-authentication/oauth-openshift-79f9d4b4b6-299vf" Dec 08 17:45:37 crc kubenswrapper[5116]: I1208 17:45:37.701826 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/92228c7c-86d7-4f7e-9bfe-aaee760a472c-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-79f9d4b4b6-299vf\" (UID: \"92228c7c-86d7-4f7e-9bfe-aaee760a472c\") " pod="openshift-authentication/oauth-openshift-79f9d4b4b6-299vf" Dec 08 17:45:37 crc kubenswrapper[5116]: I1208 17:45:37.701849 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dw6nd\" (UniqueName: \"kubernetes.io/projected/92228c7c-86d7-4f7e-9bfe-aaee760a472c-kube-api-access-dw6nd\") pod \"oauth-openshift-79f9d4b4b6-299vf\" (UID: \"92228c7c-86d7-4f7e-9bfe-aaee760a472c\") " pod="openshift-authentication/oauth-openshift-79f9d4b4b6-299vf" Dec 08 17:45:37 crc kubenswrapper[5116]: I1208 17:45:37.701892 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/92228c7c-86d7-4f7e-9bfe-aaee760a472c-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-79f9d4b4b6-299vf\" (UID: \"92228c7c-86d7-4f7e-9bfe-aaee760a472c\") " pod="openshift-authentication/oauth-openshift-79f9d4b4b6-299vf" Dec 08 17:45:37 crc kubenswrapper[5116]: I1208 17:45:37.701957 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/92228c7c-86d7-4f7e-9bfe-aaee760a472c-audit-policies\") pod \"oauth-openshift-79f9d4b4b6-299vf\" (UID: \"92228c7c-86d7-4f7e-9bfe-aaee760a472c\") " pod="openshift-authentication/oauth-openshift-79f9d4b4b6-299vf" Dec 08 17:45:37 crc kubenswrapper[5116]: I1208 17:45:37.701984 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/92228c7c-86d7-4f7e-9bfe-aaee760a472c-v4-0-config-user-template-login\") pod 
\"oauth-openshift-79f9d4b4b6-299vf\" (UID: \"92228c7c-86d7-4f7e-9bfe-aaee760a472c\") " pod="openshift-authentication/oauth-openshift-79f9d4b4b6-299vf" Dec 08 17:45:37 crc kubenswrapper[5116]: I1208 17:45:37.702027 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/92228c7c-86d7-4f7e-9bfe-aaee760a472c-v4-0-config-system-router-certs\") pod \"oauth-openshift-79f9d4b4b6-299vf\" (UID: \"92228c7c-86d7-4f7e-9bfe-aaee760a472c\") " pod="openshift-authentication/oauth-openshift-79f9d4b4b6-299vf" Dec 08 17:45:37 crc kubenswrapper[5116]: I1208 17:45:37.704180 5116 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-58f48579d7-c8857"] Dec 08 17:45:37 crc kubenswrapper[5116]: I1208 17:45:37.802577 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/dfd5a096-7e6b-4fdf-9f61-9b08ec34fe2d-serving-cert\") pod \"dfd5a096-7e6b-4fdf-9f61-9b08ec34fe2d\" (UID: \"dfd5a096-7e6b-4fdf-9f61-9b08ec34fe2d\") " Dec 08 17:45:37 crc kubenswrapper[5116]: I1208 17:45:37.802952 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/dfd5a096-7e6b-4fdf-9f61-9b08ec34fe2d-tmp\") pod \"dfd5a096-7e6b-4fdf-9f61-9b08ec34fe2d\" (UID: \"dfd5a096-7e6b-4fdf-9f61-9b08ec34fe2d\") " Dec 08 17:45:37 crc kubenswrapper[5116]: I1208 17:45:37.803175 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/dfd5a096-7e6b-4fdf-9f61-9b08ec34fe2d-client-ca\") pod \"dfd5a096-7e6b-4fdf-9f61-9b08ec34fe2d\" (UID: \"dfd5a096-7e6b-4fdf-9f61-9b08ec34fe2d\") " Dec 08 17:45:37 crc kubenswrapper[5116]: I1208 17:45:37.803314 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" 
(UniqueName: \"kubernetes.io/configmap/dfd5a096-7e6b-4fdf-9f61-9b08ec34fe2d-config\") pod \"dfd5a096-7e6b-4fdf-9f61-9b08ec34fe2d\" (UID: \"dfd5a096-7e6b-4fdf-9f61-9b08ec34fe2d\") " Dec 08 17:45:37 crc kubenswrapper[5116]: I1208 17:45:37.803415 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kflsb\" (UniqueName: \"kubernetes.io/projected/dfd5a096-7e6b-4fdf-9f61-9b08ec34fe2d-kube-api-access-kflsb\") pod \"dfd5a096-7e6b-4fdf-9f61-9b08ec34fe2d\" (UID: \"dfd5a096-7e6b-4fdf-9f61-9b08ec34fe2d\") " Dec 08 17:45:37 crc kubenswrapper[5116]: I1208 17:45:37.803322 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/dfd5a096-7e6b-4fdf-9f61-9b08ec34fe2d-tmp" (OuterVolumeSpecName: "tmp") pod "dfd5a096-7e6b-4fdf-9f61-9b08ec34fe2d" (UID: "dfd5a096-7e6b-4fdf-9f61-9b08ec34fe2d"). InnerVolumeSpecName "tmp". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 08 17:45:37 crc kubenswrapper[5116]: I1208 17:45:37.803911 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dfd5a096-7e6b-4fdf-9f61-9b08ec34fe2d-client-ca" (OuterVolumeSpecName: "client-ca") pod "dfd5a096-7e6b-4fdf-9f61-9b08ec34fe2d" (UID: "dfd5a096-7e6b-4fdf-9f61-9b08ec34fe2d"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 08 17:45:37 crc kubenswrapper[5116]: I1208 17:45:37.803882 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dfd5a096-7e6b-4fdf-9f61-9b08ec34fe2d-config" (OuterVolumeSpecName: "config") pod "dfd5a096-7e6b-4fdf-9f61-9b08ec34fe2d" (UID: "dfd5a096-7e6b-4fdf-9f61-9b08ec34fe2d"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 08 17:45:37 crc kubenswrapper[5116]: I1208 17:45:37.804078 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/92228c7c-86d7-4f7e-9bfe-aaee760a472c-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-79f9d4b4b6-299vf\" (UID: \"92228c7c-86d7-4f7e-9bfe-aaee760a472c\") " pod="openshift-authentication/oauth-openshift-79f9d4b4b6-299vf" Dec 08 17:45:37 crc kubenswrapper[5116]: I1208 17:45:37.804228 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/92228c7c-86d7-4f7e-9bfe-aaee760a472c-audit-policies\") pod \"oauth-openshift-79f9d4b4b6-299vf\" (UID: \"92228c7c-86d7-4f7e-9bfe-aaee760a472c\") " pod="openshift-authentication/oauth-openshift-79f9d4b4b6-299vf" Dec 08 17:45:37 crc kubenswrapper[5116]: I1208 17:45:37.804338 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/92228c7c-86d7-4f7e-9bfe-aaee760a472c-v4-0-config-user-template-login\") pod \"oauth-openshift-79f9d4b4b6-299vf\" (UID: \"92228c7c-86d7-4f7e-9bfe-aaee760a472c\") " pod="openshift-authentication/oauth-openshift-79f9d4b4b6-299vf" Dec 08 17:45:37 crc kubenswrapper[5116]: I1208 17:45:37.804524 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zqxr4\" (UniqueName: \"kubernetes.io/projected/7364ede5-dddc-404a-8ec8-8ae1afe2ced3-kube-api-access-zqxr4\") pod \"route-controller-manager-58f48579d7-c8857\" (UID: \"7364ede5-dddc-404a-8ec8-8ae1afe2ced3\") " pod="openshift-route-controller-manager/route-controller-manager-58f48579d7-c8857" Dec 08 17:45:37 crc kubenswrapper[5116]: I1208 17:45:37.804707 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume 
\"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/92228c7c-86d7-4f7e-9bfe-aaee760a472c-v4-0-config-system-router-certs\") pod \"oauth-openshift-79f9d4b4b6-299vf\" (UID: \"92228c7c-86d7-4f7e-9bfe-aaee760a472c\") " pod="openshift-authentication/oauth-openshift-79f9d4b4b6-299vf" Dec 08 17:45:37 crc kubenswrapper[5116]: I1208 17:45:37.804961 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7364ede5-dddc-404a-8ec8-8ae1afe2ced3-client-ca\") pod \"route-controller-manager-58f48579d7-c8857\" (UID: \"7364ede5-dddc-404a-8ec8-8ae1afe2ced3\") " pod="openshift-route-controller-manager/route-controller-manager-58f48579d7-c8857" Dec 08 17:45:37 crc kubenswrapper[5116]: I1208 17:45:37.805023 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/7364ede5-dddc-404a-8ec8-8ae1afe2ced3-tmp\") pod \"route-controller-manager-58f48579d7-c8857\" (UID: \"7364ede5-dddc-404a-8ec8-8ae1afe2ced3\") " pod="openshift-route-controller-manager/route-controller-manager-58f48579d7-c8857" Dec 08 17:45:37 crc kubenswrapper[5116]: I1208 17:45:37.805085 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7364ede5-dddc-404a-8ec8-8ae1afe2ced3-config\") pod \"route-controller-manager-58f48579d7-c8857\" (UID: \"7364ede5-dddc-404a-8ec8-8ae1afe2ced3\") " pod="openshift-route-controller-manager/route-controller-manager-58f48579d7-c8857" Dec 08 17:45:37 crc kubenswrapper[5116]: I1208 17:45:37.805164 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/92228c7c-86d7-4f7e-9bfe-aaee760a472c-audit-policies\") pod \"oauth-openshift-79f9d4b4b6-299vf\" (UID: \"92228c7c-86d7-4f7e-9bfe-aaee760a472c\") " 
pod="openshift-authentication/oauth-openshift-79f9d4b4b6-299vf" Dec 08 17:45:37 crc kubenswrapper[5116]: I1208 17:45:37.805215 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/92228c7c-86d7-4f7e-9bfe-aaee760a472c-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-79f9d4b4b6-299vf\" (UID: \"92228c7c-86d7-4f7e-9bfe-aaee760a472c\") " pod="openshift-authentication/oauth-openshift-79f9d4b4b6-299vf" Dec 08 17:45:37 crc kubenswrapper[5116]: I1208 17:45:37.805330 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/92228c7c-86d7-4f7e-9bfe-aaee760a472c-v4-0-config-user-template-error\") pod \"oauth-openshift-79f9d4b4b6-299vf\" (UID: \"92228c7c-86d7-4f7e-9bfe-aaee760a472c\") " pod="openshift-authentication/oauth-openshift-79f9d4b4b6-299vf" Dec 08 17:45:37 crc kubenswrapper[5116]: I1208 17:45:37.805374 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7364ede5-dddc-404a-8ec8-8ae1afe2ced3-serving-cert\") pod \"route-controller-manager-58f48579d7-c8857\" (UID: \"7364ede5-dddc-404a-8ec8-8ae1afe2ced3\") " pod="openshift-route-controller-manager/route-controller-manager-58f48579d7-c8857" Dec 08 17:45:37 crc kubenswrapper[5116]: I1208 17:45:37.805524 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/92228c7c-86d7-4f7e-9bfe-aaee760a472c-v4-0-config-system-serving-cert\") pod \"oauth-openshift-79f9d4b4b6-299vf\" (UID: \"92228c7c-86d7-4f7e-9bfe-aaee760a472c\") " pod="openshift-authentication/oauth-openshift-79f9d4b4b6-299vf" Dec 08 17:45:37 crc kubenswrapper[5116]: I1208 17:45:37.805575 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for 
volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/92228c7c-86d7-4f7e-9bfe-aaee760a472c-v4-0-config-system-cliconfig\") pod \"oauth-openshift-79f9d4b4b6-299vf\" (UID: \"92228c7c-86d7-4f7e-9bfe-aaee760a472c\") " pod="openshift-authentication/oauth-openshift-79f9d4b4b6-299vf" Dec 08 17:45:37 crc kubenswrapper[5116]: I1208 17:45:37.805605 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/92228c7c-86d7-4f7e-9bfe-aaee760a472c-audit-dir\") pod \"oauth-openshift-79f9d4b4b6-299vf\" (UID: \"92228c7c-86d7-4f7e-9bfe-aaee760a472c\") " pod="openshift-authentication/oauth-openshift-79f9d4b4b6-299vf" Dec 08 17:45:37 crc kubenswrapper[5116]: I1208 17:45:37.805642 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/92228c7c-86d7-4f7e-9bfe-aaee760a472c-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-79f9d4b4b6-299vf\" (UID: \"92228c7c-86d7-4f7e-9bfe-aaee760a472c\") " pod="openshift-authentication/oauth-openshift-79f9d4b4b6-299vf" Dec 08 17:45:37 crc kubenswrapper[5116]: I1208 17:45:37.805670 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/92228c7c-86d7-4f7e-9bfe-aaee760a472c-v4-0-config-system-session\") pod \"oauth-openshift-79f9d4b4b6-299vf\" (UID: \"92228c7c-86d7-4f7e-9bfe-aaee760a472c\") " pod="openshift-authentication/oauth-openshift-79f9d4b4b6-299vf" Dec 08 17:45:37 crc kubenswrapper[5116]: I1208 17:45:37.805673 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/92228c7c-86d7-4f7e-9bfe-aaee760a472c-audit-dir\") pod \"oauth-openshift-79f9d4b4b6-299vf\" (UID: \"92228c7c-86d7-4f7e-9bfe-aaee760a472c\") " pod="openshift-authentication/oauth-openshift-79f9d4b4b6-299vf" Dec 08 
17:45:37 crc kubenswrapper[5116]: I1208 17:45:37.805698 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/92228c7c-86d7-4f7e-9bfe-aaee760a472c-v4-0-config-system-service-ca\") pod \"oauth-openshift-79f9d4b4b6-299vf\" (UID: \"92228c7c-86d7-4f7e-9bfe-aaee760a472c\") " pod="openshift-authentication/oauth-openshift-79f9d4b4b6-299vf" Dec 08 17:45:37 crc kubenswrapper[5116]: I1208 17:45:37.805724 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/92228c7c-86d7-4f7e-9bfe-aaee760a472c-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-79f9d4b4b6-299vf\" (UID: \"92228c7c-86d7-4f7e-9bfe-aaee760a472c\") " pod="openshift-authentication/oauth-openshift-79f9d4b4b6-299vf" Dec 08 17:45:37 crc kubenswrapper[5116]: I1208 17:45:37.805748 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-dw6nd\" (UniqueName: \"kubernetes.io/projected/92228c7c-86d7-4f7e-9bfe-aaee760a472c-kube-api-access-dw6nd\") pod \"oauth-openshift-79f9d4b4b6-299vf\" (UID: \"92228c7c-86d7-4f7e-9bfe-aaee760a472c\") " pod="openshift-authentication/oauth-openshift-79f9d4b4b6-299vf" Dec 08 17:45:37 crc kubenswrapper[5116]: I1208 17:45:37.805815 5116 reconciler_common.go:299] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/dfd5a096-7e6b-4fdf-9f61-9b08ec34fe2d-client-ca\") on node \"crc\" DevicePath \"\"" Dec 08 17:45:37 crc kubenswrapper[5116]: I1208 17:45:37.805833 5116 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dfd5a096-7e6b-4fdf-9f61-9b08ec34fe2d-config\") on node \"crc\" DevicePath \"\"" Dec 08 17:45:37 crc kubenswrapper[5116]: I1208 17:45:37.805848 5116 reconciler_common.go:299] "Volume detached for volume \"tmp\" (UniqueName: 
\"kubernetes.io/empty-dir/dfd5a096-7e6b-4fdf-9f61-9b08ec34fe2d-tmp\") on node \"crc\" DevicePath \"\"" Dec 08 17:45:37 crc kubenswrapper[5116]: I1208 17:45:37.806339 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/92228c7c-86d7-4f7e-9bfe-aaee760a472c-v4-0-config-system-cliconfig\") pod \"oauth-openshift-79f9d4b4b6-299vf\" (UID: \"92228c7c-86d7-4f7e-9bfe-aaee760a472c\") " pod="openshift-authentication/oauth-openshift-79f9d4b4b6-299vf" Dec 08 17:45:37 crc kubenswrapper[5116]: I1208 17:45:37.806620 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/92228c7c-86d7-4f7e-9bfe-aaee760a472c-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-79f9d4b4b6-299vf\" (UID: \"92228c7c-86d7-4f7e-9bfe-aaee760a472c\") " pod="openshift-authentication/oauth-openshift-79f9d4b4b6-299vf" Dec 08 17:45:37 crc kubenswrapper[5116]: I1208 17:45:37.806827 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/92228c7c-86d7-4f7e-9bfe-aaee760a472c-v4-0-config-system-service-ca\") pod \"oauth-openshift-79f9d4b4b6-299vf\" (UID: \"92228c7c-86d7-4f7e-9bfe-aaee760a472c\") " pod="openshift-authentication/oauth-openshift-79f9d4b4b6-299vf" Dec 08 17:45:37 crc kubenswrapper[5116]: I1208 17:45:37.809791 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/92228c7c-86d7-4f7e-9bfe-aaee760a472c-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-79f9d4b4b6-299vf\" (UID: \"92228c7c-86d7-4f7e-9bfe-aaee760a472c\") " pod="openshift-authentication/oauth-openshift-79f9d4b4b6-299vf" Dec 08 17:45:37 crc kubenswrapper[5116]: I1208 17:45:37.809858 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume 
\"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/92228c7c-86d7-4f7e-9bfe-aaee760a472c-v4-0-config-system-serving-cert\") pod \"oauth-openshift-79f9d4b4b6-299vf\" (UID: \"92228c7c-86d7-4f7e-9bfe-aaee760a472c\") " pod="openshift-authentication/oauth-openshift-79f9d4b4b6-299vf" Dec 08 17:45:37 crc kubenswrapper[5116]: I1208 17:45:37.810236 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/92228c7c-86d7-4f7e-9bfe-aaee760a472c-v4-0-config-user-template-login\") pod \"oauth-openshift-79f9d4b4b6-299vf\" (UID: \"92228c7c-86d7-4f7e-9bfe-aaee760a472c\") " pod="openshift-authentication/oauth-openshift-79f9d4b4b6-299vf" Dec 08 17:45:37 crc kubenswrapper[5116]: I1208 17:45:37.810559 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dfd5a096-7e6b-4fdf-9f61-9b08ec34fe2d-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "dfd5a096-7e6b-4fdf-9f61-9b08ec34fe2d" (UID: "dfd5a096-7e6b-4fdf-9f61-9b08ec34fe2d"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 08 17:45:37 crc kubenswrapper[5116]: I1208 17:45:37.810877 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/92228c7c-86d7-4f7e-9bfe-aaee760a472c-v4-0-config-system-session\") pod \"oauth-openshift-79f9d4b4b6-299vf\" (UID: \"92228c7c-86d7-4f7e-9bfe-aaee760a472c\") " pod="openshift-authentication/oauth-openshift-79f9d4b4b6-299vf" Dec 08 17:45:37 crc kubenswrapper[5116]: I1208 17:45:37.811391 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/92228c7c-86d7-4f7e-9bfe-aaee760a472c-v4-0-config-system-router-certs\") pod \"oauth-openshift-79f9d4b4b6-299vf\" (UID: \"92228c7c-86d7-4f7e-9bfe-aaee760a472c\") " pod="openshift-authentication/oauth-openshift-79f9d4b4b6-299vf" Dec 08 17:45:37 crc kubenswrapper[5116]: I1208 17:45:37.812336 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/92228c7c-86d7-4f7e-9bfe-aaee760a472c-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-79f9d4b4b6-299vf\" (UID: \"92228c7c-86d7-4f7e-9bfe-aaee760a472c\") " pod="openshift-authentication/oauth-openshift-79f9d4b4b6-299vf" Dec 08 17:45:37 crc kubenswrapper[5116]: I1208 17:45:37.812466 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/92228c7c-86d7-4f7e-9bfe-aaee760a472c-v4-0-config-user-template-error\") pod \"oauth-openshift-79f9d4b4b6-299vf\" (UID: \"92228c7c-86d7-4f7e-9bfe-aaee760a472c\") " pod="openshift-authentication/oauth-openshift-79f9d4b4b6-299vf" Dec 08 17:45:37 crc kubenswrapper[5116]: I1208 17:45:37.812648 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/projected/dfd5a096-7e6b-4fdf-9f61-9b08ec34fe2d-kube-api-access-kflsb" (OuterVolumeSpecName: "kube-api-access-kflsb") pod "dfd5a096-7e6b-4fdf-9f61-9b08ec34fe2d" (UID: "dfd5a096-7e6b-4fdf-9f61-9b08ec34fe2d"). InnerVolumeSpecName "kube-api-access-kflsb". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 08 17:45:37 crc kubenswrapper[5116]: I1208 17:45:37.846040 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-dw6nd\" (UniqueName: \"kubernetes.io/projected/92228c7c-86d7-4f7e-9bfe-aaee760a472c-kube-api-access-dw6nd\") pod \"oauth-openshift-79f9d4b4b6-299vf\" (UID: \"92228c7c-86d7-4f7e-9bfe-aaee760a472c\") " pod="openshift-authentication/oauth-openshift-79f9d4b4b6-299vf" Dec 08 17:45:37 crc kubenswrapper[5116]: I1208 17:45:37.853662 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/92228c7c-86d7-4f7e-9bfe-aaee760a472c-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-79f9d4b4b6-299vf\" (UID: \"92228c7c-86d7-4f7e-9bfe-aaee760a472c\") " pod="openshift-authentication/oauth-openshift-79f9d4b4b6-299vf" Dec 08 17:45:37 crc kubenswrapper[5116]: I1208 17:45:37.858887 5116 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-79f9d4b4b6-299vf" Dec 08 17:45:37 crc kubenswrapper[5116]: I1208 17:45:37.911450 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7364ede5-dddc-404a-8ec8-8ae1afe2ced3-config\") pod \"route-controller-manager-58f48579d7-c8857\" (UID: \"7364ede5-dddc-404a-8ec8-8ae1afe2ced3\") " pod="openshift-route-controller-manager/route-controller-manager-58f48579d7-c8857" Dec 08 17:45:37 crc kubenswrapper[5116]: I1208 17:45:37.911845 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7364ede5-dddc-404a-8ec8-8ae1afe2ced3-serving-cert\") pod \"route-controller-manager-58f48579d7-c8857\" (UID: \"7364ede5-dddc-404a-8ec8-8ae1afe2ced3\") " pod="openshift-route-controller-manager/route-controller-manager-58f48579d7-c8857" Dec 08 17:45:37 crc kubenswrapper[5116]: I1208 17:45:37.912029 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-zqxr4\" (UniqueName: \"kubernetes.io/projected/7364ede5-dddc-404a-8ec8-8ae1afe2ced3-kube-api-access-zqxr4\") pod \"route-controller-manager-58f48579d7-c8857\" (UID: \"7364ede5-dddc-404a-8ec8-8ae1afe2ced3\") " pod="openshift-route-controller-manager/route-controller-manager-58f48579d7-c8857" Dec 08 17:45:37 crc kubenswrapper[5116]: I1208 17:45:37.912171 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7364ede5-dddc-404a-8ec8-8ae1afe2ced3-client-ca\") pod \"route-controller-manager-58f48579d7-c8857\" (UID: \"7364ede5-dddc-404a-8ec8-8ae1afe2ced3\") " pod="openshift-route-controller-manager/route-controller-manager-58f48579d7-c8857" Dec 08 17:45:37 crc kubenswrapper[5116]: I1208 17:45:37.912313 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: 
\"kubernetes.io/empty-dir/7364ede5-dddc-404a-8ec8-8ae1afe2ced3-tmp\") pod \"route-controller-manager-58f48579d7-c8857\" (UID: \"7364ede5-dddc-404a-8ec8-8ae1afe2ced3\") " pod="openshift-route-controller-manager/route-controller-manager-58f48579d7-c8857" Dec 08 17:45:37 crc kubenswrapper[5116]: I1208 17:45:37.912460 5116 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-kflsb\" (UniqueName: \"kubernetes.io/projected/dfd5a096-7e6b-4fdf-9f61-9b08ec34fe2d-kube-api-access-kflsb\") on node \"crc\" DevicePath \"\"" Dec 08 17:45:37 crc kubenswrapper[5116]: I1208 17:45:37.912563 5116 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/dfd5a096-7e6b-4fdf-9f61-9b08ec34fe2d-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 08 17:45:37 crc kubenswrapper[5116]: I1208 17:45:37.912984 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7364ede5-dddc-404a-8ec8-8ae1afe2ced3-config\") pod \"route-controller-manager-58f48579d7-c8857\" (UID: \"7364ede5-dddc-404a-8ec8-8ae1afe2ced3\") " pod="openshift-route-controller-manager/route-controller-manager-58f48579d7-c8857" Dec 08 17:45:37 crc kubenswrapper[5116]: I1208 17:45:37.913133 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/7364ede5-dddc-404a-8ec8-8ae1afe2ced3-tmp\") pod \"route-controller-manager-58f48579d7-c8857\" (UID: \"7364ede5-dddc-404a-8ec8-8ae1afe2ced3\") " pod="openshift-route-controller-manager/route-controller-manager-58f48579d7-c8857" Dec 08 17:45:37 crc kubenswrapper[5116]: I1208 17:45:37.913351 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7364ede5-dddc-404a-8ec8-8ae1afe2ced3-client-ca\") pod \"route-controller-manager-58f48579d7-c8857\" (UID: \"7364ede5-dddc-404a-8ec8-8ae1afe2ced3\") " 
pod="openshift-route-controller-manager/route-controller-manager-58f48579d7-c8857" Dec 08 17:45:37 crc kubenswrapper[5116]: I1208 17:45:37.918923 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7364ede5-dddc-404a-8ec8-8ae1afe2ced3-serving-cert\") pod \"route-controller-manager-58f48579d7-c8857\" (UID: \"7364ede5-dddc-404a-8ec8-8ae1afe2ced3\") " pod="openshift-route-controller-manager/route-controller-manager-58f48579d7-c8857" Dec 08 17:45:37 crc kubenswrapper[5116]: I1208 17:45:37.931558 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-zqxr4\" (UniqueName: \"kubernetes.io/projected/7364ede5-dddc-404a-8ec8-8ae1afe2ced3-kube-api-access-zqxr4\") pod \"route-controller-manager-58f48579d7-c8857\" (UID: \"7364ede5-dddc-404a-8ec8-8ae1afe2ced3\") " pod="openshift-route-controller-manager/route-controller-manager-58f48579d7-c8857" Dec 08 17:45:37 crc kubenswrapper[5116]: I1208 17:45:37.967921 5116 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-66458b6674-qnwj9"] Dec 08 17:45:37 crc kubenswrapper[5116]: I1208 17:45:37.971711 5116 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-authentication/oauth-openshift-66458b6674-qnwj9"] Dec 08 17:45:38 crc kubenswrapper[5116]: I1208 17:45:38.001379 5116 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-5d8bcbcb4d-xbd9k" Dec 08 17:45:38 crc kubenswrapper[5116]: I1208 17:45:38.013449 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2cc72e05-3c7d-423c-8000-2afea70742d6-serving-cert\") pod \"2cc72e05-3c7d-423c-8000-2afea70742d6\" (UID: \"2cc72e05-3c7d-423c-8000-2afea70742d6\") " Dec 08 17:45:38 crc kubenswrapper[5116]: I1208 17:45:38.013495 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/2cc72e05-3c7d-423c-8000-2afea70742d6-tmp\") pod \"2cc72e05-3c7d-423c-8000-2afea70742d6\" (UID: \"2cc72e05-3c7d-423c-8000-2afea70742d6\") " Dec 08 17:45:38 crc kubenswrapper[5116]: I1208 17:45:38.013528 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kwd95\" (UniqueName: \"kubernetes.io/projected/2cc72e05-3c7d-423c-8000-2afea70742d6-kube-api-access-kwd95\") pod \"2cc72e05-3c7d-423c-8000-2afea70742d6\" (UID: \"2cc72e05-3c7d-423c-8000-2afea70742d6\") " Dec 08 17:45:38 crc kubenswrapper[5116]: I1208 17:45:38.013564 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/2cc72e05-3c7d-423c-8000-2afea70742d6-proxy-ca-bundles\") pod \"2cc72e05-3c7d-423c-8000-2afea70742d6\" (UID: \"2cc72e05-3c7d-423c-8000-2afea70742d6\") " Dec 08 17:45:38 crc kubenswrapper[5116]: I1208 17:45:38.013607 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/2cc72e05-3c7d-423c-8000-2afea70742d6-client-ca\") pod \"2cc72e05-3c7d-423c-8000-2afea70742d6\" (UID: \"2cc72e05-3c7d-423c-8000-2afea70742d6\") " Dec 08 17:45:38 crc kubenswrapper[5116]: I1208 17:45:38.013637 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume 
\"config\" (UniqueName: \"kubernetes.io/configmap/2cc72e05-3c7d-423c-8000-2afea70742d6-config\") pod \"2cc72e05-3c7d-423c-8000-2afea70742d6\" (UID: \"2cc72e05-3c7d-423c-8000-2afea70742d6\") " Dec 08 17:45:38 crc kubenswrapper[5116]: I1208 17:45:38.015191 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2cc72e05-3c7d-423c-8000-2afea70742d6-config" (OuterVolumeSpecName: "config") pod "2cc72e05-3c7d-423c-8000-2afea70742d6" (UID: "2cc72e05-3c7d-423c-8000-2afea70742d6"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 08 17:45:38 crc kubenswrapper[5116]: I1208 17:45:38.016076 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2cc72e05-3c7d-423c-8000-2afea70742d6-tmp" (OuterVolumeSpecName: "tmp") pod "2cc72e05-3c7d-423c-8000-2afea70742d6" (UID: "2cc72e05-3c7d-423c-8000-2afea70742d6"). InnerVolumeSpecName "tmp". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 08 17:45:38 crc kubenswrapper[5116]: I1208 17:45:38.016090 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2cc72e05-3c7d-423c-8000-2afea70742d6-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "2cc72e05-3c7d-423c-8000-2afea70742d6" (UID: "2cc72e05-3c7d-423c-8000-2afea70742d6"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 08 17:45:38 crc kubenswrapper[5116]: I1208 17:45:38.016600 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2cc72e05-3c7d-423c-8000-2afea70742d6-client-ca" (OuterVolumeSpecName: "client-ca") pod "2cc72e05-3c7d-423c-8000-2afea70742d6" (UID: "2cc72e05-3c7d-423c-8000-2afea70742d6"). InnerVolumeSpecName "client-ca". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 08 17:45:38 crc kubenswrapper[5116]: I1208 17:45:38.022811 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2cc72e05-3c7d-423c-8000-2afea70742d6-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "2cc72e05-3c7d-423c-8000-2afea70742d6" (UID: "2cc72e05-3c7d-423c-8000-2afea70742d6"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 08 17:45:38 crc kubenswrapper[5116]: I1208 17:45:38.023327 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2cc72e05-3c7d-423c-8000-2afea70742d6-kube-api-access-kwd95" (OuterVolumeSpecName: "kube-api-access-kwd95") pod "2cc72e05-3c7d-423c-8000-2afea70742d6" (UID: "2cc72e05-3c7d-423c-8000-2afea70742d6"). InnerVolumeSpecName "kube-api-access-kwd95". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 08 17:45:38 crc kubenswrapper[5116]: I1208 17:45:38.022835 5116 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-58f48579d7-c8857" Dec 08 17:45:38 crc kubenswrapper[5116]: I1208 17:45:38.033880 5116 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-766495d899-4wfjn"] Dec 08 17:45:38 crc kubenswrapper[5116]: I1208 17:45:38.038043 5116 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="2cc72e05-3c7d-423c-8000-2afea70742d6" containerName="controller-manager" Dec 08 17:45:38 crc kubenswrapper[5116]: I1208 17:45:38.038091 5116 state_mem.go:107] "Deleted CPUSet assignment" podUID="2cc72e05-3c7d-423c-8000-2afea70742d6" containerName="controller-manager" Dec 08 17:45:38 crc kubenswrapper[5116]: I1208 17:45:38.038279 5116 memory_manager.go:356] "RemoveStaleState removing state" podUID="2cc72e05-3c7d-423c-8000-2afea70742d6" containerName="controller-manager" Dec 08 17:45:38 crc kubenswrapper[5116]: I1208 17:45:38.048533 5116 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-766495d899-4wfjn" Dec 08 17:45:38 crc kubenswrapper[5116]: I1208 17:45:38.055352 5116 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-766495d899-4wfjn"] Dec 08 17:45:38 crc kubenswrapper[5116]: I1208 17:45:38.115109 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1e2dc1a4-c295-4f2b-b167-fdb40f1e6b0d-serving-cert\") pod \"controller-manager-766495d899-4wfjn\" (UID: \"1e2dc1a4-c295-4f2b-b167-fdb40f1e6b0d\") " pod="openshift-controller-manager/controller-manager-766495d899-4wfjn" Dec 08 17:45:38 crc kubenswrapper[5116]: I1208 17:45:38.115162 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d7pt2\" (UniqueName: \"kubernetes.io/projected/1e2dc1a4-c295-4f2b-b167-fdb40f1e6b0d-kube-api-access-d7pt2\") pod \"controller-manager-766495d899-4wfjn\" (UID: \"1e2dc1a4-c295-4f2b-b167-fdb40f1e6b0d\") " pod="openshift-controller-manager/controller-manager-766495d899-4wfjn" Dec 08 17:45:38 crc kubenswrapper[5116]: I1208 17:45:38.115212 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/1e2dc1a4-c295-4f2b-b167-fdb40f1e6b0d-proxy-ca-bundles\") pod \"controller-manager-766495d899-4wfjn\" (UID: \"1e2dc1a4-c295-4f2b-b167-fdb40f1e6b0d\") " pod="openshift-controller-manager/controller-manager-766495d899-4wfjn" Dec 08 17:45:38 crc kubenswrapper[5116]: I1208 17:45:38.115227 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/1e2dc1a4-c295-4f2b-b167-fdb40f1e6b0d-tmp\") pod \"controller-manager-766495d899-4wfjn\" (UID: \"1e2dc1a4-c295-4f2b-b167-fdb40f1e6b0d\") " 
pod="openshift-controller-manager/controller-manager-766495d899-4wfjn"
Dec 08 17:45:38 crc kubenswrapper[5116]: I1208 17:45:38.115272 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1e2dc1a4-c295-4f2b-b167-fdb40f1e6b0d-config\") pod \"controller-manager-766495d899-4wfjn\" (UID: \"1e2dc1a4-c295-4f2b-b167-fdb40f1e6b0d\") " pod="openshift-controller-manager/controller-manager-766495d899-4wfjn"
Dec 08 17:45:38 crc kubenswrapper[5116]: I1208 17:45:38.115286 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/1e2dc1a4-c295-4f2b-b167-fdb40f1e6b0d-client-ca\") pod \"controller-manager-766495d899-4wfjn\" (UID: \"1e2dc1a4-c295-4f2b-b167-fdb40f1e6b0d\") " pod="openshift-controller-manager/controller-manager-766495d899-4wfjn"
Dec 08 17:45:38 crc kubenswrapper[5116]: I1208 17:45:38.115419 5116 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2cc72e05-3c7d-423c-8000-2afea70742d6-serving-cert\") on node \"crc\" DevicePath \"\""
Dec 08 17:45:38 crc kubenswrapper[5116]: I1208 17:45:38.115449 5116 reconciler_common.go:299] "Volume detached for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/2cc72e05-3c7d-423c-8000-2afea70742d6-tmp\") on node \"crc\" DevicePath \"\""
Dec 08 17:45:38 crc kubenswrapper[5116]: I1208 17:45:38.115459 5116 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-kwd95\" (UniqueName: \"kubernetes.io/projected/2cc72e05-3c7d-423c-8000-2afea70742d6-kube-api-access-kwd95\") on node \"crc\" DevicePath \"\""
Dec 08 17:45:38 crc kubenswrapper[5116]: I1208 17:45:38.115467 5116 reconciler_common.go:299] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/2cc72e05-3c7d-423c-8000-2afea70742d6-proxy-ca-bundles\") on node \"crc\" DevicePath \"\""
Dec 08 17:45:38 crc kubenswrapper[5116]: I1208 17:45:38.115475 5116 reconciler_common.go:299] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/2cc72e05-3c7d-423c-8000-2afea70742d6-client-ca\") on node \"crc\" DevicePath \"\""
Dec 08 17:45:38 crc kubenswrapper[5116]: I1208 17:45:38.115483 5116 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2cc72e05-3c7d-423c-8000-2afea70742d6-config\") on node \"crc\" DevicePath \"\""
Dec 08 17:45:38 crc kubenswrapper[5116]: I1208 17:45:38.216600 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-d7pt2\" (UniqueName: \"kubernetes.io/projected/1e2dc1a4-c295-4f2b-b167-fdb40f1e6b0d-kube-api-access-d7pt2\") pod \"controller-manager-766495d899-4wfjn\" (UID: \"1e2dc1a4-c295-4f2b-b167-fdb40f1e6b0d\") " pod="openshift-controller-manager/controller-manager-766495d899-4wfjn"
Dec 08 17:45:38 crc kubenswrapper[5116]: I1208 17:45:38.216667 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/1e2dc1a4-c295-4f2b-b167-fdb40f1e6b0d-proxy-ca-bundles\") pod \"controller-manager-766495d899-4wfjn\" (UID: \"1e2dc1a4-c295-4f2b-b167-fdb40f1e6b0d\") " pod="openshift-controller-manager/controller-manager-766495d899-4wfjn"
Dec 08 17:45:38 crc kubenswrapper[5116]: I1208 17:45:38.216710 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/1e2dc1a4-c295-4f2b-b167-fdb40f1e6b0d-tmp\") pod \"controller-manager-766495d899-4wfjn\" (UID: \"1e2dc1a4-c295-4f2b-b167-fdb40f1e6b0d\") " pod="openshift-controller-manager/controller-manager-766495d899-4wfjn"
Dec 08 17:45:38 crc kubenswrapper[5116]: I1208 17:45:38.216741 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1e2dc1a4-c295-4f2b-b167-fdb40f1e6b0d-config\") pod \"controller-manager-766495d899-4wfjn\" (UID: \"1e2dc1a4-c295-4f2b-b167-fdb40f1e6b0d\") " pod="openshift-controller-manager/controller-manager-766495d899-4wfjn"
Dec 08 17:45:38 crc kubenswrapper[5116]: I1208 17:45:38.216759 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/1e2dc1a4-c295-4f2b-b167-fdb40f1e6b0d-client-ca\") pod \"controller-manager-766495d899-4wfjn\" (UID: \"1e2dc1a4-c295-4f2b-b167-fdb40f1e6b0d\") " pod="openshift-controller-manager/controller-manager-766495d899-4wfjn"
Dec 08 17:45:38 crc kubenswrapper[5116]: I1208 17:45:38.216811 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1e2dc1a4-c295-4f2b-b167-fdb40f1e6b0d-serving-cert\") pod \"controller-manager-766495d899-4wfjn\" (UID: \"1e2dc1a4-c295-4f2b-b167-fdb40f1e6b0d\") " pod="openshift-controller-manager/controller-manager-766495d899-4wfjn"
Dec 08 17:45:38 crc kubenswrapper[5116]: I1208 17:45:38.218573 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/1e2dc1a4-c295-4f2b-b167-fdb40f1e6b0d-tmp\") pod \"controller-manager-766495d899-4wfjn\" (UID: \"1e2dc1a4-c295-4f2b-b167-fdb40f1e6b0d\") " pod="openshift-controller-manager/controller-manager-766495d899-4wfjn"
Dec 08 17:45:38 crc kubenswrapper[5116]: I1208 17:45:38.218938 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/1e2dc1a4-c295-4f2b-b167-fdb40f1e6b0d-proxy-ca-bundles\") pod \"controller-manager-766495d899-4wfjn\" (UID: \"1e2dc1a4-c295-4f2b-b167-fdb40f1e6b0d\") " pod="openshift-controller-manager/controller-manager-766495d899-4wfjn"
Dec 08 17:45:38 crc kubenswrapper[5116]: I1208 17:45:38.219559 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/1e2dc1a4-c295-4f2b-b167-fdb40f1e6b0d-client-ca\") pod \"controller-manager-766495d899-4wfjn\" (UID: \"1e2dc1a4-c295-4f2b-b167-fdb40f1e6b0d\") " pod="openshift-controller-manager/controller-manager-766495d899-4wfjn"
Dec 08 17:45:38 crc kubenswrapper[5116]: I1208 17:45:38.220089 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1e2dc1a4-c295-4f2b-b167-fdb40f1e6b0d-config\") pod \"controller-manager-766495d899-4wfjn\" (UID: \"1e2dc1a4-c295-4f2b-b167-fdb40f1e6b0d\") " pod="openshift-controller-manager/controller-manager-766495d899-4wfjn"
Dec 08 17:45:38 crc kubenswrapper[5116]: I1208 17:45:38.222358 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1e2dc1a4-c295-4f2b-b167-fdb40f1e6b0d-serving-cert\") pod \"controller-manager-766495d899-4wfjn\" (UID: \"1e2dc1a4-c295-4f2b-b167-fdb40f1e6b0d\") " pod="openshift-controller-manager/controller-manager-766495d899-4wfjn"
Dec 08 17:45:38 crc kubenswrapper[5116]: I1208 17:45:38.232907 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-d7pt2\" (UniqueName: \"kubernetes.io/projected/1e2dc1a4-c295-4f2b-b167-fdb40f1e6b0d-kube-api-access-d7pt2\") pod \"controller-manager-766495d899-4wfjn\" (UID: \"1e2dc1a4-c295-4f2b-b167-fdb40f1e6b0d\") " pod="openshift-controller-manager/controller-manager-766495d899-4wfjn"
Dec 08 17:45:38 crc kubenswrapper[5116]: I1208 17:45:38.369315 5116 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-79f9d4b4b6-299vf"]
Dec 08 17:45:38 crc kubenswrapper[5116]: I1208 17:45:38.370120 5116 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-766495d899-4wfjn"
Dec 08 17:45:38 crc kubenswrapper[5116]: W1208 17:45:38.395505 5116 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod92228c7c_86d7_4f7e_9bfe_aaee760a472c.slice/crio-191c78f65771cda3501e637ec4b43e2fe54dc71ce5e708660e1961043006c0be WatchSource:0}: Error finding container 191c78f65771cda3501e637ec4b43e2fe54dc71ce5e708660e1961043006c0be: Status 404 returned error can't find the container with id 191c78f65771cda3501e637ec4b43e2fe54dc71ce5e708660e1961043006c0be
Dec 08 17:45:38 crc kubenswrapper[5116]: I1208 17:45:38.466756 5116 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-58f48579d7-c8857"]
Dec 08 17:45:38 crc kubenswrapper[5116]: W1208 17:45:38.475042 5116 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7364ede5_dddc_404a_8ec8_8ae1afe2ced3.slice/crio-1588b1bcdb6688a29c648a630ae86b16120573145c1bc8ca48f127265ac5172f WatchSource:0}: Error finding container 1588b1bcdb6688a29c648a630ae86b16120573145c1bc8ca48f127265ac5172f: Status 404 returned error can't find the container with id 1588b1bcdb6688a29c648a630ae86b16120573145c1bc8ca48f127265ac5172f
Dec 08 17:45:38 crc kubenswrapper[5116]: I1208 17:45:38.550618 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-79f9d4b4b6-299vf" event={"ID":"92228c7c-86d7-4f7e-9bfe-aaee760a472c","Type":"ContainerStarted","Data":"191c78f65771cda3501e637ec4b43e2fe54dc71ce5e708660e1961043006c0be"}
Dec 08 17:45:38 crc kubenswrapper[5116]: I1208 17:45:38.555158 5116 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-5d8bcbcb4d-xbd9k"
Dec 08 17:45:38 crc kubenswrapper[5116]: I1208 17:45:38.555261 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5d8bcbcb4d-xbd9k" event={"ID":"2cc72e05-3c7d-423c-8000-2afea70742d6","Type":"ContainerDied","Data":"471cc5047a7725cd04fcf7fe4eec450f38ecbb4d5f9970efebee7f6d9797fe3e"}
Dec 08 17:45:38 crc kubenswrapper[5116]: I1208 17:45:38.555318 5116 scope.go:117] "RemoveContainer" containerID="e8960cf1c2418e9ebea18984abc84e29334ffd337a033aa320422e9935c19702"
Dec 08 17:45:38 crc kubenswrapper[5116]: I1208 17:45:38.557426 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-58f48579d7-c8857" event={"ID":"7364ede5-dddc-404a-8ec8-8ae1afe2ced3","Type":"ContainerStarted","Data":"1588b1bcdb6688a29c648a630ae86b16120573145c1bc8ca48f127265ac5172f"}
Dec 08 17:45:38 crc kubenswrapper[5116]: I1208 17:45:38.571991 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-5f94686f44-6f5wc" event={"ID":"dfd5a096-7e6b-4fdf-9f61-9b08ec34fe2d","Type":"ContainerDied","Data":"91f4f13714a8b624a3cf348b63a1d8371fb691834777be0ac89fc91146080ee4"}
Dec 08 17:45:38 crc kubenswrapper[5116]: I1208 17:45:38.572150 5116 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5f94686f44-6f5wc"
Dec 08 17:45:38 crc kubenswrapper[5116]: I1208 17:45:38.615901 5116 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-5d8bcbcb4d-xbd9k"]
Dec 08 17:45:38 crc kubenswrapper[5116]: I1208 17:45:38.622136 5116 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-5d8bcbcb4d-xbd9k"]
Dec 08 17:45:38 crc kubenswrapper[5116]: I1208 17:45:38.623100 5116 scope.go:117] "RemoveContainer" containerID="3333c75c96c7e3ed099b59abf0d6eb658e9fc0fe88e20459aca7c994c184c1a7"
Dec 08 17:45:38 crc kubenswrapper[5116]: I1208 17:45:38.632924 5116 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5f94686f44-6f5wc"]
Dec 08 17:45:38 crc kubenswrapper[5116]: I1208 17:45:38.636354 5116 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-766495d899-4wfjn"]
Dec 08 17:45:38 crc kubenswrapper[5116]: I1208 17:45:38.639407 5116 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5f94686f44-6f5wc"]
Dec 08 17:45:38 crc kubenswrapper[5116]: W1208 17:45:38.642490 5116 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1e2dc1a4_c295_4f2b_b167_fdb40f1e6b0d.slice/crio-4e87876347b17e18342895351bc749566571d2aefb31e92e8c21b972c78d3f1d WatchSource:0}: Error finding container 4e87876347b17e18342895351bc749566571d2aefb31e92e8c21b972c78d3f1d: Status 404 returned error can't find the container with id 4e87876347b17e18342895351bc749566571d2aefb31e92e8c21b972c78d3f1d
Dec 08 17:45:38 crc kubenswrapper[5116]: I1208 17:45:38.691159 5116 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2cc72e05-3c7d-423c-8000-2afea70742d6" path="/var/lib/kubelet/pods/2cc72e05-3c7d-423c-8000-2afea70742d6/volumes"
Dec 08 17:45:38 crc kubenswrapper[5116]: I1208 17:45:38.691869 5116 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b8dbe374-c944-4fd4-bb80-8dc26c3e5d24" path="/var/lib/kubelet/pods/b8dbe374-c944-4fd4-bb80-8dc26c3e5d24/volumes"
Dec 08 17:45:38 crc kubenswrapper[5116]: I1208 17:45:38.693452 5116 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dfd5a096-7e6b-4fdf-9f61-9b08ec34fe2d" path="/var/lib/kubelet/pods/dfd5a096-7e6b-4fdf-9f61-9b08ec34fe2d/volumes"
Dec 08 17:45:39 crc kubenswrapper[5116]: I1208 17:45:39.581370 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-58f48579d7-c8857" event={"ID":"7364ede5-dddc-404a-8ec8-8ae1afe2ced3","Type":"ContainerStarted","Data":"e09d965ca067ae130d40f7d4d422e51233341fa697c26c66c1ee91bdcfbc711c"}
Dec 08 17:45:39 crc kubenswrapper[5116]: I1208 17:45:39.581913 5116 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-route-controller-manager/route-controller-manager-58f48579d7-c8857"
Dec 08 17:45:39 crc kubenswrapper[5116]: I1208 17:45:39.584703 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-79f9d4b4b6-299vf" event={"ID":"92228c7c-86d7-4f7e-9bfe-aaee760a472c","Type":"ContainerStarted","Data":"06340576bc9066d92aa32db0645671a9022b16717537978b58e09119c54d0565"}
Dec 08 17:45:39 crc kubenswrapper[5116]: I1208 17:45:39.585764 5116 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-authentication/oauth-openshift-79f9d4b4b6-299vf"
Dec 08 17:45:39 crc kubenswrapper[5116]: I1208 17:45:39.593858 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-766495d899-4wfjn" event={"ID":"1e2dc1a4-c295-4f2b-b167-fdb40f1e6b0d","Type":"ContainerStarted","Data":"94ed32f3ecefc9e17a4d900f9b1d8eafc812fae10fb361b439204225240c66b0"}
Dec 08 17:45:39 crc kubenswrapper[5116]: I1208 17:45:39.594118 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-766495d899-4wfjn" event={"ID":"1e2dc1a4-c295-4f2b-b167-fdb40f1e6b0d","Type":"ContainerStarted","Data":"4e87876347b17e18342895351bc749566571d2aefb31e92e8c21b972c78d3f1d"}
Dec 08 17:45:39 crc kubenswrapper[5116]: I1208 17:45:39.594320 5116 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-58f48579d7-c8857"
Dec 08 17:45:39 crc kubenswrapper[5116]: I1208 17:45:39.594758 5116 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-controller-manager/controller-manager-766495d899-4wfjn"
Dec 08 17:45:39 crc kubenswrapper[5116]: I1208 17:45:39.601041 5116 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-766495d899-4wfjn"
Dec 08 17:45:39 crc kubenswrapper[5116]: I1208 17:45:39.606857 5116 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-58f48579d7-c8857" podStartSLOduration=2.606837447 podStartE2EDuration="2.606837447s" podCreationTimestamp="2025-12-08 17:45:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-08 17:45:39.60168932 +0000 UTC m=+209.398812554" watchObservedRunningTime="2025-12-08 17:45:39.606837447 +0000 UTC m=+209.403960681"
Dec 08 17:45:39 crc kubenswrapper[5116]: I1208 17:45:39.623102 5116 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-79f9d4b4b6-299vf" podStartSLOduration=28.623071496 podStartE2EDuration="28.623071496s" podCreationTimestamp="2025-12-08 17:45:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-08 17:45:39.62144378 +0000 UTC m=+209.418567014" watchObservedRunningTime="2025-12-08 17:45:39.623071496 +0000 UTC m=+209.420194730"
Dec 08 17:45:39 crc kubenswrapper[5116]: I1208 17:45:39.643570 5116 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-766495d899-4wfjn" podStartSLOduration=2.643547837 podStartE2EDuration="2.643547837s" podCreationTimestamp="2025-12-08 17:45:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-08 17:45:39.64259734 +0000 UTC m=+209.439720574" watchObservedRunningTime="2025-12-08 17:45:39.643547837 +0000 UTC m=+209.440671071"
Dec 08 17:45:39 crc kubenswrapper[5116]: I1208 17:45:39.984645 5116 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-79f9d4b4b6-299vf"
Dec 08 17:45:41 crc kubenswrapper[5116]: I1208 17:45:41.391790 5116 kubelet.go:2537] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"]
Dec 08 17:45:41 crc kubenswrapper[5116]: I1208 17:45:41.405109 5116 kubelet.go:2547] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"]
Dec 08 17:45:41 crc kubenswrapper[5116]: I1208 17:45:41.405170 5116 kubelet.go:2537] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"]
Dec 08 17:45:41 crc kubenswrapper[5116]: I1208 17:45:41.406129 5116 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver" containerID="cri-o://2d8111a46c7720fdc55187a62e21d56405c23453eadcb58ce2e60afa9d805d99" gracePeriod=15
Dec 08 17:45:41 crc kubenswrapper[5116]: I1208 17:45:41.406190 5116 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Dec 08 17:45:41 crc kubenswrapper[5116]: I1208 17:45:41.406376 5116 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" containerID="cri-o://1ff0466282edf9118e55809fff0e52c92acddefc61f507c2c5b4d9268acd5090" gracePeriod=15
Dec 08 17:45:41 crc kubenswrapper[5116]: I1208 17:45:41.406518 5116 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-insecure-readyz" containerID="cri-o://3a4a41617be531895a63acb8a6cfb03f4bb19970cfb4215f42d7564aec87084a" gracePeriod=15
Dec 08 17:45:41 crc kubenswrapper[5116]: I1208 17:45:41.406599 5116 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-cert-regeneration-controller" containerID="cri-o://d1d7f60c242fd483e3d517ae2d01dec2a0372b1ae71c849dea9833973f319144" gracePeriod=15
Dec 08 17:45:41 crc kubenswrapper[5116]: I1208 17:45:41.406689 5116 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-cert-syncer" containerID="cri-o://8a6c4caab224a27d22a9142dc7e952a80974a62ec3a608d0509e3e03b1aca062" gracePeriod=15
Dec 08 17:45:41 crc kubenswrapper[5116]: I1208 17:45:41.407444 5116 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-cert-regeneration-controller"
Dec 08 17:45:41 crc kubenswrapper[5116]: I1208 17:45:41.407503 5116 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-cert-regeneration-controller"
Dec 08 17:45:41 crc kubenswrapper[5116]: I1208 17:45:41.407520 5116 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver"
Dec 08 17:45:41 crc kubenswrapper[5116]: I1208 17:45:41.407529 5116 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver"
Dec 08 17:45:41 crc kubenswrapper[5116]: I1208 17:45:41.407538 5116 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints"
Dec 08 17:45:41 crc kubenswrapper[5116]: I1208 17:45:41.408755 5116 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints"
Dec 08 17:45:41 crc kubenswrapper[5116]: I1208 17:45:41.408773 5116 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="setup"
Dec 08 17:45:41 crc kubenswrapper[5116]: I1208 17:45:41.408780 5116 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="setup"
Dec 08 17:45:41 crc kubenswrapper[5116]: I1208 17:45:41.408786 5116 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-cert-syncer"
Dec 08 17:45:41 crc kubenswrapper[5116]: I1208 17:45:41.408792 5116 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-cert-syncer"
Dec 08 17:45:41 crc kubenswrapper[5116]: I1208 17:45:41.408802 5116 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints"
Dec 08 17:45:41 crc kubenswrapper[5116]: I1208 17:45:41.408810 5116 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints"
Dec 08 17:45:41 crc kubenswrapper[5116]: I1208 17:45:41.408818 5116 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-insecure-readyz"
Dec 08 17:45:41 crc kubenswrapper[5116]: I1208 17:45:41.408824 5116 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-insecure-readyz"
Dec 08 17:45:41 crc kubenswrapper[5116]: I1208 17:45:41.408873 5116 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints"
Dec 08 17:45:41 crc kubenswrapper[5116]: I1208 17:45:41.408884 5116 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints"
Dec 08 17:45:41 crc kubenswrapper[5116]: I1208 17:45:41.408890 5116 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints"
Dec 08 17:45:41 crc kubenswrapper[5116]: I1208 17:45:41.408895 5116 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints"
Dec 08 17:45:41 crc kubenswrapper[5116]: I1208 17:45:41.409316 5116 memory_manager.go:356] "RemoveStaleState removing state" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-cert-regeneration-controller"
Dec 08 17:45:41 crc kubenswrapper[5116]: I1208 17:45:41.409894 5116 memory_manager.go:356] "RemoveStaleState removing state" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints"
Dec 08 17:45:41 crc kubenswrapper[5116]: I1208 17:45:41.409904 5116 memory_manager.go:356] "RemoveStaleState removing state" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints"
Dec 08 17:45:41 crc kubenswrapper[5116]: I1208 17:45:41.409913 5116 memory_manager.go:356] "RemoveStaleState removing state" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints"
Dec 08 17:45:41 crc kubenswrapper[5116]: I1208 17:45:41.409925 5116 memory_manager.go:356] "RemoveStaleState removing state" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-cert-syncer"
Dec 08 17:45:41 crc kubenswrapper[5116]: I1208 17:45:41.409932 5116 memory_manager.go:356] "RemoveStaleState removing state" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver"
Dec 08 17:45:41 crc kubenswrapper[5116]: I1208 17:45:41.409940 5116 memory_manager.go:356] "RemoveStaleState removing state" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints"
Dec 08 17:45:41 crc kubenswrapper[5116]: I1208 17:45:41.409948 5116 memory_manager.go:356] "RemoveStaleState removing state" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints"
Dec 08 17:45:41 crc kubenswrapper[5116]: I1208 17:45:41.409978 5116 memory_manager.go:356] "RemoveStaleState removing state" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-insecure-readyz"
Dec 08 17:45:41 crc kubenswrapper[5116]: I1208 17:45:41.411863 5116 status_manager.go:905] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="3a14caf222afb62aaabdc47808b6f944" podUID="57755cc5f99000cc11e193051474d4e2"
Dec 08 17:45:41 crc kubenswrapper[5116]: I1208 17:45:41.412271 5116 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints"
Dec 08 17:45:41 crc kubenswrapper[5116]: I1208 17:45:41.412306 5116 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints"
Dec 08 17:45:41 crc kubenswrapper[5116]: I1208 17:45:41.453427 5116 kubelet.go:3340] "Creating a mirror pod for static pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Dec 08 17:45:41 crc kubenswrapper[5116]: I1208 17:45:41.478303 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-bundle-dir\" (UniqueName: \"kubernetes.io/empty-dir/57755cc5f99000cc11e193051474d4e2-ca-bundle-dir\") pod \"kube-apiserver-crc\" (UID: \"57755cc5f99000cc11e193051474d4e2\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Dec 08 17:45:41 crc kubenswrapper[5116]: I1208 17:45:41.478357 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/57755cc5f99000cc11e193051474d4e2-tmp-dir\") pod \"kube-apiserver-crc\" (UID: \"57755cc5f99000cc11e193051474d4e2\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Dec 08 17:45:41 crc kubenswrapper[5116]: I1208 17:45:41.478585 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/57755cc5f99000cc11e193051474d4e2-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"57755cc5f99000cc11e193051474d4e2\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Dec 08 17:45:41 crc kubenswrapper[5116]: I1208 17:45:41.478727 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Dec 08 17:45:41 crc kubenswrapper[5116]: I1208 17:45:41.478864 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/57755cc5f99000cc11e193051474d4e2-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"57755cc5f99000cc11e193051474d4e2\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Dec 08 17:45:41 crc kubenswrapper[5116]: I1208 17:45:41.478943 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/57755cc5f99000cc11e193051474d4e2-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"57755cc5f99000cc11e193051474d4e2\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Dec 08 17:45:41 crc kubenswrapper[5116]: I1208 17:45:41.478966 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Dec 08 17:45:41 crc kubenswrapper[5116]: I1208 17:45:41.479192 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Dec 08 17:45:41 crc kubenswrapper[5116]: I1208 17:45:41.479300 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Dec 08 17:45:41 crc kubenswrapper[5116]: I1208 17:45:41.479471 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Dec 08 17:45:41 crc kubenswrapper[5116]: I1208 17:45:41.581006 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/57755cc5f99000cc11e193051474d4e2-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"57755cc5f99000cc11e193051474d4e2\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Dec 08 17:45:41 crc kubenswrapper[5116]: I1208 17:45:41.581068 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Dec 08 17:45:41 crc kubenswrapper[5116]: I1208 17:45:41.581173 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Dec 08 17:45:41 crc kubenswrapper[5116]: I1208 17:45:41.581202 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/57755cc5f99000cc11e193051474d4e2-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"57755cc5f99000cc11e193051474d4e2\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Dec 08 17:45:41 crc kubenswrapper[5116]: I1208 17:45:41.581236 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/57755cc5f99000cc11e193051474d4e2-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"57755cc5f99000cc11e193051474d4e2\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Dec 08 17:45:41 crc kubenswrapper[5116]: I1208 17:45:41.581396 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/57755cc5f99000cc11e193051474d4e2-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"57755cc5f99000cc11e193051474d4e2\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Dec 08 17:45:41 crc kubenswrapper[5116]: I1208 17:45:41.581445 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/57755cc5f99000cc11e193051474d4e2-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"57755cc5f99000cc11e193051474d4e2\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Dec 08 17:45:41 crc kubenswrapper[5116]: I1208 17:45:41.581514 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/57755cc5f99000cc11e193051474d4e2-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"57755cc5f99000cc11e193051474d4e2\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Dec 08 17:45:41 crc kubenswrapper[5116]: I1208 17:45:41.581571 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Dec 08 17:45:41 crc kubenswrapper[5116]: I1208 17:45:41.581667 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Dec 08 17:45:41 crc kubenswrapper[5116]: I1208 17:45:41.581716 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Dec 08 17:45:41 crc kubenswrapper[5116]: I1208 17:45:41.581747 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Dec 08 17:45:41 crc kubenswrapper[5116]: I1208 17:45:41.581779 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Dec 08 17:45:41 crc kubenswrapper[5116]: I1208 17:45:41.581850 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Dec 08 17:45:41 crc kubenswrapper[5116]: I1208 17:45:41.581918 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Dec 08 17:45:41 crc kubenswrapper[5116]: I1208 17:45:41.581992 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ca-bundle-dir\" (UniqueName: \"kubernetes.io/empty-dir/57755cc5f99000cc11e193051474d4e2-ca-bundle-dir\") pod \"kube-apiserver-crc\" (UID: \"57755cc5f99000cc11e193051474d4e2\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Dec 08 17:45:41 crc kubenswrapper[5116]: I1208 17:45:41.582034 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Dec 08 17:45:41 crc kubenswrapper[5116]: I1208 17:45:41.582033 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/57755cc5f99000cc11e193051474d4e2-tmp-dir\") pod \"kube-apiserver-crc\" (UID: \"57755cc5f99000cc11e193051474d4e2\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Dec 08 17:45:41 crc kubenswrapper[5116]: I1208 17:45:41.582860 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/57755cc5f99000cc11e193051474d4e2-tmp-dir\") pod \"kube-apiserver-crc\" (UID: \"57755cc5f99000cc11e193051474d4e2\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Dec 08 17:45:41 crc kubenswrapper[5116]: I1208 17:45:41.582864
5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ca-bundle-dir\" (UniqueName: \"kubernetes.io/empty-dir/57755cc5f99000cc11e193051474d4e2-ca-bundle-dir\") pod \"kube-apiserver-crc\" (UID: \"57755cc5f99000cc11e193051474d4e2\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 08 17:45:41 crc kubenswrapper[5116]: I1208 17:45:41.614151 5116 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-controller-manager_controller-manager-766495d899-4wfjn_1e2dc1a4-c295-4f2b-b167-fdb40f1e6b0d/controller-manager/0.log" Dec 08 17:45:41 crc kubenswrapper[5116]: I1208 17:45:41.614236 5116 generic.go:358] "Generic (PLEG): container finished" podID="1e2dc1a4-c295-4f2b-b167-fdb40f1e6b0d" containerID="94ed32f3ecefc9e17a4d900f9b1d8eafc812fae10fb361b439204225240c66b0" exitCode=1 Dec 08 17:45:41 crc kubenswrapper[5116]: I1208 17:45:41.614432 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-766495d899-4wfjn" event={"ID":"1e2dc1a4-c295-4f2b-b167-fdb40f1e6b0d","Type":"ContainerDied","Data":"94ed32f3ecefc9e17a4d900f9b1d8eafc812fae10fb361b439204225240c66b0"} Dec 08 17:45:41 crc kubenswrapper[5116]: I1208 17:45:41.614830 5116 scope.go:117] "RemoveContainer" containerID="94ed32f3ecefc9e17a4d900f9b1d8eafc812fae10fb361b439204225240c66b0" Dec 08 17:45:41 crc kubenswrapper[5116]: I1208 17:45:41.615880 5116 status_manager.go:895] "Failed to get status for pod" podUID="1e2dc1a4-c295-4f2b-b167-fdb40f1e6b0d" pod="openshift-controller-manager/controller-manager-766495d899-4wfjn" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-766495d899-4wfjn\": dial tcp 38.102.83.128:6443: connect: connection refused" Dec 08 17:45:41 crc kubenswrapper[5116]: I1208 17:45:41.619130 5116 generic.go:358] "Generic (PLEG): container finished" podID="e28dd1b1-e2ad-4e02-a1cc-90f8e0f93a6c" 
containerID="7772951a45e6ef04b031e33a70588eec0efca0038f1943e22bed0c8349f3de95" exitCode=0 Dec 08 17:45:41 crc kubenswrapper[5116]: I1208 17:45:41.619219 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-12-crc" event={"ID":"e28dd1b1-e2ad-4e02-a1cc-90f8e0f93a6c","Type":"ContainerDied","Data":"7772951a45e6ef04b031e33a70588eec0efca0038f1943e22bed0c8349f3de95"} Dec 08 17:45:41 crc kubenswrapper[5116]: I1208 17:45:41.620528 5116 status_manager.go:895] "Failed to get status for pod" podUID="1e2dc1a4-c295-4f2b-b167-fdb40f1e6b0d" pod="openshift-controller-manager/controller-manager-766495d899-4wfjn" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-766495d899-4wfjn\": dial tcp 38.102.83.128:6443: connect: connection refused" Dec 08 17:45:41 crc kubenswrapper[5116]: I1208 17:45:41.621185 5116 status_manager.go:895] "Failed to get status for pod" podUID="e28dd1b1-e2ad-4e02-a1cc-90f8e0f93a6c" pod="openshift-kube-apiserver/installer-12-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-12-crc\": dial tcp 38.102.83.128:6443: connect: connection refused" Dec 08 17:45:41 crc kubenswrapper[5116]: I1208 17:45:41.623232 5116 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-check-endpoints/3.log" Dec 08 17:45:41 crc kubenswrapper[5116]: I1208 17:45:41.625220 5116 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-cert-syncer/0.log" Dec 08 17:45:41 crc kubenswrapper[5116]: E1208 17:45:41.625216 5116 event.go:368] "Unable to write event (may retry after sleeping)" err="Patch \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/events/controller-manager-766495d899-4wfjn.187f4e8b2f118ef3\": dial 
tcp 38.102.83.128:6443: connect: connection refused" event="&Event{ObjectMeta:{controller-manager-766495d899-4wfjn.187f4e8b2f118ef3 openshift-controller-manager 39230 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-controller-manager,Name:controller-manager-766495d899-4wfjn,UID:1e2dc1a4-c295-4f2b-b167-fdb40f1e6b0d,APIVersion:v1,ResourceVersion:39213,FieldPath:spec.containers{controller-manager},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9506bdcf97d5200cf2cf4cdf110aebafdd141a24f6589bf1e1cfe27bb7fc1ed2\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 17:45:38 +0000 UTC,LastTimestamp:2025-12-08 17:45:41.624348295 +0000 UTC m=+211.421471519,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 17:45:41 crc kubenswrapper[5116]: I1208 17:45:41.626843 5116 generic.go:358] "Generic (PLEG): container finished" podID="3a14caf222afb62aaabdc47808b6f944" containerID="1ff0466282edf9118e55809fff0e52c92acddefc61f507c2c5b4d9268acd5090" exitCode=0 Dec 08 17:45:41 crc kubenswrapper[5116]: I1208 17:45:41.626876 5116 generic.go:358] "Generic (PLEG): container finished" podID="3a14caf222afb62aaabdc47808b6f944" containerID="3a4a41617be531895a63acb8a6cfb03f4bb19970cfb4215f42d7564aec87084a" exitCode=0 Dec 08 17:45:41 crc kubenswrapper[5116]: I1208 17:45:41.626895 5116 generic.go:358] "Generic (PLEG): container finished" podID="3a14caf222afb62aaabdc47808b6f944" containerID="d1d7f60c242fd483e3d517ae2d01dec2a0372b1ae71c849dea9833973f319144" exitCode=0 Dec 08 17:45:41 crc kubenswrapper[5116]: I1208 17:45:41.626904 5116 generic.go:358] "Generic (PLEG): container finished" podID="3a14caf222afb62aaabdc47808b6f944" containerID="8a6c4caab224a27d22a9142dc7e952a80974a62ec3a608d0509e3e03b1aca062" exitCode=2 Dec 08 17:45:41 crc 
kubenswrapper[5116]: I1208 17:45:41.626990 5116 scope.go:117] "RemoveContainer" containerID="0f4e3801c41a7985f11a820bd7e71ba696a24f77bb1468bbcd579c5b5d8a1ba3" Dec 08 17:45:42 crc kubenswrapper[5116]: I1208 17:45:42.637919 5116 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-cert-syncer/0.log" Dec 08 17:45:42 crc kubenswrapper[5116]: I1208 17:45:42.641612 5116 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-controller-manager_controller-manager-766495d899-4wfjn_1e2dc1a4-c295-4f2b-b167-fdb40f1e6b0d/controller-manager/0.log" Dec 08 17:45:42 crc kubenswrapper[5116]: I1208 17:45:42.641774 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-766495d899-4wfjn" event={"ID":"1e2dc1a4-c295-4f2b-b167-fdb40f1e6b0d","Type":"ContainerStarted","Data":"cf948e9cc3862e68eb32b43b793ee1ec04dbba98b6d3026f59d4f0a879f2e4eb"} Dec 08 17:45:42 crc kubenswrapper[5116]: I1208 17:45:42.642044 5116 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-controller-manager/controller-manager-766495d899-4wfjn" Dec 08 17:45:42 crc kubenswrapper[5116]: I1208 17:45:42.642460 5116 status_manager.go:895] "Failed to get status for pod" podUID="e28dd1b1-e2ad-4e02-a1cc-90f8e0f93a6c" pod="openshift-kube-apiserver/installer-12-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-12-crc\": dial tcp 38.102.83.128:6443: connect: connection refused" Dec 08 17:45:42 crc kubenswrapper[5116]: I1208 17:45:42.643638 5116 status_manager.go:895] "Failed to get status for pod" podUID="1e2dc1a4-c295-4f2b-b167-fdb40f1e6b0d" pod="openshift-controller-manager/controller-manager-766495d899-4wfjn" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-766495d899-4wfjn\": dial tcp 38.102.83.128:6443: 
connect: connection refused" Dec 08 17:45:42 crc kubenswrapper[5116]: I1208 17:45:42.886648 5116 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-12-crc" Dec 08 17:45:42 crc kubenswrapper[5116]: I1208 17:45:42.887378 5116 status_manager.go:895] "Failed to get status for pod" podUID="e28dd1b1-e2ad-4e02-a1cc-90f8e0f93a6c" pod="openshift-kube-apiserver/installer-12-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-12-crc\": dial tcp 38.102.83.128:6443: connect: connection refused" Dec 08 17:45:42 crc kubenswrapper[5116]: I1208 17:45:42.887743 5116 status_manager.go:895] "Failed to get status for pod" podUID="1e2dc1a4-c295-4f2b-b167-fdb40f1e6b0d" pod="openshift-controller-manager/controller-manager-766495d899-4wfjn" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-766495d899-4wfjn\": dial tcp 38.102.83.128:6443: connect: connection refused" Dec 08 17:45:43 crc kubenswrapper[5116]: I1208 17:45:43.002409 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e28dd1b1-e2ad-4e02-a1cc-90f8e0f93a6c-kube-api-access\") pod \"e28dd1b1-e2ad-4e02-a1cc-90f8e0f93a6c\" (UID: \"e28dd1b1-e2ad-4e02-a1cc-90f8e0f93a6c\") " Dec 08 17:45:43 crc kubenswrapper[5116]: I1208 17:45:43.002553 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/e28dd1b1-e2ad-4e02-a1cc-90f8e0f93a6c-var-lock\") pod \"e28dd1b1-e2ad-4e02-a1cc-90f8e0f93a6c\" (UID: \"e28dd1b1-e2ad-4e02-a1cc-90f8e0f93a6c\") " Dec 08 17:45:43 crc kubenswrapper[5116]: I1208 17:45:43.002754 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/e28dd1b1-e2ad-4e02-a1cc-90f8e0f93a6c-kubelet-dir\") pod 
\"e28dd1b1-e2ad-4e02-a1cc-90f8e0f93a6c\" (UID: \"e28dd1b1-e2ad-4e02-a1cc-90f8e0f93a6c\") " Dec 08 17:45:43 crc kubenswrapper[5116]: I1208 17:45:43.002752 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e28dd1b1-e2ad-4e02-a1cc-90f8e0f93a6c-var-lock" (OuterVolumeSpecName: "var-lock") pod "e28dd1b1-e2ad-4e02-a1cc-90f8e0f93a6c" (UID: "e28dd1b1-e2ad-4e02-a1cc-90f8e0f93a6c"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Dec 08 17:45:43 crc kubenswrapper[5116]: I1208 17:45:43.002894 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e28dd1b1-e2ad-4e02-a1cc-90f8e0f93a6c-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "e28dd1b1-e2ad-4e02-a1cc-90f8e0f93a6c" (UID: "e28dd1b1-e2ad-4e02-a1cc-90f8e0f93a6c"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Dec 08 17:45:43 crc kubenswrapper[5116]: I1208 17:45:43.003466 5116 reconciler_common.go:299] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/e28dd1b1-e2ad-4e02-a1cc-90f8e0f93a6c-var-lock\") on node \"crc\" DevicePath \"\"" Dec 08 17:45:43 crc kubenswrapper[5116]: I1208 17:45:43.003523 5116 reconciler_common.go:299] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/e28dd1b1-e2ad-4e02-a1cc-90f8e0f93a6c-kubelet-dir\") on node \"crc\" DevicePath \"\"" Dec 08 17:45:43 crc kubenswrapper[5116]: I1208 17:45:43.013050 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e28dd1b1-e2ad-4e02-a1cc-90f8e0f93a6c-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "e28dd1b1-e2ad-4e02-a1cc-90f8e0f93a6c" (UID: "e28dd1b1-e2ad-4e02-a1cc-90f8e0f93a6c"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 08 17:45:43 crc kubenswrapper[5116]: I1208 17:45:43.105515 5116 reconciler_common.go:299] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e28dd1b1-e2ad-4e02-a1cc-90f8e0f93a6c-kube-api-access\") on node \"crc\" DevicePath \"\"" Dec 08 17:45:43 crc kubenswrapper[5116]: I1208 17:45:43.642098 5116 patch_prober.go:28] interesting pod/controller-manager-766495d899-4wfjn container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.67:8443/healthz\": context deadline exceeded" start-of-body= Dec 08 17:45:43 crc kubenswrapper[5116]: I1208 17:45:43.642680 5116 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-766495d899-4wfjn" podUID="1e2dc1a4-c295-4f2b-b167-fdb40f1e6b0d" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.67:8443/healthz\": context deadline exceeded" Dec 08 17:45:43 crc kubenswrapper[5116]: I1208 17:45:43.659508 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-12-crc" event={"ID":"e28dd1b1-e2ad-4e02-a1cc-90f8e0f93a6c","Type":"ContainerDied","Data":"269b3e03945127752b3b619e6ec29d010e6a342e12529a49d1e4d161d5be0853"} Dec 08 17:45:43 crc kubenswrapper[5116]: I1208 17:45:43.659642 5116 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="269b3e03945127752b3b619e6ec29d010e6a342e12529a49d1e4d161d5be0853" Dec 08 17:45:43 crc kubenswrapper[5116]: I1208 17:45:43.661112 5116 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-12-crc" Dec 08 17:45:43 crc kubenswrapper[5116]: I1208 17:45:43.687724 5116 status_manager.go:895] "Failed to get status for pod" podUID="e28dd1b1-e2ad-4e02-a1cc-90f8e0f93a6c" pod="openshift-kube-apiserver/installer-12-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-12-crc\": dial tcp 38.102.83.128:6443: connect: connection refused" Dec 08 17:45:43 crc kubenswrapper[5116]: I1208 17:45:43.688053 5116 status_manager.go:895] "Failed to get status for pod" podUID="1e2dc1a4-c295-4f2b-b167-fdb40f1e6b0d" pod="openshift-controller-manager/controller-manager-766495d899-4wfjn" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-766495d899-4wfjn\": dial tcp 38.102.83.128:6443: connect: connection refused" Dec 08 17:45:43 crc kubenswrapper[5116]: I1208 17:45:43.907767 5116 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-cert-syncer/0.log" Dec 08 17:45:43 crc kubenswrapper[5116]: I1208 17:45:43.909601 5116 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 08 17:45:43 crc kubenswrapper[5116]: I1208 17:45:43.910547 5116 status_manager.go:895] "Failed to get status for pod" podUID="3a14caf222afb62aaabdc47808b6f944" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.128:6443: connect: connection refused" Dec 08 17:45:43 crc kubenswrapper[5116]: I1208 17:45:43.911319 5116 status_manager.go:895] "Failed to get status for pod" podUID="1e2dc1a4-c295-4f2b-b167-fdb40f1e6b0d" pod="openshift-controller-manager/controller-manager-766495d899-4wfjn" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-766495d899-4wfjn\": dial tcp 38.102.83.128:6443: connect: connection refused" Dec 08 17:45:43 crc kubenswrapper[5116]: I1208 17:45:43.912115 5116 status_manager.go:895] "Failed to get status for pod" podUID="e28dd1b1-e2ad-4e02-a1cc-90f8e0f93a6c" pod="openshift-kube-apiserver/installer-12-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-12-crc\": dial tcp 38.102.83.128:6443: connect: connection refused" Dec 08 17:45:44 crc kubenswrapper[5116]: I1208 17:45:44.055400 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-resource-dir\") pod \"3a14caf222afb62aaabdc47808b6f944\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " Dec 08 17:45:44 crc kubenswrapper[5116]: I1208 17:45:44.055554 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ca-bundle-dir\" (UniqueName: \"kubernetes.io/empty-dir/3a14caf222afb62aaabdc47808b6f944-ca-bundle-dir\") pod \"3a14caf222afb62aaabdc47808b6f944\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " Dec 08 17:45:44 crc kubenswrapper[5116]: 
I1208 17:45:44.055612 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-cert-dir\") pod \"3a14caf222afb62aaabdc47808b6f944\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " Dec 08 17:45:44 crc kubenswrapper[5116]: I1208 17:45:44.055639 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/3a14caf222afb62aaabdc47808b6f944-tmp-dir\") pod \"3a14caf222afb62aaabdc47808b6f944\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " Dec 08 17:45:44 crc kubenswrapper[5116]: I1208 17:45:44.055742 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-audit-dir\") pod \"3a14caf222afb62aaabdc47808b6f944\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " Dec 08 17:45:44 crc kubenswrapper[5116]: I1208 17:45:44.055715 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "3a14caf222afb62aaabdc47808b6f944" (UID: "3a14caf222afb62aaabdc47808b6f944"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Dec 08 17:45:44 crc kubenswrapper[5116]: I1208 17:45:44.055904 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "3a14caf222afb62aaabdc47808b6f944" (UID: "3a14caf222afb62aaabdc47808b6f944"). InnerVolumeSpecName "audit-dir". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Dec 08 17:45:44 crc kubenswrapper[5116]: I1208 17:45:44.055991 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-cert-dir" (OuterVolumeSpecName: "cert-dir") pod "3a14caf222afb62aaabdc47808b6f944" (UID: "3a14caf222afb62aaabdc47808b6f944"). InnerVolumeSpecName "cert-dir". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Dec 08 17:45:44 crc kubenswrapper[5116]: I1208 17:45:44.056188 5116 reconciler_common.go:299] "Volume detached for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-cert-dir\") on node \"crc\" DevicePath \"\"" Dec 08 17:45:44 crc kubenswrapper[5116]: I1208 17:45:44.056220 5116 reconciler_common.go:299] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-audit-dir\") on node \"crc\" DevicePath \"\"" Dec 08 17:45:44 crc kubenswrapper[5116]: I1208 17:45:44.056301 5116 reconciler_common.go:299] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-resource-dir\") on node \"crc\" DevicePath \"\"" Dec 08 17:45:44 crc kubenswrapper[5116]: I1208 17:45:44.057334 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3a14caf222afb62aaabdc47808b6f944-ca-bundle-dir" (OuterVolumeSpecName: "ca-bundle-dir") pod "3a14caf222afb62aaabdc47808b6f944" (UID: "3a14caf222afb62aaabdc47808b6f944"). InnerVolumeSpecName "ca-bundle-dir". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 08 17:45:44 crc kubenswrapper[5116]: I1208 17:45:44.061141 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3a14caf222afb62aaabdc47808b6f944-tmp-dir" (OuterVolumeSpecName: "tmp-dir") pod "3a14caf222afb62aaabdc47808b6f944" (UID: "3a14caf222afb62aaabdc47808b6f944"). 
InnerVolumeSpecName "tmp-dir". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 08 17:45:44 crc kubenswrapper[5116]: I1208 17:45:44.158143 5116 reconciler_common.go:299] "Volume detached for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/3a14caf222afb62aaabdc47808b6f944-tmp-dir\") on node \"crc\" DevicePath \"\"" Dec 08 17:45:44 crc kubenswrapper[5116]: I1208 17:45:44.158214 5116 reconciler_common.go:299] "Volume detached for volume \"ca-bundle-dir\" (UniqueName: \"kubernetes.io/empty-dir/3a14caf222afb62aaabdc47808b6f944-ca-bundle-dir\") on node \"crc\" DevicePath \"\"" Dec 08 17:45:44 crc kubenswrapper[5116]: E1208 17:45:44.519030 5116 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.128:6443: connect: connection refused" Dec 08 17:45:44 crc kubenswrapper[5116]: E1208 17:45:44.519458 5116 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.128:6443: connect: connection refused" Dec 08 17:45:44 crc kubenswrapper[5116]: E1208 17:45:44.519827 5116 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.128:6443: connect: connection refused" Dec 08 17:45:44 crc kubenswrapper[5116]: E1208 17:45:44.520147 5116 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.128:6443: connect: connection refused" Dec 08 17:45:44 crc kubenswrapper[5116]: E1208 17:45:44.520401 5116 controller.go:195] "Failed to update lease" err="Put 
\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.128:6443: connect: connection refused" Dec 08 17:45:44 crc kubenswrapper[5116]: I1208 17:45:44.520426 5116 controller.go:115] "failed to update lease using latest lease, fallback to ensure lease" err="failed 5 attempts to update lease" Dec 08 17:45:44 crc kubenswrapper[5116]: E1208 17:45:44.520608 5116 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.128:6443: connect: connection refused" interval="200ms" Dec 08 17:45:44 crc kubenswrapper[5116]: I1208 17:45:44.659848 5116 patch_prober.go:28] interesting pod/controller-manager-766495d899-4wfjn container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.67:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Dec 08 17:45:44 crc kubenswrapper[5116]: I1208 17:45:44.659967 5116 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-766495d899-4wfjn" podUID="1e2dc1a4-c295-4f2b-b167-fdb40f1e6b0d" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.67:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Dec 08 17:45:44 crc kubenswrapper[5116]: I1208 17:45:44.670449 5116 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-cert-syncer/0.log" Dec 08 17:45:44 crc kubenswrapper[5116]: I1208 17:45:44.671299 5116 generic.go:358] "Generic (PLEG): container finished" podID="3a14caf222afb62aaabdc47808b6f944" 
containerID="2d8111a46c7720fdc55187a62e21d56405c23453eadcb58ce2e60afa9d805d99" exitCode=0 Dec 08 17:45:44 crc kubenswrapper[5116]: I1208 17:45:44.671460 5116 scope.go:117] "RemoveContainer" containerID="1ff0466282edf9118e55809fff0e52c92acddefc61f507c2c5b4d9268acd5090" Dec 08 17:45:44 crc kubenswrapper[5116]: I1208 17:45:44.671466 5116 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 08 17:45:44 crc kubenswrapper[5116]: E1208 17:45:44.682206 5116 event.go:368] "Unable to write event (may retry after sleeping)" err="Patch \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/events/controller-manager-766495d899-4wfjn.187f4e8b2f118ef3\": dial tcp 38.102.83.128:6443: connect: connection refused" event="&Event{ObjectMeta:{controller-manager-766495d899-4wfjn.187f4e8b2f118ef3 openshift-controller-manager 39230 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-controller-manager,Name:controller-manager-766495d899-4wfjn,UID:1e2dc1a4-c295-4f2b-b167-fdb40f1e6b0d,APIVersion:v1,ResourceVersion:39213,FieldPath:spec.containers{controller-manager},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9506bdcf97d5200cf2cf4cdf110aebafdd141a24f6589bf1e1cfe27bb7fc1ed2\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 17:45:38 +0000 UTC,LastTimestamp:2025-12-08 17:45:41.624348295 +0000 UTC m=+211.421471519,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 17:45:44 crc kubenswrapper[5116]: I1208 17:45:44.692623 5116 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3a14caf222afb62aaabdc47808b6f944" path="/var/lib/kubelet/pods/3a14caf222afb62aaabdc47808b6f944/volumes" Dec 08 17:45:44 crc 
kubenswrapper[5116]: I1208 17:45:44.694999 5116 status_manager.go:895] "Failed to get status for pod" podUID="e28dd1b1-e2ad-4e02-a1cc-90f8e0f93a6c" pod="openshift-kube-apiserver/installer-12-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-12-crc\": dial tcp 38.102.83.128:6443: connect: connection refused" Dec 08 17:45:44 crc kubenswrapper[5116]: I1208 17:45:44.695503 5116 status_manager.go:895] "Failed to get status for pod" podUID="3a14caf222afb62aaabdc47808b6f944" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.128:6443: connect: connection refused" Dec 08 17:45:44 crc kubenswrapper[5116]: I1208 17:45:44.697002 5116 status_manager.go:895] "Failed to get status for pod" podUID="1e2dc1a4-c295-4f2b-b167-fdb40f1e6b0d" pod="openshift-controller-manager/controller-manager-766495d899-4wfjn" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-766495d899-4wfjn\": dial tcp 38.102.83.128:6443: connect: connection refused" Dec 08 17:45:44 crc kubenswrapper[5116]: I1208 17:45:44.697091 5116 scope.go:117] "RemoveContainer" containerID="3a4a41617be531895a63acb8a6cfb03f4bb19970cfb4215f42d7564aec87084a" Dec 08 17:45:44 crc kubenswrapper[5116]: I1208 17:45:44.716143 5116 scope.go:117] "RemoveContainer" containerID="d1d7f60c242fd483e3d517ae2d01dec2a0372b1ae71c849dea9833973f319144" Dec 08 17:45:44 crc kubenswrapper[5116]: E1208 17:45:44.722235 5116 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.128:6443: connect: connection refused" interval="400ms" Dec 08 17:45:44 crc kubenswrapper[5116]: I1208 17:45:44.735562 5116 scope.go:117] "RemoveContainer" 
containerID="8a6c4caab224a27d22a9142dc7e952a80974a62ec3a608d0509e3e03b1aca062" Dec 08 17:45:44 crc kubenswrapper[5116]: I1208 17:45:44.752749 5116 scope.go:117] "RemoveContainer" containerID="2d8111a46c7720fdc55187a62e21d56405c23453eadcb58ce2e60afa9d805d99" Dec 08 17:45:44 crc kubenswrapper[5116]: I1208 17:45:44.780409 5116 scope.go:117] "RemoveContainer" containerID="ca6d941896a3977ebbfc764c7deeb546d556e330d3ae8a85f1920e3bc996c549" Dec 08 17:45:44 crc kubenswrapper[5116]: I1208 17:45:44.870440 5116 scope.go:117] "RemoveContainer" containerID="1ff0466282edf9118e55809fff0e52c92acddefc61f507c2c5b4d9268acd5090" Dec 08 17:45:44 crc kubenswrapper[5116]: E1208 17:45:44.871529 5116 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1ff0466282edf9118e55809fff0e52c92acddefc61f507c2c5b4d9268acd5090\": container with ID starting with 1ff0466282edf9118e55809fff0e52c92acddefc61f507c2c5b4d9268acd5090 not found: ID does not exist" containerID="1ff0466282edf9118e55809fff0e52c92acddefc61f507c2c5b4d9268acd5090" Dec 08 17:45:44 crc kubenswrapper[5116]: I1208 17:45:44.871585 5116 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1ff0466282edf9118e55809fff0e52c92acddefc61f507c2c5b4d9268acd5090"} err="failed to get container status \"1ff0466282edf9118e55809fff0e52c92acddefc61f507c2c5b4d9268acd5090\": rpc error: code = NotFound desc = could not find container \"1ff0466282edf9118e55809fff0e52c92acddefc61f507c2c5b4d9268acd5090\": container with ID starting with 1ff0466282edf9118e55809fff0e52c92acddefc61f507c2c5b4d9268acd5090 not found: ID does not exist" Dec 08 17:45:44 crc kubenswrapper[5116]: I1208 17:45:44.871622 5116 scope.go:117] "RemoveContainer" containerID="3a4a41617be531895a63acb8a6cfb03f4bb19970cfb4215f42d7564aec87084a" Dec 08 17:45:44 crc kubenswrapper[5116]: E1208 17:45:44.872398 5116 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code 
= NotFound desc = could not find container \"3a4a41617be531895a63acb8a6cfb03f4bb19970cfb4215f42d7564aec87084a\": container with ID starting with 3a4a41617be531895a63acb8a6cfb03f4bb19970cfb4215f42d7564aec87084a not found: ID does not exist" containerID="3a4a41617be531895a63acb8a6cfb03f4bb19970cfb4215f42d7564aec87084a" Dec 08 17:45:44 crc kubenswrapper[5116]: I1208 17:45:44.872451 5116 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3a4a41617be531895a63acb8a6cfb03f4bb19970cfb4215f42d7564aec87084a"} err="failed to get container status \"3a4a41617be531895a63acb8a6cfb03f4bb19970cfb4215f42d7564aec87084a\": rpc error: code = NotFound desc = could not find container \"3a4a41617be531895a63acb8a6cfb03f4bb19970cfb4215f42d7564aec87084a\": container with ID starting with 3a4a41617be531895a63acb8a6cfb03f4bb19970cfb4215f42d7564aec87084a not found: ID does not exist" Dec 08 17:45:44 crc kubenswrapper[5116]: I1208 17:45:44.872491 5116 scope.go:117] "RemoveContainer" containerID="d1d7f60c242fd483e3d517ae2d01dec2a0372b1ae71c849dea9833973f319144" Dec 08 17:45:44 crc kubenswrapper[5116]: E1208 17:45:44.874744 5116 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d1d7f60c242fd483e3d517ae2d01dec2a0372b1ae71c849dea9833973f319144\": container with ID starting with d1d7f60c242fd483e3d517ae2d01dec2a0372b1ae71c849dea9833973f319144 not found: ID does not exist" containerID="d1d7f60c242fd483e3d517ae2d01dec2a0372b1ae71c849dea9833973f319144" Dec 08 17:45:44 crc kubenswrapper[5116]: I1208 17:45:44.874782 5116 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d1d7f60c242fd483e3d517ae2d01dec2a0372b1ae71c849dea9833973f319144"} err="failed to get container status \"d1d7f60c242fd483e3d517ae2d01dec2a0372b1ae71c849dea9833973f319144\": rpc error: code = NotFound desc = could not find container 
\"d1d7f60c242fd483e3d517ae2d01dec2a0372b1ae71c849dea9833973f319144\": container with ID starting with d1d7f60c242fd483e3d517ae2d01dec2a0372b1ae71c849dea9833973f319144 not found: ID does not exist" Dec 08 17:45:44 crc kubenswrapper[5116]: I1208 17:45:44.874802 5116 scope.go:117] "RemoveContainer" containerID="8a6c4caab224a27d22a9142dc7e952a80974a62ec3a608d0509e3e03b1aca062" Dec 08 17:45:44 crc kubenswrapper[5116]: E1208 17:45:44.875283 5116 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8a6c4caab224a27d22a9142dc7e952a80974a62ec3a608d0509e3e03b1aca062\": container with ID starting with 8a6c4caab224a27d22a9142dc7e952a80974a62ec3a608d0509e3e03b1aca062 not found: ID does not exist" containerID="8a6c4caab224a27d22a9142dc7e952a80974a62ec3a608d0509e3e03b1aca062" Dec 08 17:45:44 crc kubenswrapper[5116]: I1208 17:45:44.875345 5116 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8a6c4caab224a27d22a9142dc7e952a80974a62ec3a608d0509e3e03b1aca062"} err="failed to get container status \"8a6c4caab224a27d22a9142dc7e952a80974a62ec3a608d0509e3e03b1aca062\": rpc error: code = NotFound desc = could not find container \"8a6c4caab224a27d22a9142dc7e952a80974a62ec3a608d0509e3e03b1aca062\": container with ID starting with 8a6c4caab224a27d22a9142dc7e952a80974a62ec3a608d0509e3e03b1aca062 not found: ID does not exist" Dec 08 17:45:44 crc kubenswrapper[5116]: I1208 17:45:44.875365 5116 scope.go:117] "RemoveContainer" containerID="2d8111a46c7720fdc55187a62e21d56405c23453eadcb58ce2e60afa9d805d99" Dec 08 17:45:44 crc kubenswrapper[5116]: E1208 17:45:44.875934 5116 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2d8111a46c7720fdc55187a62e21d56405c23453eadcb58ce2e60afa9d805d99\": container with ID starting with 2d8111a46c7720fdc55187a62e21d56405c23453eadcb58ce2e60afa9d805d99 not found: ID does not exist" 
containerID="2d8111a46c7720fdc55187a62e21d56405c23453eadcb58ce2e60afa9d805d99" Dec 08 17:45:44 crc kubenswrapper[5116]: I1208 17:45:44.875966 5116 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2d8111a46c7720fdc55187a62e21d56405c23453eadcb58ce2e60afa9d805d99"} err="failed to get container status \"2d8111a46c7720fdc55187a62e21d56405c23453eadcb58ce2e60afa9d805d99\": rpc error: code = NotFound desc = could not find container \"2d8111a46c7720fdc55187a62e21d56405c23453eadcb58ce2e60afa9d805d99\": container with ID starting with 2d8111a46c7720fdc55187a62e21d56405c23453eadcb58ce2e60afa9d805d99 not found: ID does not exist" Dec 08 17:45:44 crc kubenswrapper[5116]: I1208 17:45:44.875986 5116 scope.go:117] "RemoveContainer" containerID="ca6d941896a3977ebbfc764c7deeb546d556e330d3ae8a85f1920e3bc996c549" Dec 08 17:45:44 crc kubenswrapper[5116]: E1208 17:45:44.876667 5116 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ca6d941896a3977ebbfc764c7deeb546d556e330d3ae8a85f1920e3bc996c549\": container with ID starting with ca6d941896a3977ebbfc764c7deeb546d556e330d3ae8a85f1920e3bc996c549 not found: ID does not exist" containerID="ca6d941896a3977ebbfc764c7deeb546d556e330d3ae8a85f1920e3bc996c549" Dec 08 17:45:44 crc kubenswrapper[5116]: I1208 17:45:44.876732 5116 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ca6d941896a3977ebbfc764c7deeb546d556e330d3ae8a85f1920e3bc996c549"} err="failed to get container status \"ca6d941896a3977ebbfc764c7deeb546d556e330d3ae8a85f1920e3bc996c549\": rpc error: code = NotFound desc = could not find container \"ca6d941896a3977ebbfc764c7deeb546d556e330d3ae8a85f1920e3bc996c549\": container with ID starting with ca6d941896a3977ebbfc764c7deeb546d556e330d3ae8a85f1920e3bc996c549 not found: ID does not exist" Dec 08 17:45:45 crc kubenswrapper[5116]: E1208 17:45:45.123703 5116 controller.go:145] 
"Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.128:6443: connect: connection refused" interval="800ms" Dec 08 17:45:45 crc kubenswrapper[5116]: E1208 17:45:45.924686 5116 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.128:6443: connect: connection refused" interval="1.6s" Dec 08 17:45:46 crc kubenswrapper[5116]: E1208 17:45:46.455467 5116 kubelet.go:3342] "Failed creating a mirror pod" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods\": dial tcp 38.102.83.128:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 08 17:45:46 crc kubenswrapper[5116]: I1208 17:45:46.456042 5116 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 08 17:45:46 crc kubenswrapper[5116]: W1208 17:45:46.502375 5116 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf7dbc7e1ee9c187a863ef9b473fad27b.slice/crio-8dae682854eb60e684797dcdd905059a5f428f5bf87bff7b3f26c39ba56daed0 WatchSource:0}: Error finding container 8dae682854eb60e684797dcdd905059a5f428f5bf87bff7b3f26c39ba56daed0: Status 404 returned error can't find the container with id 8dae682854eb60e684797dcdd905059a5f428f5bf87bff7b3f26c39ba56daed0 Dec 08 17:45:46 crc kubenswrapper[5116]: I1208 17:45:46.696556 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f7dbc7e1ee9c187a863ef9b473fad27b","Type":"ContainerStarted","Data":"8dae682854eb60e684797dcdd905059a5f428f5bf87bff7b3f26c39ba56daed0"} Dec 08 17:45:47 crc kubenswrapper[5116]: E1208 17:45:47.525619 5116 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.128:6443: connect: connection refused" interval="3.2s" Dec 08 17:45:47 crc kubenswrapper[5116]: I1208 17:45:47.710168 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f7dbc7e1ee9c187a863ef9b473fad27b","Type":"ContainerStarted","Data":"987ad6f6b3d67873d50cfb78891e815b593e768bd470dba428de84b8bdac4f65"} Dec 08 17:45:47 crc kubenswrapper[5116]: I1208 17:45:47.710474 5116 kubelet.go:3340] "Creating a mirror pod for static pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 08 17:45:47 crc kubenswrapper[5116]: I1208 17:45:47.710980 5116 status_manager.go:895] "Failed to get status for pod" podUID="1e2dc1a4-c295-4f2b-b167-fdb40f1e6b0d" 
pod="openshift-controller-manager/controller-manager-766495d899-4wfjn" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-766495d899-4wfjn\": dial tcp 38.102.83.128:6443: connect: connection refused" Dec 08 17:45:47 crc kubenswrapper[5116]: E1208 17:45:47.711254 5116 kubelet.go:3342] "Failed creating a mirror pod" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods\": dial tcp 38.102.83.128:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 08 17:45:47 crc kubenswrapper[5116]: I1208 17:45:47.711938 5116 status_manager.go:895] "Failed to get status for pod" podUID="e28dd1b1-e2ad-4e02-a1cc-90f8e0f93a6c" pod="openshift-kube-apiserver/installer-12-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-12-crc\": dial tcp 38.102.83.128:6443: connect: connection refused" Dec 08 17:45:48 crc kubenswrapper[5116]: I1208 17:45:48.716948 5116 kubelet.go:3340] "Creating a mirror pod for static pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 08 17:45:48 crc kubenswrapper[5116]: E1208 17:45:48.718025 5116 kubelet.go:3342] "Failed creating a mirror pod" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods\": dial tcp 38.102.83.128:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 08 17:45:50 crc kubenswrapper[5116]: I1208 17:45:50.691222 5116 status_manager.go:895] "Failed to get status for pod" podUID="1e2dc1a4-c295-4f2b-b167-fdb40f1e6b0d" pod="openshift-controller-manager/controller-manager-766495d899-4wfjn" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-766495d899-4wfjn\": dial tcp 38.102.83.128:6443: connect: connection refused" Dec 08 17:45:50 crc kubenswrapper[5116]: I1208 
17:45:50.691645 5116 status_manager.go:895] "Failed to get status for pod" podUID="e28dd1b1-e2ad-4e02-a1cc-90f8e0f93a6c" pod="openshift-kube-apiserver/installer-12-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-12-crc\": dial tcp 38.102.83.128:6443: connect: connection refused" Dec 08 17:45:50 crc kubenswrapper[5116]: E1208 17:45:50.726856 5116 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.128:6443: connect: connection refused" interval="6.4s" Dec 08 17:45:53 crc kubenswrapper[5116]: I1208 17:45:53.760136 5116 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_9f0bc7fcb0822a2c13eb2d22cd8c0641/kube-controller-manager/0.log" Dec 08 17:45:53 crc kubenswrapper[5116]: I1208 17:45:53.760209 5116 generic.go:358] "Generic (PLEG): container finished" podID="9f0bc7fcb0822a2c13eb2d22cd8c0641" containerID="dbd177b43687887cb390c8c11a09d2c831ab72e0cd7faa9ffbf86ab90e577e90" exitCode=1 Dec 08 17:45:53 crc kubenswrapper[5116]: I1208 17:45:53.760450 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"9f0bc7fcb0822a2c13eb2d22cd8c0641","Type":"ContainerDied","Data":"dbd177b43687887cb390c8c11a09d2c831ab72e0cd7faa9ffbf86ab90e577e90"} Dec 08 17:45:53 crc kubenswrapper[5116]: I1208 17:45:53.761347 5116 scope.go:117] "RemoveContainer" containerID="dbd177b43687887cb390c8c11a09d2c831ab72e0cd7faa9ffbf86ab90e577e90" Dec 08 17:45:53 crc kubenswrapper[5116]: I1208 17:45:53.762325 5116 status_manager.go:895] "Failed to get status for pod" podUID="1e2dc1a4-c295-4f2b-b167-fdb40f1e6b0d" pod="openshift-controller-manager/controller-manager-766495d899-4wfjn" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-766495d899-4wfjn\": dial tcp 38.102.83.128:6443: connect: connection refused" Dec 08 17:45:53 crc kubenswrapper[5116]: I1208 17:45:53.762962 5116 status_manager.go:895] "Failed to get status for pod" podUID="9f0bc7fcb0822a2c13eb2d22cd8c0641" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.128:6443: connect: connection refused" Dec 08 17:45:53 crc kubenswrapper[5116]: I1208 17:45:53.763602 5116 status_manager.go:895] "Failed to get status for pod" podUID="e28dd1b1-e2ad-4e02-a1cc-90f8e0f93a6c" pod="openshift-kube-apiserver/installer-12-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-12-crc\": dial tcp 38.102.83.128:6443: connect: connection refused" Dec 08 17:45:54 crc kubenswrapper[5116]: I1208 17:45:54.660968 5116 patch_prober.go:28] interesting pod/controller-manager-766495d899-4wfjn container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.67:8443/healthz\": context deadline exceeded" start-of-body= Dec 08 17:45:54 crc kubenswrapper[5116]: I1208 17:45:54.661419 5116 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-766495d899-4wfjn" podUID="1e2dc1a4-c295-4f2b-b167-fdb40f1e6b0d" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.67:8443/healthz\": context deadline exceeded" Dec 08 17:45:54 crc kubenswrapper[5116]: E1208 17:45:54.683556 5116 event.go:368] "Unable to write event (may retry after sleeping)" err="Patch \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/events/controller-manager-766495d899-4wfjn.187f4e8b2f118ef3\": dial tcp 
38.102.83.128:6443: connect: connection refused" event="&Event{ObjectMeta:{controller-manager-766495d899-4wfjn.187f4e8b2f118ef3 openshift-controller-manager 39230 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-controller-manager,Name:controller-manager-766495d899-4wfjn,UID:1e2dc1a4-c295-4f2b-b167-fdb40f1e6b0d,APIVersion:v1,ResourceVersion:39213,FieldPath:spec.containers{controller-manager},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9506bdcf97d5200cf2cf4cdf110aebafdd141a24f6589bf1e1cfe27bb7fc1ed2\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 17:45:38 +0000 UTC,LastTimestamp:2025-12-08 17:45:41.624348295 +0000 UTC m=+211.421471519,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 17:45:54 crc kubenswrapper[5116]: I1208 17:45:54.772211 5116 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_9f0bc7fcb0822a2c13eb2d22cd8c0641/kube-controller-manager/0.log" Dec 08 17:45:54 crc kubenswrapper[5116]: I1208 17:45:54.772434 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"9f0bc7fcb0822a2c13eb2d22cd8c0641","Type":"ContainerStarted","Data":"8976ef9332aad83cf0e5d36d15b802cd5ab1475019c29a3647527aeb31e03055"} Dec 08 17:45:54 crc kubenswrapper[5116]: I1208 17:45:54.774285 5116 status_manager.go:895] "Failed to get status for pod" podUID="e28dd1b1-e2ad-4e02-a1cc-90f8e0f93a6c" pod="openshift-kube-apiserver/installer-12-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-12-crc\": dial tcp 38.102.83.128:6443: connect: connection refused" Dec 08 17:45:54 crc kubenswrapper[5116]: I1208 17:45:54.775011 5116 
status_manager.go:895] "Failed to get status for pod" podUID="1e2dc1a4-c295-4f2b-b167-fdb40f1e6b0d" pod="openshift-controller-manager/controller-manager-766495d899-4wfjn" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-766495d899-4wfjn\": dial tcp 38.102.83.128:6443: connect: connection refused" Dec 08 17:45:54 crc kubenswrapper[5116]: I1208 17:45:54.775350 5116 status_manager.go:895] "Failed to get status for pod" podUID="9f0bc7fcb0822a2c13eb2d22cd8c0641" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.128:6443: connect: connection refused" Dec 08 17:45:55 crc kubenswrapper[5116]: I1208 17:45:55.679372 5116 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 08 17:45:55 crc kubenswrapper[5116]: I1208 17:45:55.680448 5116 status_manager.go:895] "Failed to get status for pod" podUID="1e2dc1a4-c295-4f2b-b167-fdb40f1e6b0d" pod="openshift-controller-manager/controller-manager-766495d899-4wfjn" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-766495d899-4wfjn\": dial tcp 38.102.83.128:6443: connect: connection refused" Dec 08 17:45:55 crc kubenswrapper[5116]: I1208 17:45:55.681342 5116 status_manager.go:895] "Failed to get status for pod" podUID="9f0bc7fcb0822a2c13eb2d22cd8c0641" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.128:6443: connect: connection refused" Dec 08 17:45:55 crc kubenswrapper[5116]: I1208 17:45:55.681826 5116 status_manager.go:895] "Failed to get status for pod" podUID="e28dd1b1-e2ad-4e02-a1cc-90f8e0f93a6c" 
pod="openshift-kube-apiserver/installer-12-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-12-crc\": dial tcp 38.102.83.128:6443: connect: connection refused" Dec 08 17:45:55 crc kubenswrapper[5116]: I1208 17:45:55.696202 5116 kubelet.go:3323] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="189e0ebf-9023-4b40-8604-9b4c2dab2104" Dec 08 17:45:55 crc kubenswrapper[5116]: I1208 17:45:55.696263 5116 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="189e0ebf-9023-4b40-8604-9b4c2dab2104" Dec 08 17:45:55 crc kubenswrapper[5116]: E1208 17:45:55.696782 5116 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.128:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 08 17:45:55 crc kubenswrapper[5116]: I1208 17:45:55.697035 5116 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 08 17:45:55 crc kubenswrapper[5116]: W1208 17:45:55.717218 5116 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod57755cc5f99000cc11e193051474d4e2.slice/crio-5bbd39a63084f7baf8bdb6a30343a735ca6d223c1064ff5f9a8b69f495fcde7a WatchSource:0}: Error finding container 5bbd39a63084f7baf8bdb6a30343a735ca6d223c1064ff5f9a8b69f495fcde7a: Status 404 returned error can't find the container with id 5bbd39a63084f7baf8bdb6a30343a735ca6d223c1064ff5f9a8b69f495fcde7a Dec 08 17:45:55 crc kubenswrapper[5116]: I1208 17:45:55.781575 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"57755cc5f99000cc11e193051474d4e2","Type":"ContainerStarted","Data":"5bbd39a63084f7baf8bdb6a30343a735ca6d223c1064ff5f9a8b69f495fcde7a"} Dec 08 17:45:56 crc kubenswrapper[5116]: I1208 17:45:56.788266 5116 generic.go:358] "Generic (PLEG): container finished" podID="57755cc5f99000cc11e193051474d4e2" containerID="44fe8c6a71fa848469702faf7574e8a77242f703b5a7d2af7c5ff51b8872e223" exitCode=0 Dec 08 17:45:56 crc kubenswrapper[5116]: I1208 17:45:56.788614 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"57755cc5f99000cc11e193051474d4e2","Type":"ContainerDied","Data":"44fe8c6a71fa848469702faf7574e8a77242f703b5a7d2af7c5ff51b8872e223"} Dec 08 17:45:56 crc kubenswrapper[5116]: I1208 17:45:56.788904 5116 kubelet.go:3323] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="189e0ebf-9023-4b40-8604-9b4c2dab2104" Dec 08 17:45:56 crc kubenswrapper[5116]: I1208 17:45:56.789008 5116 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="189e0ebf-9023-4b40-8604-9b4c2dab2104" Dec 08 17:45:56 crc kubenswrapper[5116]: E1208 17:45:56.789437 5116 mirror_client.go:138] 
"Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.128:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 08 17:45:56 crc kubenswrapper[5116]: I1208 17:45:56.789441 5116 status_manager.go:895] "Failed to get status for pod" podUID="e28dd1b1-e2ad-4e02-a1cc-90f8e0f93a6c" pod="openshift-kube-apiserver/installer-12-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-12-crc\": dial tcp 38.102.83.128:6443: connect: connection refused" Dec 08 17:45:56 crc kubenswrapper[5116]: I1208 17:45:56.789833 5116 status_manager.go:895] "Failed to get status for pod" podUID="1e2dc1a4-c295-4f2b-b167-fdb40f1e6b0d" pod="openshift-controller-manager/controller-manager-766495d899-4wfjn" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-766495d899-4wfjn\": dial tcp 38.102.83.128:6443: connect: connection refused" Dec 08 17:45:56 crc kubenswrapper[5116]: I1208 17:45:56.790052 5116 status_manager.go:895] "Failed to get status for pod" podUID="9f0bc7fcb0822a2c13eb2d22cd8c0641" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.128:6443: connect: connection refused" Dec 08 17:45:57 crc kubenswrapper[5116]: E1208 17:45:57.127896 5116 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.128:6443: connect: connection refused" interval="7s" Dec 08 17:45:57 crc kubenswrapper[5116]: I1208 17:45:57.830119 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" 
event={"ID":"57755cc5f99000cc11e193051474d4e2","Type":"ContainerStarted","Data":"4011d11a2e845ef857d7b79ccc5bd64cb216248c1ab20cc1c8871e433c1ce016"} Dec 08 17:45:57 crc kubenswrapper[5116]: I1208 17:45:57.830562 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"57755cc5f99000cc11e193051474d4e2","Type":"ContainerStarted","Data":"e5c28bfd891e32cd666b04f75df40dbaa4d83a92642f2d572f782376babb2c2a"} Dec 08 17:45:57 crc kubenswrapper[5116]: I1208 17:45:57.830580 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"57755cc5f99000cc11e193051474d4e2","Type":"ContainerStarted","Data":"117b22fab68810607476199397b458a069c18ea571ca5ba4587fd9b3bddfd04f"} Dec 08 17:45:58 crc kubenswrapper[5116]: I1208 17:45:58.261252 5116 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 08 17:45:58 crc kubenswrapper[5116]: I1208 17:45:58.269527 5116 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 08 17:45:58 crc kubenswrapper[5116]: I1208 17:45:58.839401 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"57755cc5f99000cc11e193051474d4e2","Type":"ContainerStarted","Data":"04b986c288f7c84af4212da4a02ee2046e7e99ff8e02276a14c882c19ec37b11"} Dec 08 17:45:58 crc kubenswrapper[5116]: I1208 17:45:58.839722 5116 kubelet.go:3323] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="189e0ebf-9023-4b40-8604-9b4c2dab2104" Dec 08 17:45:58 crc kubenswrapper[5116]: I1208 17:45:58.839907 5116 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="189e0ebf-9023-4b40-8604-9b4c2dab2104" Dec 08 17:45:58 crc kubenswrapper[5116]: I1208 17:45:58.839995 5116 
kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 08 17:45:58 crc kubenswrapper[5116]: I1208 17:45:58.840053 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"57755cc5f99000cc11e193051474d4e2","Type":"ContainerStarted","Data":"35f00999c35b9b090825ee8bb2ba2c4c7f86e9c50329870cc8128241c9dd68e0"} Dec 08 17:45:58 crc kubenswrapper[5116]: I1208 17:45:58.840085 5116 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 08 17:46:00 crc kubenswrapper[5116]: I1208 17:46:00.697371 5116 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 08 17:46:00 crc kubenswrapper[5116]: I1208 17:46:00.697430 5116 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 08 17:46:00 crc kubenswrapper[5116]: I1208 17:46:00.703417 5116 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 08 17:46:03 crc kubenswrapper[5116]: I1208 17:46:03.334739 5116 patch_prober.go:28] interesting pod/machine-config-daemon-frh5r container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 08 17:46:03 crc kubenswrapper[5116]: I1208 17:46:03.335120 5116 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-frh5r" podUID="f2e88345-fa91-4bb3-bd9d-a89a8293bffe" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 08 17:46:03 crc 
kubenswrapper[5116]: I1208 17:46:03.853330 5116 kubelet.go:3329] "Deleted mirror pod as it didn't match the static Pod" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 08 17:46:03 crc kubenswrapper[5116]: I1208 17:46:03.853364 5116 kubelet.go:3340] "Creating a mirror pod for static pod" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 08 17:46:03 crc kubenswrapper[5116]: I1208 17:46:03.882121 5116 kubelet.go:3323] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="189e0ebf-9023-4b40-8604-9b4c2dab2104" Dec 08 17:46:03 crc kubenswrapper[5116]: I1208 17:46:03.882156 5116 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="189e0ebf-9023-4b40-8604-9b4c2dab2104" Dec 08 17:46:03 crc kubenswrapper[5116]: I1208 17:46:03.886749 5116 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 08 17:46:03 crc kubenswrapper[5116]: I1208 17:46:03.889547 5116 status_manager.go:905] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="57755cc5f99000cc11e193051474d4e2" podUID="1ac47f8f-9722-4978-86e1-f1a2616274ca" Dec 08 17:46:04 crc kubenswrapper[5116]: I1208 17:46:04.661330 5116 patch_prober.go:28] interesting pod/controller-manager-766495d899-4wfjn container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.67:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Dec 08 17:46:04 crc kubenswrapper[5116]: I1208 17:46:04.661483 5116 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-766495d899-4wfjn" podUID="1e2dc1a4-c295-4f2b-b167-fdb40f1e6b0d" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.67:8443/healthz\": 
net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Dec 08 17:46:04 crc kubenswrapper[5116]: I1208 17:46:04.887440 5116 kubelet.go:3323] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="189e0ebf-9023-4b40-8604-9b4c2dab2104"
Dec 08 17:46:04 crc kubenswrapper[5116]: I1208 17:46:04.887475 5116 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="189e0ebf-9023-4b40-8604-9b4c2dab2104"
Dec 08 17:46:09 crc kubenswrapper[5116]: I1208 17:46:09.850978 5116 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Dec 08 17:46:10 crc kubenswrapper[5116]: I1208 17:46:10.705222 5116 status_manager.go:905] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="57755cc5f99000cc11e193051474d4e2" podUID="1ac47f8f-9722-4978-86e1-f1a2616274ca"
Dec 08 17:46:12 crc kubenswrapper[5116]: I1208 17:46:12.951206 5116 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-controller-manager_controller-manager-766495d899-4wfjn_1e2dc1a4-c295-4f2b-b167-fdb40f1e6b0d/controller-manager/1.log"
Dec 08 17:46:12 crc kubenswrapper[5116]: I1208 17:46:12.952317 5116 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-controller-manager_controller-manager-766495d899-4wfjn_1e2dc1a4-c295-4f2b-b167-fdb40f1e6b0d/controller-manager/0.log"
Dec 08 17:46:12 crc kubenswrapper[5116]: I1208 17:46:12.952360 5116 generic.go:358] "Generic (PLEG): container finished" podID="1e2dc1a4-c295-4f2b-b167-fdb40f1e6b0d" containerID="cf948e9cc3862e68eb32b43b793ee1ec04dbba98b6d3026f59d4f0a879f2e4eb" exitCode=255
Dec 08 17:46:12 crc kubenswrapper[5116]: I1208 17:46:12.952403 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-766495d899-4wfjn" event={"ID":"1e2dc1a4-c295-4f2b-b167-fdb40f1e6b0d","Type":"ContainerDied","Data":"cf948e9cc3862e68eb32b43b793ee1ec04dbba98b6d3026f59d4f0a879f2e4eb"}
Dec 08 17:46:12 crc kubenswrapper[5116]: I1208 17:46:12.952468 5116 scope.go:117] "RemoveContainer" containerID="94ed32f3ecefc9e17a4d900f9b1d8eafc812fae10fb361b439204225240c66b0"
Dec 08 17:46:12 crc kubenswrapper[5116]: I1208 17:46:12.953286 5116 scope.go:117] "RemoveContainer" containerID="cf948e9cc3862e68eb32b43b793ee1ec04dbba98b6d3026f59d4f0a879f2e4eb"
Dec 08 17:46:12 crc kubenswrapper[5116]: E1208 17:46:12.953764 5116 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"controller-manager\" with CrashLoopBackOff: \"back-off 10s restarting failed container=controller-manager pod=controller-manager-766495d899-4wfjn_openshift-controller-manager(1e2dc1a4-c295-4f2b-b167-fdb40f1e6b0d)\"" pod="openshift-controller-manager/controller-manager-766495d899-4wfjn" podUID="1e2dc1a4-c295-4f2b-b167-fdb40f1e6b0d"
Dec 08 17:46:13 crc kubenswrapper[5116]: I1208 17:46:13.533704 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"catalog-operator-serving-cert\""
Dec 08 17:46:13 crc kubenswrapper[5116]: I1208 17:46:13.789799 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-service-ca-operator\"/\"openshift-service-ca.crt\""
Dec 08 17:46:13 crc kubenswrapper[5116]: I1208 17:46:13.860749 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress-operator\"/\"trusted-ca\""
Dec 08 17:46:13 crc kubenswrapper[5116]: I1208 17:46:13.952559 5116 reflector.go:430] "Caches populated" type="*v1.Pod" reflector="pkg/kubelet/config/apiserver.go:66"
Dec 08 17:46:13 crc kubenswrapper[5116]: I1208 17:46:13.959771 5116 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"]
Dec 08 17:46:13 crc kubenswrapper[5116]: I1208 17:46:13.959851 5116 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"]
Dec 08 17:46:13 crc kubenswrapper[5116]: I1208 17:46:13.960268 5116 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-controller-manager_controller-manager-766495d899-4wfjn_1e2dc1a4-c295-4f2b-b167-fdb40f1e6b0d/controller-manager/1.log"
Dec 08 17:46:13 crc kubenswrapper[5116]: I1208 17:46:13.960449 5116 kubelet.go:3323] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="189e0ebf-9023-4b40-8604-9b4c2dab2104"
Dec 08 17:46:13 crc kubenswrapper[5116]: I1208 17:46:13.960483 5116 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="189e0ebf-9023-4b40-8604-9b4c2dab2104"
Dec 08 17:46:13 crc kubenswrapper[5116]: I1208 17:46:13.964819 5116 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc"
Dec 08 17:46:13 crc kubenswrapper[5116]: I1208 17:46:13.988399 5116 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=10.988357217 podStartE2EDuration="10.988357217s" podCreationTimestamp="2025-12-08 17:46:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-08 17:46:13.979701572 +0000 UTC m=+243.776824816" watchObservedRunningTime="2025-12-08 17:46:13.988357217 +0000 UTC m=+243.785480471"
Dec 08 17:46:13 crc kubenswrapper[5116]: I1208 17:46:13.999278 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager\"/\"kube-root-ca.crt\""
Dec 08 17:46:14 crc kubenswrapper[5116]: I1208 17:46:14.215158 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"hostpath-provisioner\"/\"csi-hostpath-provisioner-sa-dockercfg-7dcws\""
Dec 08 17:46:14 crc kubenswrapper[5116]: I1208 17:46:14.264662 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-apiserver-operator\"/\"kube-apiserver-operator-config\""
Dec 08 17:46:14 crc kubenswrapper[5116]: I1208 17:46:14.541326 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager\"/\"openshift-global-ca\""
Dec 08 17:46:14 crc kubenswrapper[5116]: I1208 17:46:14.613451 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-route-controller-manager\"/\"config\""
Dec 08 17:46:14 crc kubenswrapper[5116]: I1208 17:46:14.622864 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication-operator\"/\"trusted-ca-bundle\""
Dec 08 17:46:14 crc kubenswrapper[5116]: I1208 17:46:14.693062 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"olm-operator-serviceaccount-dockercfg-4gqzj\""
Dec 08 17:46:14 crc kubenswrapper[5116]: I1208 17:46:14.706397 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-route-controller-manager\"/\"openshift-service-ca.crt\""
Dec 08 17:46:14 crc kubenswrapper[5116]: I1208 17:46:14.795956 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver-operator\"/\"openshift-service-ca.crt\""
Dec 08 17:46:14 crc kubenswrapper[5116]: I1208 17:46:14.957426 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress\"/\"router-certs-default\""
Dec 08 17:46:15 crc kubenswrapper[5116]: I1208 17:46:15.021039 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-user-idp-0-file-data\""
Dec 08 17:46:15 crc kubenswrapper[5116]: I1208 17:46:15.092743 5116 kubelet.go:2547] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"]
Dec 08 17:46:15 crc kubenswrapper[5116]: I1208 17:46:15.093305 5116 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podUID="f7dbc7e1ee9c187a863ef9b473fad27b" containerName="startup-monitor" containerID="cri-o://987ad6f6b3d67873d50cfb78891e815b593e768bd470dba428de84b8bdac4f65" gracePeriod=5
Dec 08 17:46:15 crc kubenswrapper[5116]: I1208 17:46:15.105489 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"installation-pull-secrets\""
Dec 08 17:46:15 crc kubenswrapper[5116]: I1208 17:46:15.176432 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ovn-kubernetes\"/\"ovn-kubernetes-node-dockercfg-l2v2m\""
Dec 08 17:46:15 crc kubenswrapper[5116]: I1208 17:46:15.271031 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-oauth-apiserver\"/\"encryption-config-1\""
Dec 08 17:46:15 crc kubenswrapper[5116]: I1208 17:46:15.521854 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-version\"/\"openshift-service-ca.crt\""
Dec 08 17:46:15 crc kubenswrapper[5116]: I1208 17:46:15.544565 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-etcd-operator\"/\"kube-root-ca.crt\""
Dec 08 17:46:15 crc kubenswrapper[5116]: I1208 17:46:15.593289 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-image-registry\"/\"image-registry-certificates\""
Dec 08 17:46:15 crc kubenswrapper[5116]: I1208 17:46:15.626162 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-machine-approver\"/\"kube-rbac-proxy\""
Dec 08 17:46:15 crc kubenswrapper[5116]: I1208 17:46:15.638692 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"mco-proxy-tls\""
Dec 08 17:46:16 crc kubenswrapper[5116]: I1208 17:46:16.017337 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-samples-operator\"/\"kube-root-ca.crt\""
Dec 08 17:46:16 crc kubenswrapper[5116]: I1208 17:46:16.054990 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"community-operators-dockercfg-vrd5f\""
Dec 08 17:46:16 crc kubenswrapper[5116]: I1208 17:46:16.188420 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console\"/\"kube-root-ca.crt\""
Dec 08 17:46:16 crc kubenswrapper[5116]: I1208 17:46:16.388141 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-etcd-operator\"/\"etcd-service-ca-bundle\""
Dec 08 17:46:16 crc kubenswrapper[5116]: I1208 17:46:16.749740 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager-operator\"/\"openshift-service-ca.crt\""
Dec 08 17:46:16 crc kubenswrapper[5116]: I1208 17:46:16.899651 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ovn-kubernetes\"/\"ovn-control-plane-metrics-cert\""
Dec 08 17:46:16 crc kubenswrapper[5116]: I1208 17:46:16.940681 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-operator-lifecycle-manager\"/\"openshift-service-ca.crt\""
Dec 08 17:46:16 crc kubenswrapper[5116]: I1208 17:46:16.975138 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-operator-lifecycle-manager\"/\"kube-root-ca.crt\""
Dec 08 17:46:17 crc kubenswrapper[5116]: I1208 17:46:17.127784 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-machine-approver\"/\"kube-root-ca.crt\""
Dec 08 17:46:17 crc kubenswrapper[5116]: I1208 17:46:17.205570 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-storage-version-migrator-operator\"/\"kube-storage-version-migrator-operator-dockercfg-2h6bs\""
Dec 08 17:46:17 crc kubenswrapper[5116]: I1208 17:46:17.343685 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-dns-operator\"/\"kube-root-ca.crt\""
Dec 08 17:46:17 crc kubenswrapper[5116]: I1208 17:46:17.377760 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console\"/\"trusted-ca-bundle\""
Dec 08 17:46:17 crc kubenswrapper[5116]: I1208 17:46:17.402487 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-etcd-operator\"/\"etcd-operator-serving-cert\""
Dec 08 17:46:17 crc kubenswrapper[5116]: I1208 17:46:17.432737 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-config-operator\"/\"config-operator-serving-cert\""
Dec 08 17:46:17 crc kubenswrapper[5116]: I1208 17:46:17.446547 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-config-operator\"/\"kube-root-ca.crt\""
Dec 08 17:46:17 crc kubenswrapper[5116]: I1208 17:46:17.461073 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ovn-kubernetes\"/\"openshift-service-ca.crt\""
Dec 08 17:46:17 crc kubenswrapper[5116]: I1208 17:46:17.569642 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"kube-root-ca.crt\""
Dec 08 17:46:17 crc kubenswrapper[5116]: I1208 17:46:17.675298 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver\"/\"config\""
Dec 08 17:46:17 crc kubenswrapper[5116]: I1208 17:46:17.690616 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-oauth-apiserver\"/\"kube-root-ca.crt\""
Dec 08 17:46:17 crc kubenswrapper[5116]: I1208 17:46:17.732666 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"marketplace-operator-metrics\""
Dec 08 17:46:17 crc kubenswrapper[5116]: I1208 17:46:17.862821 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-samples-operator\"/\"openshift-service-ca.crt\""
Dec 08 17:46:18 crc kubenswrapper[5116]: I1208 17:46:18.002970 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-service-ca\"/\"signing-key\""
Dec 08 17:46:18 crc kubenswrapper[5116]: I1208 17:46:18.083005 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-marketplace\"/\"openshift-service-ca.crt\""
Dec 08 17:46:18 crc kubenswrapper[5116]: I1208 17:46:18.188226 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"cluster-image-registry-operator-dockercfg-ntnd7\""
Dec 08 17:46:18 crc kubenswrapper[5116]: I1208 17:46:18.354898 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress-canary\"/\"kube-root-ca.crt\""
Dec 08 17:46:18 crc kubenswrapper[5116]: I1208 17:46:18.370853 5116 kubelet.go:2658] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-controller-manager/controller-manager-766495d899-4wfjn"
Dec 08 17:46:18 crc kubenswrapper[5116]: I1208 17:46:18.371544 5116 scope.go:117] "RemoveContainer" containerID="cf948e9cc3862e68eb32b43b793ee1ec04dbba98b6d3026f59d4f0a879f2e4eb"
Dec 08 17:46:18 crc kubenswrapper[5116]: E1208 17:46:18.371934 5116 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"controller-manager\" with CrashLoopBackOff: \"back-off 10s restarting failed container=controller-manager pod=controller-manager-766495d899-4wfjn_openshift-controller-manager(1e2dc1a4-c295-4f2b-b167-fdb40f1e6b0d)\"" pod="openshift-controller-manager/controller-manager-766495d899-4wfjn" podUID="1e2dc1a4-c295-4f2b-b167-fdb40f1e6b0d"
Dec 08 17:46:18 crc kubenswrapper[5116]: I1208 17:46:18.383097 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-config-operator\"/\"openshift-service-ca.crt\""
Dec 08 17:46:18 crc kubenswrapper[5116]: I1208 17:46:18.438117 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-apiserver-operator\"/\"openshift-apiserver-operator-dockercfg-6c46w\""
Dec 08 17:46:18 crc kubenswrapper[5116]: I1208 17:46:18.457841 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress-operator\"/\"openshift-service-ca.crt\""
Dec 08 17:46:18 crc kubenswrapper[5116]: I1208 17:46:18.527621 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-network-operator\"/\"metrics-tls\""
Dec 08 17:46:18 crc kubenswrapper[5116]: I1208 17:46:18.538031 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ovn-kubernetes\"/\"ovnkube-config\""
Dec 08 17:46:18 crc kubenswrapper[5116]: I1208 17:46:18.560523 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-dns-operator\"/\"openshift-service-ca.crt\""
Dec 08 17:46:18 crc kubenswrapper[5116]: I1208 17:46:18.631900 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-network-console\"/\"networking-console-plugin-cert\""
Dec 08 17:46:18 crc kubenswrapper[5116]: I1208 17:46:18.931479 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-apiserver-operator\"/\"kube-apiserver-operator-serving-cert\""
Dec 08 17:46:18 crc kubenswrapper[5116]: I1208 17:46:18.962334 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-api\"/\"machine-api-operator-images\""
Dec 08 17:46:18 crc kubenswrapper[5116]: I1208 17:46:18.989259 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-service-ca-operator\"/\"serving-cert\""
Dec 08 17:46:19 crc kubenswrapper[5116]: I1208 17:46:19.063323 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-network-node-identity\"/\"network-node-identity-cert\""
Dec 08 17:46:19 crc kubenswrapper[5116]: I1208 17:46:19.084607 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress-operator\"/\"ingress-operator-dockercfg-74nwh\""
Dec 08 17:46:19 crc kubenswrapper[5116]: I1208 17:46:19.195606 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-node-identity\"/\"ovnkube-identity-cm\""
Dec 08 17:46:19 crc kubenswrapper[5116]: I1208 17:46:19.299481 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console\"/\"openshift-service-ca.crt\""
Dec 08 17:46:19 crc kubenswrapper[5116]: I1208 17:46:19.369009 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"redhat-operators-dockercfg-9gxlh\""
Dec 08 17:46:19 crc kubenswrapper[5116]: I1208 17:46:19.370661 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-controller-manager-operator\"/\"kube-controller-manager-operator-config\""
Dec 08 17:46:19 crc kubenswrapper[5116]: I1208 17:46:19.370890 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver\"/\"kube-root-ca.crt\""
Dec 08 17:46:19 crc kubenswrapper[5116]: I1208 17:46:19.456612 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"multus-ancillary-tools-dockercfg-nwglk\""
Dec 08 17:46:19 crc kubenswrapper[5116]: I1208 17:46:19.463189 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-console\"/\"networking-console-plugin\""
Dec 08 17:46:19 crc kubenswrapper[5116]: I1208 17:46:19.566230 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-api\"/\"kube-root-ca.crt\""
Dec 08 17:46:19 crc kubenswrapper[5116]: I1208 17:46:19.615351 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-scheduler-operator\"/\"openshift-kube-scheduler-operator-config\""
Dec 08 17:46:19 crc kubenswrapper[5116]: I1208 17:46:19.641457 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress-canary\"/\"canary-serving-cert\""
Dec 08 17:46:19 crc kubenswrapper[5116]: I1208 17:46:19.646701 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress-operator\"/\"kube-root-ca.crt\""
Dec 08 17:46:19 crc kubenswrapper[5116]: I1208 17:46:19.686414 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-storage-version-migrator-operator\"/\"config\""
Dec 08 17:46:19 crc kubenswrapper[5116]: I1208 17:46:19.742229 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console\"/\"oauth-serving-cert\""
Dec 08 17:46:19 crc kubenswrapper[5116]: I1208 17:46:19.828673 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-controller-manager-operator\"/\"kube-controller-manager-operator-serving-cert\""
Dec 08 17:46:19 crc kubenswrapper[5116]: I1208 17:46:19.920710 5116 reflector.go:430] "Caches populated" type="*v1.CSIDriver" reflector="k8s.io/client-go/informers/factory.go:160"
Dec 08 17:46:19 crc kubenswrapper[5116]: I1208 17:46:19.933186 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"packageserver-service-cert\""
Dec 08 17:46:19 crc kubenswrapper[5116]: I1208 17:46:19.976765 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication\"/\"openshift-service-ca.crt\""
Dec 08 17:46:20 crc kubenswrapper[5116]: I1208 17:46:20.056056 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-diagnostics\"/\"kube-root-ca.crt\""
Dec 08 17:46:20 crc kubenswrapper[5116]: I1208 17:46:20.082952 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager\"/\"openshift-service-ca.crt\""
Dec 08 17:46:20 crc kubenswrapper[5116]: I1208 17:46:20.104195 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ovn-kubernetes\"/\"env-overrides\""
Dec 08 17:46:20 crc kubenswrapper[5116]: I1208 17:46:20.116219 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-diagnostics\"/\"openshift-service-ca.crt\""
Dec 08 17:46:20 crc kubenswrapper[5116]: I1208 17:46:20.150967 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager\"/\"client-ca\""
Dec 08 17:46:20 crc kubenswrapper[5116]: I1208 17:46:20.240711 5116 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f7dbc7e1ee9c187a863ef9b473fad27b/startup-monitor/0.log"
Dec 08 17:46:20 crc kubenswrapper[5116]: I1208 17:46:20.240787 5116 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Dec 08 17:46:20 crc kubenswrapper[5116]: I1208 17:46:20.242697 5116 status_manager.go:895] "Failed to get status for pod" podUID="f7dbc7e1ee9c187a863ef9b473fad27b" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="pods \"kube-apiserver-startup-monitor-crc\" is forbidden: User \"system:node:crc\" cannot get resource \"pods\" in API group \"\" in the namespace \"openshift-kube-apiserver\": no relationship found between node 'crc' and this object"
Dec 08 17:46:20 crc kubenswrapper[5116]: I1208 17:46:20.263669 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"mcc-proxy-tls\""
Dec 08 17:46:20 crc kubenswrapper[5116]: I1208 17:46:20.283590 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-oauth-apiserver\"/\"audit-1\""
Dec 08 17:46:20 crc kubenswrapper[5116]: I1208 17:46:20.290875 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-apiserver-operator\"/\"kube-root-ca.crt\""
Dec 08 17:46:20 crc kubenswrapper[5116]: I1208 17:46:20.295513 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"hostpath-provisioner\"/\"openshift-service-ca.crt\""
Dec 08 17:46:20 crc kubenswrapper[5116]: I1208 17:46:20.312377 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-node-identity\"/\"kube-root-ca.crt\""
Dec 08 17:46:20 crc kubenswrapper[5116]: I1208 17:46:20.320233 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-service-ca\"/\"signing-cabundle\""
Dec 08 17:46:20 crc kubenswrapper[5116]: I1208 17:46:20.323886 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-var-lock\") pod \"f7dbc7e1ee9c187a863ef9b473fad27b\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") "
Dec 08 17:46:20 crc kubenswrapper[5116]: I1208 17:46:20.323944 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-var-log\") pod \"f7dbc7e1ee9c187a863ef9b473fad27b\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") "
Dec 08 17:46:20 crc kubenswrapper[5116]: I1208 17:46:20.323969 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-var-lock" (OuterVolumeSpecName: "var-lock") pod "f7dbc7e1ee9c187a863ef9b473fad27b" (UID: "f7dbc7e1ee9c187a863ef9b473fad27b"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Dec 08 17:46:20 crc kubenswrapper[5116]: I1208 17:46:20.324027 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-resource-dir\") pod \"f7dbc7e1ee9c187a863ef9b473fad27b\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") "
Dec 08 17:46:20 crc kubenswrapper[5116]: I1208 17:46:20.324070 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-var-log" (OuterVolumeSpecName: "var-log") pod "f7dbc7e1ee9c187a863ef9b473fad27b" (UID: "f7dbc7e1ee9c187a863ef9b473fad27b"). InnerVolumeSpecName "var-log". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Dec 08 17:46:20 crc kubenswrapper[5116]: I1208 17:46:20.324114 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-pod-resource-dir\") pod \"f7dbc7e1ee9c187a863ef9b473fad27b\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") "
Dec 08 17:46:20 crc kubenswrapper[5116]: I1208 17:46:20.324187 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-manifests\") pod \"f7dbc7e1ee9c187a863ef9b473fad27b\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") "
Dec 08 17:46:20 crc kubenswrapper[5116]: I1208 17:46:20.324194 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f7dbc7e1ee9c187a863ef9b473fad27b" (UID: "f7dbc7e1ee9c187a863ef9b473fad27b"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Dec 08 17:46:20 crc kubenswrapper[5116]: I1208 17:46:20.324364 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-manifests" (OuterVolumeSpecName: "manifests") pod "f7dbc7e1ee9c187a863ef9b473fad27b" (UID: "f7dbc7e1ee9c187a863ef9b473fad27b"). InnerVolumeSpecName "manifests". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Dec 08 17:46:20 crc kubenswrapper[5116]: I1208 17:46:20.324452 5116 reconciler_common.go:299] "Volume detached for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-manifests\") on node \"crc\" DevicePath \"\""
Dec 08 17:46:20 crc kubenswrapper[5116]: I1208 17:46:20.324471 5116 reconciler_common.go:299] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-var-lock\") on node \"crc\" DevicePath \"\""
Dec 08 17:46:20 crc kubenswrapper[5116]: I1208 17:46:20.324484 5116 reconciler_common.go:299] "Volume detached for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-var-log\") on node \"crc\" DevicePath \"\""
Dec 08 17:46:20 crc kubenswrapper[5116]: I1208 17:46:20.324499 5116 reconciler_common.go:299] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-resource-dir\") on node \"crc\" DevicePath \"\""
Dec 08 17:46:20 crc kubenswrapper[5116]: I1208 17:46:20.329948 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress\"/\"router-metrics-certs-default\""
Dec 08 17:46:20 crc kubenswrapper[5116]: I1208 17:46:20.334239 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-pod-resource-dir" (OuterVolumeSpecName: "pod-resource-dir") pod "f7dbc7e1ee9c187a863ef9b473fad27b" (UID: "f7dbc7e1ee9c187a863ef9b473fad27b"). InnerVolumeSpecName "pod-resource-dir". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Dec 08 17:46:20 crc kubenswrapper[5116]: I1208 17:46:20.368956 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"machine-config-server-dockercfg-dzw6b\""
Dec 08 17:46:20 crc kubenswrapper[5116]: I1208 17:46:20.425560 5116 reconciler_common.go:299] "Volume detached for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-pod-resource-dir\") on node \"crc\" DevicePath \"\""
Dec 08 17:46:20 crc kubenswrapper[5116]: I1208 17:46:20.427145 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication-operator\"/\"kube-root-ca.crt\""
Dec 08 17:46:20 crc kubenswrapper[5116]: I1208 17:46:20.428382 5116 reflector.go:430] "Caches populated" type="*v1.RuntimeClass" reflector="k8s.io/client-go/informers/factory.go:160"
Dec 08 17:46:20 crc kubenswrapper[5116]: I1208 17:46:20.445042 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-system-serving-cert\""
Dec 08 17:46:20 crc kubenswrapper[5116]: I1208 17:46:20.464643 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-route-controller-manager\"/\"client-ca\""
Dec 08 17:46:20 crc kubenswrapper[5116]: I1208 17:46:20.471542 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-dns-operator\"/\"metrics-tls\""
Dec 08 17:46:20 crc kubenswrapper[5116]: I1208 17:46:20.530032 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console-operator\"/\"trusted-ca\""
Dec 08 17:46:20 crc kubenswrapper[5116]: I1208 17:46:20.686890 5116 status_manager.go:895] "Failed to get status for pod" podUID="f7dbc7e1ee9c187a863ef9b473fad27b" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="pods \"kube-apiserver-startup-monitor-crc\" is forbidden: User \"system:node:crc\" cannot get resource \"pods\" in API group \"\" in the namespace \"openshift-kube-apiserver\": no relationship found between node 'crc' and this object"
Dec 08 17:46:20 crc kubenswrapper[5116]: I1208 17:46:20.689912 5116 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f7dbc7e1ee9c187a863ef9b473fad27b" path="/var/lib/kubelet/pods/f7dbc7e1ee9c187a863ef9b473fad27b/volumes"
Dec 08 17:46:20 crc kubenswrapper[5116]: I1208 17:46:20.722771 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-apiserver\"/\"encryption-config-1\""
Dec 08 17:46:20 crc kubenswrapper[5116]: I1208 17:46:20.878084 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ovn-kubernetes\"/\"ovnkube-script-lib\""
Dec 08 17:46:20 crc kubenswrapper[5116]: I1208 17:46:20.902961 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"machine-config-daemon-dockercfg-w9nzh\""
Dec 08 17:46:20 crc kubenswrapper[5116]: I1208 17:46:20.904949 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-storage-version-migrator\"/\"openshift-service-ca.crt\""
Dec 08 17:46:20 crc kubenswrapper[5116]: I1208 17:46:20.925916 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-api\"/\"openshift-service-ca.crt\""
Dec 08 17:46:21 crc kubenswrapper[5116]: I1208 17:46:21.003332 5116 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f7dbc7e1ee9c187a863ef9b473fad27b/startup-monitor/0.log"
Dec 08 17:46:21 crc kubenswrapper[5116]: I1208 17:46:21.003411 5116 generic.go:358] "Generic (PLEG): container finished" podID="f7dbc7e1ee9c187a863ef9b473fad27b" containerID="987ad6f6b3d67873d50cfb78891e815b593e768bd470dba428de84b8bdac4f65" exitCode=137
Dec 08 17:46:21 crc kubenswrapper[5116]: I1208 17:46:21.003502 5116 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Dec 08 17:46:21 crc kubenswrapper[5116]: I1208 17:46:21.003571 5116 scope.go:117] "RemoveContainer" containerID="987ad6f6b3d67873d50cfb78891e815b593e768bd470dba428de84b8bdac4f65"
Dec 08 17:46:21 crc kubenswrapper[5116]: I1208 17:46:21.007223 5116 status_manager.go:895] "Failed to get status for pod" podUID="f7dbc7e1ee9c187a863ef9b473fad27b" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="pods \"kube-apiserver-startup-monitor-crc\" is forbidden: User \"system:node:crc\" cannot get resource \"pods\" in API group \"\" in the namespace \"openshift-kube-apiserver\": no relationship found between node 'crc' and this object"
Dec 08 17:46:21 crc kubenswrapper[5116]: I1208 17:46:21.009667 5116 status_manager.go:895] "Failed to get status for pod" podUID="f7dbc7e1ee9c187a863ef9b473fad27b" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="pods \"kube-apiserver-startup-monitor-crc\" is forbidden: User \"system:node:crc\" cannot get resource \"pods\" in API group \"\" in the namespace \"openshift-kube-apiserver\": no relationship found between node 'crc' and this object"
Dec 08 17:46:21 crc kubenswrapper[5116]: I1208 17:46:21.024403 5116 scope.go:117] "RemoveContainer" containerID="987ad6f6b3d67873d50cfb78891e815b593e768bd470dba428de84b8bdac4f65"
Dec 08 17:46:21 crc kubenswrapper[5116]: E1208 17:46:21.024936 5116 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"987ad6f6b3d67873d50cfb78891e815b593e768bd470dba428de84b8bdac4f65\": container with ID starting with 987ad6f6b3d67873d50cfb78891e815b593e768bd470dba428de84b8bdac4f65 not found: ID does not exist" containerID="987ad6f6b3d67873d50cfb78891e815b593e768bd470dba428de84b8bdac4f65"
Dec 08 17:46:21 crc kubenswrapper[5116]: I1208 17:46:21.025028 5116 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"987ad6f6b3d67873d50cfb78891e815b593e768bd470dba428de84b8bdac4f65"} err="failed to get container status \"987ad6f6b3d67873d50cfb78891e815b593e768bd470dba428de84b8bdac4f65\": rpc error: code = NotFound desc = could not find container \"987ad6f6b3d67873d50cfb78891e815b593e768bd470dba428de84b8bdac4f65\": container with ID starting with 987ad6f6b3d67873d50cfb78891e815b593e768bd470dba428de84b8bdac4f65 not found: ID does not exist"
Dec 08 17:46:21 crc kubenswrapper[5116]: I1208 17:46:21.066599 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-service-ca-operator\"/\"service-ca-operator-config\""
Dec 08 17:46:21 crc kubenswrapper[5116]: I1208 17:46:21.108841 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-system-session\""
Dec 08 17:46:21 crc kubenswrapper[5116]: I1208 17:46:21.159650 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-apiserver\"/\"openshift-apiserver-sa-dockercfg-4zqgh\""
Dec 08 17:46:21 crc kubenswrapper[5116]: I1208 17:46:21.169113 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-machine-approver\"/\"machine-approver-config\""
Dec 08 17:46:21 crc kubenswrapper[5116]: I1208 17:46:21.179662 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-oauth-apiserver\"/\"oauth-apiserver-sa-dockercfg-qqw4z\""
Dec 08 17:46:21 crc kubenswrapper[5116]: I1208 17:46:21.195926 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-service-ca-operator\"/\"service-ca-operator-dockercfg-bjqfd\""
Dec 08 17:46:21 crc kubenswrapper[5116]: I1208 17:46:21.291846 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-config-operator\"/\"kube-root-ca.crt\""
Dec 08 17:46:21 crc kubenswrapper[5116]: I1208 17:46:21.355674 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-operator\"/\"iptables-alerter-script\""
Dec 08 17:46:21 crc kubenswrapper[5116]: I1208 17:46:21.381429 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver\"/\"image-import-ca\""
Dec 08 17:46:21 crc kubenswrapper[5116]: I1208 17:46:21.482928 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-dns\"/\"node-resolver-dockercfg-tk7bt\""
Dec 08 17:46:21 crc kubenswrapper[5116]: I1208 17:46:21.542354 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-api\"/\"kube-rbac-proxy\""
Dec 08 17:46:21 crc kubenswrapper[5116]: I1208 17:46:21.735568 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-api\"/\"control-plane-machine-set-operator-dockercfg-gnx66\""
Dec 08 17:46:21 crc kubenswrapper[5116]: I1208 17:46:21.844938 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"certified-operators-dockercfg-7cl8d\""
Dec 08 17:46:21 crc kubenswrapper[5116]: I1208 17:46:21.924177 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-config-operator\"/\"openshift-config-operator-dockercfg-sjn6s\""
Dec 08 17:46:21 crc kubenswrapper[5116]: I1208 17:46:21.927730 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-cluster-machine-approver\"/\"machine-approver-sa-dockercfg-wzhvk\""
Dec 08 17:46:21 crc kubenswrapper[5116]: I1208 17:46:21.949411 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-oauth-apiserver\"/\"serving-cert\""
Dec 08 17:46:22 crc kubenswrapper[5116]: I1208 17:46:22.052862 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication-operator\"/\"openshift-service-ca.crt\""
Dec 08 17:46:22 crc kubenswrapper[5116]: I1208 17:46:22.068656 5116 reflector.go:430] "Caches populated" logger="kubernetes.io/kubelet-serving" type="*v1.CertificateSigningRequest" reflector="k8s.io/client-go/tools/watch/informerwatcher.go:162"
Dec 08 17:46:22 crc kubenswrapper[5116]: I1208 17:46:22.094147 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-controller-manager-operator\"/\"openshift-controller-manager-operator-dockercfg-jcmfj\""
Dec 08 17:46:22 crc kubenswrapper[5116]: I1208 17:46:22.108285 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress\"/\"router-dockercfg-kw8fx\""
Dec 08 17:46:22 crc kubenswrapper[5116]: I1208 17:46:22.121716 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-operator\"/\"kube-root-ca.crt\""
Dec 08 17:46:22 crc kubenswrapper[5116]: I1208 17:46:22.127848 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-apiserver\"/\"etcd-client\""
Dec 08 17:46:22 crc kubenswrapper[5116]: I1208 17:46:22.158031 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-dns\"/\"dns-default-metrics-tls\""
Dec 08 17:46:22 crc kubenswrapper[5116]: I1208 17:46:22.172030 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"pprof-cert\""
Dec 08 17:46:22 crc kubenswrapper[5116]: I1208 17:46:22.269075 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"hostpath-provisioner\"/\"kube-root-ca.crt\""
Dec 08 17:46:22 crc kubenswrapper[5116]: I1208 17:46:22.273089 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-cluster-version\"/\"default-dockercfg-hqpm5\""
Dec 08 17:46:22 crc kubenswrapper[5116]: I1208 17:46:22.392459 5116
reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"multus-admission-controller-secret\"" Dec 08 17:46:22 crc kubenswrapper[5116]: I1208 17:46:22.414495 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-dns-operator\"/\"dns-operator-dockercfg-wbbsn\"" Dec 08 17:46:22 crc kubenswrapper[5116]: I1208 17:46:22.502859 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"metrics-daemon-secret\"" Dec 08 17:46:22 crc kubenswrapper[5116]: I1208 17:46:22.525598 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress-canary\"/\"default-dockercfg-9pgs7\"" Dec 08 17:46:22 crc kubenswrapper[5116]: I1208 17:46:22.555282 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver\"/\"trusted-ca-bundle\"" Dec 08 17:46:22 crc kubenswrapper[5116]: I1208 17:46:22.580756 5116 reflector.go:430] "Caches populated" type="*v1.Service" reflector="k8s.io/client-go/informers/factory.go:160" Dec 08 17:46:22 crc kubenswrapper[5116]: I1208 17:46:22.580852 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-config-operator\"/\"machine-config-operator-images\"" Dec 08 17:46:22 crc kubenswrapper[5116]: I1208 17:46:22.623867 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"olm-operator-serving-cert\"" Dec 08 17:46:22 crc kubenswrapper[5116]: I1208 17:46:22.785968 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-service-ca\"/\"kube-root-ca.crt\"" Dec 08 17:46:22 crc kubenswrapper[5116]: I1208 17:46:22.818099 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console-operator\"/\"openshift-service-ca.crt\"" Dec 08 17:46:22 crc kubenswrapper[5116]: I1208 17:46:22.930420 5116 
reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"node-bootstrapper-token\"" Dec 08 17:46:22 crc kubenswrapper[5116]: I1208 17:46:22.943670 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ovn-kubernetes\"/\"ovn-kubernetes-control-plane-dockercfg-nl8tp\"" Dec 08 17:46:22 crc kubenswrapper[5116]: I1208 17:46:22.981167 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-scheduler-operator\"/\"kube-scheduler-operator-serving-cert\"" Dec 08 17:46:23 crc kubenswrapper[5116]: I1208 17:46:23.154447 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"marketplace-operator-dockercfg-2cfkp\"" Dec 08 17:46:23 crc kubenswrapper[5116]: I1208 17:46:23.163772 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-route-controller-manager\"/\"kube-root-ca.crt\"" Dec 08 17:46:23 crc kubenswrapper[5116]: I1208 17:46:23.255698 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-image-registry\"/\"trusted-ca\"" Dec 08 17:46:23 crc kubenswrapper[5116]: I1208 17:46:23.264005 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-user-template-login\"" Dec 08 17:46:23 crc kubenswrapper[5116]: I1208 17:46:23.323590 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-marketplace\"/\"kube-root-ca.crt\"" Dec 08 17:46:23 crc kubenswrapper[5116]: I1208 17:46:23.324639 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver\"/\"etcd-serving-ca\"" Dec 08 17:46:23 crc kubenswrapper[5116]: I1208 17:46:23.416108 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console-operator\"/\"console-operator-config\"" Dec 08 17:46:23 crc 
kubenswrapper[5116]: I1208 17:46:23.543505 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication\"/\"v4-0-config-system-trusted-ca-bundle\"" Dec 08 17:46:23 crc kubenswrapper[5116]: I1208 17:46:23.626502 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-system-router-certs\"" Dec 08 17:46:23 crc kubenswrapper[5116]: I1208 17:46:23.728592 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"metrics-daemon-sa-dockercfg-t8n29\"" Dec 08 17:46:23 crc kubenswrapper[5116]: I1208 17:46:23.748848 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress-operator\"/\"metrics-tls\"" Dec 08 17:46:23 crc kubenswrapper[5116]: I1208 17:46:23.831878 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-operator\"/\"openshift-service-ca.crt\"" Dec 08 17:46:23 crc kubenswrapper[5116]: I1208 17:46:23.899123 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication-operator\"/\"authentication-operator-config\"" Dec 08 17:46:23 crc kubenswrapper[5116]: I1208 17:46:23.948804 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-cluster-version\"/\"cluster-version-operator-serving-cert\"" Dec 08 17:46:23 crc kubenswrapper[5116]: I1208 17:46:23.956647 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"openshift-service-ca.crt\"" Dec 08 17:46:23 crc kubenswrapper[5116]: I1208 17:46:23.992766 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-etcd-operator\"/\"etcd-operator-config\"" Dec 08 17:46:24 crc kubenswrapper[5116]: I1208 17:46:24.024598 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" 
reflector="object-\"openshift-marketplace\"/\"marketplace-trusted-ca\"" Dec 08 17:46:24 crc kubenswrapper[5116]: I1208 17:46:24.133070 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-machine-approver\"/\"openshift-service-ca.crt\"" Dec 08 17:46:24 crc kubenswrapper[5116]: I1208 17:46:24.141023 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-dns\"/\"openshift-service-ca.crt\"" Dec 08 17:46:24 crc kubenswrapper[5116]: I1208 17:46:24.180649 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress-canary\"/\"openshift-service-ca.crt\"" Dec 08 17:46:24 crc kubenswrapper[5116]: I1208 17:46:24.208082 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"machine-config-server-tls\"" Dec 08 17:46:24 crc kubenswrapper[5116]: I1208 17:46:24.272361 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-service-ca-operator\"/\"kube-root-ca.crt\"" Dec 08 17:46:24 crc kubenswrapper[5116]: I1208 17:46:24.305281 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"whereabouts-flatfile-config\"" Dec 08 17:46:24 crc kubenswrapper[5116]: I1208 17:46:24.337736 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress\"/\"router-stats-default\"" Dec 08 17:46:24 crc kubenswrapper[5116]: I1208 17:46:24.357650 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"default-cni-sysctl-allowlist\"" Dec 08 17:46:24 crc kubenswrapper[5116]: I1208 17:46:24.358224 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-scheduler-operator\"/\"openshift-kube-scheduler-operator-dockercfg-2wbn2\"" Dec 08 17:46:24 crc kubenswrapper[5116]: I1208 17:46:24.391479 5116 reflector.go:430] 
"Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-dns\"/\"kube-root-ca.crt\"" Dec 08 17:46:24 crc kubenswrapper[5116]: I1208 17:46:24.465033 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-storage-version-migrator-operator\"/\"serving-cert\"" Dec 08 17:46:24 crc kubenswrapper[5116]: I1208 17:46:24.493557 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-cluster-machine-approver\"/\"machine-approver-tls\"" Dec 08 17:46:24 crc kubenswrapper[5116]: I1208 17:46:24.527636 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"cni-copy-resources\"" Dec 08 17:46:24 crc kubenswrapper[5116]: I1208 17:46:24.533534 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-image-registry\"/\"openshift-service-ca.crt\"" Dec 08 17:46:24 crc kubenswrapper[5116]: I1208 17:46:24.540118 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-controller-manager-operator\"/\"openshift-controller-manager-operator-serving-cert\"" Dec 08 17:46:24 crc kubenswrapper[5116]: I1208 17:46:24.544322 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-oauth-apiserver\"/\"openshift-service-ca.crt\"" Dec 08 17:46:24 crc kubenswrapper[5116]: I1208 17:46:24.560177 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-etcd-operator\"/\"etcd-client\"" Dec 08 17:46:24 crc kubenswrapper[5116]: I1208 17:46:24.570577 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ovn-kubernetes\"/\"ovn-node-metrics-cert\"" Dec 08 17:46:24 crc kubenswrapper[5116]: I1208 17:46:24.630821 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-user-template-error\"" Dec 08 17:46:24 crc kubenswrapper[5116]: I1208 
17:46:24.808075 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-user-template-provider-selection\"" Dec 08 17:46:24 crc kubenswrapper[5116]: I1208 17:46:24.884346 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-console\"/\"console-oauth-config\"" Dec 08 17:46:24 crc kubenswrapper[5116]: I1208 17:46:24.941979 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-etcd-operator\"/\"openshift-service-ca.crt\"" Dec 08 17:46:24 crc kubenswrapper[5116]: I1208 17:46:24.975279 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-scheduler-operator\"/\"kube-root-ca.crt\"" Dec 08 17:46:24 crc kubenswrapper[5116]: I1208 17:46:24.981098 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-cluster-samples-operator\"/\"cluster-samples-operator-dockercfg-jmhxf\"" Dec 08 17:46:25 crc kubenswrapper[5116]: I1208 17:46:25.106435 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication\"/\"v4-0-config-system-cliconfig\"" Dec 08 17:46:25 crc kubenswrapper[5116]: I1208 17:46:25.278509 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-console-operator\"/\"serving-cert\"" Dec 08 17:46:25 crc kubenswrapper[5116]: I1208 17:46:25.293680 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"machine-config-operator-dockercfg-sw6nc\"" Dec 08 17:46:25 crc kubenswrapper[5116]: I1208 17:46:25.336445 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-controller-manager-operator\"/\"kube-root-ca.crt\"" Dec 08 17:46:25 crc kubenswrapper[5116]: I1208 17:46:25.348903 5116 reflector.go:430] "Caches populated" type="*v1.Secret" 
reflector="object-\"openshift-image-registry\"/\"registry-dockercfg-6w67b\"" Dec 08 17:46:25 crc kubenswrapper[5116]: I1208 17:46:25.446313 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager-operator\"/\"kube-root-ca.crt\"" Dec 08 17:46:25 crc kubenswrapper[5116]: I1208 17:46:25.471993 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver-operator\"/\"kube-root-ca.crt\"" Dec 08 17:46:25 crc kubenswrapper[5116]: I1208 17:46:25.486713 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"default-dockercfg-g6kgg\"" Dec 08 17:46:25 crc kubenswrapper[5116]: I1208 17:46:25.582665 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver\"/\"audit-1\"" Dec 08 17:46:25 crc kubenswrapper[5116]: I1208 17:46:25.584311 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-controller-manager\"/\"serving-cert\"" Dec 08 17:46:25 crc kubenswrapper[5116]: I1208 17:46:25.652045 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-apiserver-operator\"/\"kube-apiserver-operator-dockercfg-bf7fj\"" Dec 08 17:46:25 crc kubenswrapper[5116]: I1208 17:46:25.770662 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress\"/\"openshift-service-ca.crt\"" Dec 08 17:46:25 crc kubenswrapper[5116]: I1208 17:46:25.771707 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-route-controller-manager\"/\"serving-cert\"" Dec 08 17:46:25 crc kubenswrapper[5116]: I1208 17:46:25.806908 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication\"/\"kube-root-ca.crt\"" Dec 08 17:46:25 crc kubenswrapper[5116]: I1208 17:46:25.844537 5116 reflector.go:430] "Caches populated" type="*v1.Secret" 
reflector="object-\"openshift-marketplace\"/\"redhat-marketplace-dockercfg-gg4w7\"" Dec 08 17:46:25 crc kubenswrapper[5116]: I1208 17:46:25.876461 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-oauth-apiserver\"/\"etcd-client\"" Dec 08 17:46:25 crc kubenswrapper[5116]: I1208 17:46:25.987511 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"multus-daemon-config\"" Dec 08 17:46:25 crc kubenswrapper[5116]: I1208 17:46:25.987878 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-service-ca\"/\"service-ca-dockercfg-bgxvm\"" Dec 08 17:46:25 crc kubenswrapper[5116]: I1208 17:46:25.993891 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-service-ca\"/\"openshift-service-ca.crt\"" Dec 08 17:46:26 crc kubenswrapper[5116]: I1208 17:46:26.065215 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-console\"/\"console-serving-cert\"" Dec 08 17:46:26 crc kubenswrapper[5116]: I1208 17:46:26.168771 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"image-registry-operator-tls\"" Dec 08 17:46:26 crc kubenswrapper[5116]: I1208 17:46:26.171526 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress\"/\"service-ca-bundle\"" Dec 08 17:46:26 crc kubenswrapper[5116]: I1208 17:46:26.172843 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"image-registry-tls\"" Dec 08 17:46:26 crc kubenswrapper[5116]: I1208 17:46:26.369861 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-controller-manager-operator\"/\"kube-controller-manager-operator-dockercfg-tnfx9\"" Dec 08 17:46:26 crc kubenswrapper[5116]: I1208 17:46:26.404086 5116 reflector.go:430] "Caches populated" 
type="*v1.ConfigMap" reflector="object-\"openshift-ingress\"/\"kube-root-ca.crt\"" Dec 08 17:46:26 crc kubenswrapper[5116]: I1208 17:46:26.434465 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-dns\"/\"dns-default\"" Dec 08 17:46:26 crc kubenswrapper[5116]: I1208 17:46:26.469760 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-cluster-samples-operator\"/\"samples-operator-tls\"" Dec 08 17:46:26 crc kubenswrapper[5116]: I1208 17:46:26.541479 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"package-server-manager-serving-cert\"" Dec 08 17:46:26 crc kubenswrapper[5116]: I1208 17:46:26.573982 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ovn-kubernetes\"/\"kube-root-ca.crt\"" Dec 08 17:46:26 crc kubenswrapper[5116]: I1208 17:46:26.601459 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-console-operator\"/\"console-operator-dockercfg-kl6m8\"" Dec 08 17:46:26 crc kubenswrapper[5116]: I1208 17:46:26.609992 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager\"/\"config\"" Dec 08 17:46:26 crc kubenswrapper[5116]: I1208 17:46:26.782738 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-node-identity\"/\"openshift-service-ca.crt\"" Dec 08 17:46:26 crc kubenswrapper[5116]: I1208 17:46:26.866014 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-dns\"/\"dns-dockercfg-kpvmz\"" Dec 08 17:46:26 crc kubenswrapper[5116]: I1208 17:46:26.972494 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"multus-ac-dockercfg-gj7jx\"" Dec 08 17:46:26 crc kubenswrapper[5116]: I1208 17:46:26.983331 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" 
reflector="object-\"openshift-image-registry\"/\"kube-root-ca.crt\"" Dec 08 17:46:27 crc kubenswrapper[5116]: I1208 17:46:27.028914 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-storage-version-migrator\"/\"kube-root-ca.crt\"" Dec 08 17:46:27 crc kubenswrapper[5116]: I1208 17:46:27.032109 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-console\"/\"console-dockercfg-8dkm8\"" Dec 08 17:46:27 crc kubenswrapper[5116]: I1208 17:46:27.087186 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver\"/\"openshift-service-ca.crt\"" Dec 08 17:46:27 crc kubenswrapper[5116]: I1208 17:46:27.132449 5116 reflector.go:430] "Caches populated" type="*v1.Node" reflector="k8s.io/client-go/informers/factory.go:160" Dec 08 17:46:27 crc kubenswrapper[5116]: I1208 17:46:27.159550 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-etcd-operator\"/\"etcd-operator-dockercfg-4vdnc\"" Dec 08 17:46:27 crc kubenswrapper[5116]: I1208 17:46:27.240690 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"oauth-openshift-dockercfg-d2bf2\"" Dec 08 17:46:27 crc kubenswrapper[5116]: I1208 17:46:27.270148 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-oauth-apiserver\"/\"trusted-ca-bundle\"" Dec 08 17:46:27 crc kubenswrapper[5116]: I1208 17:46:27.273567 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager-operator\"/\"openshift-controller-manager-operator-config\"" Dec 08 17:46:27 crc kubenswrapper[5116]: I1208 17:46:27.402304 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-api\"/\"machine-api-operator-dockercfg-6n5ln\"" Dec 08 17:46:27 crc kubenswrapper[5116]: I1208 17:46:27.464167 5116 reflector.go:430] "Caches 
populated" type="*v1.Secret" reflector="object-\"openshift-console\"/\"default-dockercfg-mdwwj\"" Dec 08 17:46:27 crc kubenswrapper[5116]: I1208 17:46:27.480815 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-route-controller-manager\"/\"route-controller-manager-sa-dockercfg-mmcpt\"" Dec 08 17:46:27 crc kubenswrapper[5116]: I1208 17:46:27.523471 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-api\"/\"machine-api-operator-tls\"" Dec 08 17:46:27 crc kubenswrapper[5116]: I1208 17:46:27.599419 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver-operator\"/\"openshift-apiserver-operator-config\"" Dec 08 17:46:27 crc kubenswrapper[5116]: I1208 17:46:27.621345 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication-operator\"/\"service-ca-bundle\"" Dec 08 17:46:27 crc kubenswrapper[5116]: I1208 17:46:27.894779 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-node-identity\"/\"env-overrides\"" Dec 08 17:46:27 crc kubenswrapper[5116]: I1208 17:46:27.982211 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"node-ca-dockercfg-tjs74\"" Dec 08 17:46:28 crc kubenswrapper[5116]: I1208 17:46:28.015664 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-config-operator\"/\"kube-rbac-proxy\"" Dec 08 17:46:28 crc kubenswrapper[5116]: I1208 17:46:28.091314 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console\"/\"service-ca\"" Dec 08 17:46:28 crc kubenswrapper[5116]: I1208 17:46:28.113189 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-apiserver-operator\"/\"openshift-apiserver-operator-serving-cert\"" Dec 08 17:46:28 crc kubenswrapper[5116]: I1208 
17:46:28.263811 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-storage-version-migrator-operator\"/\"openshift-service-ca.crt\"" Dec 08 17:46:28 crc kubenswrapper[5116]: I1208 17:46:28.435591 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-etcd-operator\"/\"etcd-ca-bundle\"" Dec 08 17:46:28 crc kubenswrapper[5116]: I1208 17:46:28.465953 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication\"/\"audit\"" Dec 08 17:46:28 crc kubenswrapper[5116]: I1208 17:46:28.467064 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-config-operator\"/\"openshift-service-ca.crt\"" Dec 08 17:46:28 crc kubenswrapper[5116]: I1208 17:46:28.691519 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"proxy-tls\"" Dec 08 17:46:28 crc kubenswrapper[5116]: I1208 17:46:28.704049 5116 ???:1] "http: TLS handshake error from 192.168.126.11:58266: no serving certificate available for the kubelet" Dec 08 17:46:28 crc kubenswrapper[5116]: I1208 17:46:28.787743 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console-operator\"/\"kube-root-ca.crt\"" Dec 08 17:46:28 crc kubenswrapper[5116]: I1208 17:46:28.806668 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-oauth-apiserver\"/\"etcd-serving-ca\"" Dec 08 17:46:29 crc kubenswrapper[5116]: I1208 17:46:29.118137 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-system-ocp-branding-template\"" Dec 08 17:46:29 crc kubenswrapper[5116]: I1208 17:46:29.181204 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-storage-version-migrator\"/\"kube-storage-version-migrator-sa-dockercfg-kknhg\"" Dec 08 17:46:29 crc 
kubenswrapper[5116]: I1208 17:46:29.479162 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"machine-config-controller-dockercfg-xnj77\"" Dec 08 17:46:29 crc kubenswrapper[5116]: I1208 17:46:29.548896 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-version\"/\"kube-root-ca.crt\"" Dec 08 17:46:29 crc kubenswrapper[5116]: I1208 17:46:29.557831 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-apiserver\"/\"serving-cert\"" Dec 08 17:46:29 crc kubenswrapper[5116]: I1208 17:46:29.596290 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-api\"/\"control-plane-machine-set-operator-tls\"" Dec 08 17:46:29 crc kubenswrapper[5116]: I1208 17:46:29.876598 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-controller-manager\"/\"openshift-controller-manager-sa-dockercfg-djmfg\"" Dec 08 17:46:30 crc kubenswrapper[5116]: I1208 17:46:30.075824 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication-operator\"/\"authentication-operator-dockercfg-6tbpn\"" Dec 08 17:46:33 crc kubenswrapper[5116]: I1208 17:46:33.336679 5116 patch_prober.go:28] interesting pod/machine-config-daemon-frh5r container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 08 17:46:33 crc kubenswrapper[5116]: I1208 17:46:33.336762 5116 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-frh5r" podUID="f2e88345-fa91-4bb3-bd9d-a89a8293bffe" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 
08 17:46:33 crc kubenswrapper[5116]: I1208 17:46:33.679981 5116 scope.go:117] "RemoveContainer" containerID="cf948e9cc3862e68eb32b43b793ee1ec04dbba98b6d3026f59d4f0a879f2e4eb" Dec 08 17:46:34 crc kubenswrapper[5116]: I1208 17:46:34.138052 5116 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-controller-manager_controller-manager-766495d899-4wfjn_1e2dc1a4-c295-4f2b-b167-fdb40f1e6b0d/controller-manager/1.log" Dec 08 17:46:34 crc kubenswrapper[5116]: I1208 17:46:34.138703 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-766495d899-4wfjn" event={"ID":"1e2dc1a4-c295-4f2b-b167-fdb40f1e6b0d","Type":"ContainerStarted","Data":"7276058d423708e295fed1e8797d431baefdfb8452c187aaed312c4520d8aec4"} Dec 08 17:46:34 crc kubenswrapper[5116]: I1208 17:46:34.139276 5116 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-controller-manager/controller-manager-766495d899-4wfjn" Dec 08 17:46:34 crc kubenswrapper[5116]: I1208 17:46:34.760101 5116 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-766495d899-4wfjn" Dec 08 17:46:45 crc kubenswrapper[5116]: I1208 17:46:45.541808 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console\"/\"console-config\"" Dec 08 17:46:46 crc kubenswrapper[5116]: I1208 17:46:46.145534 5116 ???:1] "http: TLS handshake error from 192.168.126.11:34336: no serving certificate available for the kubelet" Dec 08 17:46:53 crc kubenswrapper[5116]: I1208 17:46:53.682226 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication\"/\"v4-0-config-system-service-ca\"" Dec 08 17:47:00 crc kubenswrapper[5116]: I1208 17:47:00.544207 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-storage-version-migrator-operator\"/\"kube-root-ca.crt\"" Dec 08 17:47:03 
crc kubenswrapper[5116]: I1208 17:47:03.335877 5116 patch_prober.go:28] interesting pod/machine-config-daemon-frh5r container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 08 17:47:03 crc kubenswrapper[5116]: I1208 17:47:03.336474 5116 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-frh5r" podUID="f2e88345-fa91-4bb3-bd9d-a89a8293bffe" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 08 17:47:03 crc kubenswrapper[5116]: I1208 17:47:03.336555 5116 kubelet.go:2658] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-frh5r" Dec 08 17:47:03 crc kubenswrapper[5116]: I1208 17:47:03.337362 5116 kuberuntime_manager.go:1107] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"013afc9b2137a670c234a5ed56a7fe32904cb1f1413dc085edcb58fd24608faa"} pod="openshift-machine-config-operator/machine-config-daemon-frh5r" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 08 17:47:03 crc kubenswrapper[5116]: I1208 17:47:03.337478 5116 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-frh5r" podUID="f2e88345-fa91-4bb3-bd9d-a89a8293bffe" containerName="machine-config-daemon" containerID="cri-o://013afc9b2137a670c234a5ed56a7fe32904cb1f1413dc085edcb58fd24608faa" gracePeriod=600 Dec 08 17:47:04 crc kubenswrapper[5116]: I1208 17:47:04.333431 5116 generic.go:358] "Generic (PLEG): container finished" podID="f2e88345-fa91-4bb3-bd9d-a89a8293bffe" 
containerID="013afc9b2137a670c234a5ed56a7fe32904cb1f1413dc085edcb58fd24608faa" exitCode=0 Dec 08 17:47:04 crc kubenswrapper[5116]: I1208 17:47:04.333580 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-frh5r" event={"ID":"f2e88345-fa91-4bb3-bd9d-a89a8293bffe","Type":"ContainerDied","Data":"013afc9b2137a670c234a5ed56a7fe32904cb1f1413dc085edcb58fd24608faa"} Dec 08 17:47:04 crc kubenswrapper[5116]: I1208 17:47:04.334344 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-frh5r" event={"ID":"f2e88345-fa91-4bb3-bd9d-a89a8293bffe","Type":"ContainerStarted","Data":"1a91b9acb99507a3f905fce585b6049ec73b8aafa949cf4edc05cfa9067a094d"} Dec 08 17:47:04 crc kubenswrapper[5116]: I1208 17:47:04.550782 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication-operator\"/\"serving-cert\"" Dec 08 17:47:10 crc kubenswrapper[5116]: I1208 17:47:10.796932 5116 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-controller-manager_controller-manager-766495d899-4wfjn_1e2dc1a4-c295-4f2b-b167-fdb40f1e6b0d/controller-manager/1.log" Dec 08 17:47:10 crc kubenswrapper[5116]: I1208 17:47:10.797922 5116 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-controller-manager_controller-manager-766495d899-4wfjn_1e2dc1a4-c295-4f2b-b167-fdb40f1e6b0d/controller-manager/1.log" Dec 08 17:47:10 crc kubenswrapper[5116]: I1208 17:47:10.881993 5116 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_9f0bc7fcb0822a2c13eb2d22cd8c0641/kube-controller-manager/0.log" Dec 08 17:47:10 crc kubenswrapper[5116]: I1208 17:47:10.923003 5116 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_9f0bc7fcb0822a2c13eb2d22cd8c0641/kube-controller-manager/0.log" Dec 08 17:47:49 crc 
kubenswrapper[5116]: I1208 17:47:49.363660 5116 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Dec 08 17:48:14 crc kubenswrapper[5116]: I1208 17:48:14.501470 5116 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-bnq4b"] Dec 08 17:48:14 crc kubenswrapper[5116]: I1208 17:48:14.502537 5116 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-bnq4b" podUID="46bc2c2e-fed2-4cf1-afc1-2fb750553bc5" containerName="registry-server" containerID="cri-o://215fefac8458ae8270ff609404abd09f83a6bbb6ff9566788cc0593c35b8a6a5" gracePeriod=30 Dec 08 17:48:14 crc kubenswrapper[5116]: I1208 17:48:14.509508 5116 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-nc4fk"] Dec 08 17:48:14 crc kubenswrapper[5116]: I1208 17:48:14.509895 5116 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/community-operators-nc4fk" podUID="b15bd0e2-4143-436c-8dc2-0fc2e33cef62" containerName="registry-server" containerID="cri-o://583a5e3fce9b33b3dc2f2446f5b35c3e3da2fad48429f93c4fa340739dcc6f82" gracePeriod=30 Dec 08 17:48:14 crc kubenswrapper[5116]: I1208 17:48:14.525557 5116 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-547dbd544d-w2582"] Dec 08 17:48:14 crc kubenswrapper[5116]: I1208 17:48:14.525898 5116 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/marketplace-operator-547dbd544d-w2582" podUID="5471dfd3-e36e-405a-a517-2c1e2bc10e62" containerName="marketplace-operator" containerID="cri-o://0d7964f58f360bfe2dc6d0c956acb14a6d893157d00764fca012749e5c5dd7ba" gracePeriod=30 Dec 08 17:48:14 crc kubenswrapper[5116]: I1208 17:48:14.533302 5116 kubelet.go:2553] "SyncLoop DELETE" source="api" 
pods=["openshift-marketplace/redhat-marketplace-4pqqp"] Dec 08 17:48:14 crc kubenswrapper[5116]: I1208 17:48:14.537306 5116 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-4pqqp" podUID="ab873de1-8a57-4411-a552-1567537bdc67" containerName="registry-server" containerID="cri-o://1d87b3dfeb1c61b3b1e8332b268402ea1366d39a04a2d3c5c79986b2a82844d7" gracePeriod=30 Dec 08 17:48:14 crc kubenswrapper[5116]: I1208 17:48:14.537442 5116 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-7j2rd"] Dec 08 17:48:14 crc kubenswrapper[5116]: I1208 17:48:14.537714 5116 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-7j2rd" podUID="088af58f-5679-42e6-9595-945ee162f862" containerName="registry-server" containerID="cri-o://d1d9eaee655c81adb5ab40ccf7e4a7aaa9a5293ffd7d5cdfbdf7da45c738cdf1" gracePeriod=30 Dec 08 17:48:14 crc kubenswrapper[5116]: I1208 17:48:14.549363 5116 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-547dbd544d-hzv5p"] Dec 08 17:48:14 crc kubenswrapper[5116]: I1208 17:48:14.550118 5116 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="e28dd1b1-e2ad-4e02-a1cc-90f8e0f93a6c" containerName="installer" Dec 08 17:48:14 crc kubenswrapper[5116]: I1208 17:48:14.550153 5116 state_mem.go:107] "Deleted CPUSet assignment" podUID="e28dd1b1-e2ad-4e02-a1cc-90f8e0f93a6c" containerName="installer" Dec 08 17:48:14 crc kubenswrapper[5116]: I1208 17:48:14.550164 5116 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="f7dbc7e1ee9c187a863ef9b473fad27b" containerName="startup-monitor" Dec 08 17:48:14 crc kubenswrapper[5116]: I1208 17:48:14.550172 5116 state_mem.go:107] "Deleted CPUSet assignment" podUID="f7dbc7e1ee9c187a863ef9b473fad27b" containerName="startup-monitor" Dec 08 17:48:14 crc kubenswrapper[5116]: I1208 
17:48:14.550332 5116 memory_manager.go:356] "RemoveStaleState removing state" podUID="f7dbc7e1ee9c187a863ef9b473fad27b" containerName="startup-monitor" Dec 08 17:48:14 crc kubenswrapper[5116]: I1208 17:48:14.550360 5116 memory_manager.go:356] "RemoveStaleState removing state" podUID="e28dd1b1-e2ad-4e02-a1cc-90f8e0f93a6c" containerName="installer" Dec 08 17:48:14 crc kubenswrapper[5116]: I1208 17:48:14.576103 5116 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-547dbd544d-hzv5p"] Dec 08 17:48:14 crc kubenswrapper[5116]: I1208 17:48:14.576404 5116 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-547dbd544d-hzv5p" Dec 08 17:48:14 crc kubenswrapper[5116]: I1208 17:48:14.663272 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5fsd8\" (UniqueName: \"kubernetes.io/projected/f4282342-727a-4e77-9202-744186310c82-kube-api-access-5fsd8\") pod \"marketplace-operator-547dbd544d-hzv5p\" (UID: \"f4282342-727a-4e77-9202-744186310c82\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-hzv5p" Dec 08 17:48:14 crc kubenswrapper[5116]: I1208 17:48:14.663385 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/f4282342-727a-4e77-9202-744186310c82-marketplace-operator-metrics\") pod \"marketplace-operator-547dbd544d-hzv5p\" (UID: \"f4282342-727a-4e77-9202-744186310c82\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-hzv5p" Dec 08 17:48:14 crc kubenswrapper[5116]: I1208 17:48:14.663457 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/f4282342-727a-4e77-9202-744186310c82-marketplace-trusted-ca\") pod \"marketplace-operator-547dbd544d-hzv5p\" (UID: 
\"f4282342-727a-4e77-9202-744186310c82\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-hzv5p" Dec 08 17:48:14 crc kubenswrapper[5116]: I1208 17:48:14.663485 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/f4282342-727a-4e77-9202-744186310c82-tmp\") pod \"marketplace-operator-547dbd544d-hzv5p\" (UID: \"f4282342-727a-4e77-9202-744186310c82\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-hzv5p" Dec 08 17:48:14 crc kubenswrapper[5116]: I1208 17:48:14.764312 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/f4282342-727a-4e77-9202-744186310c82-marketplace-trusted-ca\") pod \"marketplace-operator-547dbd544d-hzv5p\" (UID: \"f4282342-727a-4e77-9202-744186310c82\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-hzv5p" Dec 08 17:48:14 crc kubenswrapper[5116]: I1208 17:48:14.764370 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/f4282342-727a-4e77-9202-744186310c82-tmp\") pod \"marketplace-operator-547dbd544d-hzv5p\" (UID: \"f4282342-727a-4e77-9202-744186310c82\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-hzv5p" Dec 08 17:48:14 crc kubenswrapper[5116]: I1208 17:48:14.764430 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-5fsd8\" (UniqueName: \"kubernetes.io/projected/f4282342-727a-4e77-9202-744186310c82-kube-api-access-5fsd8\") pod \"marketplace-operator-547dbd544d-hzv5p\" (UID: \"f4282342-727a-4e77-9202-744186310c82\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-hzv5p" Dec 08 17:48:14 crc kubenswrapper[5116]: I1208 17:48:14.764477 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: 
\"kubernetes.io/secret/f4282342-727a-4e77-9202-744186310c82-marketplace-operator-metrics\") pod \"marketplace-operator-547dbd544d-hzv5p\" (UID: \"f4282342-727a-4e77-9202-744186310c82\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-hzv5p" Dec 08 17:48:14 crc kubenswrapper[5116]: I1208 17:48:14.765459 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/f4282342-727a-4e77-9202-744186310c82-tmp\") pod \"marketplace-operator-547dbd544d-hzv5p\" (UID: \"f4282342-727a-4e77-9202-744186310c82\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-hzv5p" Dec 08 17:48:14 crc kubenswrapper[5116]: I1208 17:48:14.766878 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/f4282342-727a-4e77-9202-744186310c82-marketplace-trusted-ca\") pod \"marketplace-operator-547dbd544d-hzv5p\" (UID: \"f4282342-727a-4e77-9202-744186310c82\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-hzv5p" Dec 08 17:48:14 crc kubenswrapper[5116]: I1208 17:48:14.785950 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/f4282342-727a-4e77-9202-744186310c82-marketplace-operator-metrics\") pod \"marketplace-operator-547dbd544d-hzv5p\" (UID: \"f4282342-727a-4e77-9202-744186310c82\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-hzv5p" Dec 08 17:48:14 crc kubenswrapper[5116]: I1208 17:48:14.792445 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-5fsd8\" (UniqueName: \"kubernetes.io/projected/f4282342-727a-4e77-9202-744186310c82-kube-api-access-5fsd8\") pod \"marketplace-operator-547dbd544d-hzv5p\" (UID: \"f4282342-727a-4e77-9202-744186310c82\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-hzv5p" Dec 08 17:48:14 crc kubenswrapper[5116]: I1208 17:48:14.906674 5116 
generic.go:358] "Generic (PLEG): container finished" podID="088af58f-5679-42e6-9595-945ee162f862" containerID="d1d9eaee655c81adb5ab40ccf7e4a7aaa9a5293ffd7d5cdfbdf7da45c738cdf1" exitCode=0 Dec 08 17:48:14 crc kubenswrapper[5116]: I1208 17:48:14.906848 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7j2rd" event={"ID":"088af58f-5679-42e6-9595-945ee162f862","Type":"ContainerDied","Data":"d1d9eaee655c81adb5ab40ccf7e4a7aaa9a5293ffd7d5cdfbdf7da45c738cdf1"} Dec 08 17:48:14 crc kubenswrapper[5116]: I1208 17:48:14.955353 5116 generic.go:358] "Generic (PLEG): container finished" podID="ab873de1-8a57-4411-a552-1567537bdc67" containerID="1d87b3dfeb1c61b3b1e8332b268402ea1366d39a04a2d3c5c79986b2a82844d7" exitCode=0 Dec 08 17:48:14 crc kubenswrapper[5116]: I1208 17:48:14.955549 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4pqqp" event={"ID":"ab873de1-8a57-4411-a552-1567537bdc67","Type":"ContainerDied","Data":"1d87b3dfeb1c61b3b1e8332b268402ea1366d39a04a2d3c5c79986b2a82844d7"} Dec 08 17:48:14 crc kubenswrapper[5116]: I1208 17:48:14.974185 5116 generic.go:358] "Generic (PLEG): container finished" podID="5471dfd3-e36e-405a-a517-2c1e2bc10e62" containerID="0d7964f58f360bfe2dc6d0c956acb14a6d893157d00764fca012749e5c5dd7ba" exitCode=0 Dec 08 17:48:14 crc kubenswrapper[5116]: I1208 17:48:14.974420 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-547dbd544d-w2582" event={"ID":"5471dfd3-e36e-405a-a517-2c1e2bc10e62","Type":"ContainerDied","Data":"0d7964f58f360bfe2dc6d0c956acb14a6d893157d00764fca012749e5c5dd7ba"} Dec 08 17:48:14 crc kubenswrapper[5116]: I1208 17:48:14.987886 5116 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-547dbd544d-hzv5p" Dec 08 17:48:14 crc kubenswrapper[5116]: I1208 17:48:14.997557 5116 generic.go:358] "Generic (PLEG): container finished" podID="46bc2c2e-fed2-4cf1-afc1-2fb750553bc5" containerID="215fefac8458ae8270ff609404abd09f83a6bbb6ff9566788cc0593c35b8a6a5" exitCode=0 Dec 08 17:48:14 crc kubenswrapper[5116]: I1208 17:48:14.997708 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bnq4b" event={"ID":"46bc2c2e-fed2-4cf1-afc1-2fb750553bc5","Type":"ContainerDied","Data":"215fefac8458ae8270ff609404abd09f83a6bbb6ff9566788cc0593c35b8a6a5"} Dec 08 17:48:14 crc kubenswrapper[5116]: I1208 17:48:14.997745 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bnq4b" event={"ID":"46bc2c2e-fed2-4cf1-afc1-2fb750553bc5","Type":"ContainerDied","Data":"ed9f93f8c2ede5200d28f602ca02ae5362c701e12611cae4369015a2d367caab"} Dec 08 17:48:14 crc kubenswrapper[5116]: I1208 17:48:14.997758 5116 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ed9f93f8c2ede5200d28f602ca02ae5362c701e12611cae4369015a2d367caab" Dec 08 17:48:14 crc kubenswrapper[5116]: I1208 17:48:14.998983 5116 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-bnq4b" Dec 08 17:48:15 crc kubenswrapper[5116]: I1208 17:48:15.000612 5116 generic.go:358] "Generic (PLEG): container finished" podID="b15bd0e2-4143-436c-8dc2-0fc2e33cef62" containerID="583a5e3fce9b33b3dc2f2446f5b35c3e3da2fad48429f93c4fa340739dcc6f82" exitCode=0 Dec 08 17:48:15 crc kubenswrapper[5116]: I1208 17:48:15.000774 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-nc4fk" event={"ID":"b15bd0e2-4143-436c-8dc2-0fc2e33cef62","Type":"ContainerDied","Data":"583a5e3fce9b33b3dc2f2446f5b35c3e3da2fad48429f93c4fa340739dcc6f82"} Dec 08 17:48:15 crc kubenswrapper[5116]: I1208 17:48:15.031762 5116 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-7j2rd" Dec 08 17:48:15 crc kubenswrapper[5116]: I1208 17:48:15.039679 5116 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-547dbd544d-w2582" Dec 08 17:48:15 crc kubenswrapper[5116]: I1208 17:48:15.045480 5116 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-nc4fk" Dec 08 17:48:15 crc kubenswrapper[5116]: I1208 17:48:15.060772 5116 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-4pqqp" Dec 08 17:48:15 crc kubenswrapper[5116]: I1208 17:48:15.172140 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t7cqj\" (UniqueName: \"kubernetes.io/projected/46bc2c2e-fed2-4cf1-afc1-2fb750553bc5-kube-api-access-t7cqj\") pod \"46bc2c2e-fed2-4cf1-afc1-2fb750553bc5\" (UID: \"46bc2c2e-fed2-4cf1-afc1-2fb750553bc5\") " Dec 08 17:48:15 crc kubenswrapper[5116]: I1208 17:48:15.172289 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ltrx4\" (UniqueName: \"kubernetes.io/projected/5471dfd3-e36e-405a-a517-2c1e2bc10e62-kube-api-access-ltrx4\") pod \"5471dfd3-e36e-405a-a517-2c1e2bc10e62\" (UID: \"5471dfd3-e36e-405a-a517-2c1e2bc10e62\") " Dec 08 17:48:15 crc kubenswrapper[5116]: I1208 17:48:15.172340 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ab873de1-8a57-4411-a552-1567537bdc67-catalog-content\") pod \"ab873de1-8a57-4411-a552-1567537bdc67\" (UID: \"ab873de1-8a57-4411-a552-1567537bdc67\") " Dec 08 17:48:15 crc kubenswrapper[5116]: I1208 17:48:15.172366 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/088af58f-5679-42e6-9595-945ee162f862-utilities\") pod \"088af58f-5679-42e6-9595-945ee162f862\" (UID: \"088af58f-5679-42e6-9595-945ee162f862\") " Dec 08 17:48:15 crc kubenswrapper[5116]: I1208 17:48:15.172412 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/5471dfd3-e36e-405a-a517-2c1e2bc10e62-tmp\") pod \"5471dfd3-e36e-405a-a517-2c1e2bc10e62\" (UID: \"5471dfd3-e36e-405a-a517-2c1e2bc10e62\") " Dec 08 17:48:15 crc kubenswrapper[5116]: I1208 17:48:15.172448 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume 
\"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b15bd0e2-4143-436c-8dc2-0fc2e33cef62-utilities\") pod \"b15bd0e2-4143-436c-8dc2-0fc2e33cef62\" (UID: \"b15bd0e2-4143-436c-8dc2-0fc2e33cef62\") " Dec 08 17:48:15 crc kubenswrapper[5116]: I1208 17:48:15.172475 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b15bd0e2-4143-436c-8dc2-0fc2e33cef62-catalog-content\") pod \"b15bd0e2-4143-436c-8dc2-0fc2e33cef62\" (UID: \"b15bd0e2-4143-436c-8dc2-0fc2e33cef62\") " Dec 08 17:48:15 crc kubenswrapper[5116]: I1208 17:48:15.172498 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m2wct\" (UniqueName: \"kubernetes.io/projected/ab873de1-8a57-4411-a552-1567537bdc67-kube-api-access-m2wct\") pod \"ab873de1-8a57-4411-a552-1567537bdc67\" (UID: \"ab873de1-8a57-4411-a552-1567537bdc67\") " Dec 08 17:48:15 crc kubenswrapper[5116]: I1208 17:48:15.172590 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ab873de1-8a57-4411-a552-1567537bdc67-utilities\") pod \"ab873de1-8a57-4411-a552-1567537bdc67\" (UID: \"ab873de1-8a57-4411-a552-1567537bdc67\") " Dec 08 17:48:15 crc kubenswrapper[5116]: I1208 17:48:15.172625 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hspkc\" (UniqueName: \"kubernetes.io/projected/b15bd0e2-4143-436c-8dc2-0fc2e33cef62-kube-api-access-hspkc\") pod \"b15bd0e2-4143-436c-8dc2-0fc2e33cef62\" (UID: \"b15bd0e2-4143-436c-8dc2-0fc2e33cef62\") " Dec 08 17:48:15 crc kubenswrapper[5116]: I1208 17:48:15.172685 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/46bc2c2e-fed2-4cf1-afc1-2fb750553bc5-utilities\") pod \"46bc2c2e-fed2-4cf1-afc1-2fb750553bc5\" (UID: \"46bc2c2e-fed2-4cf1-afc1-2fb750553bc5\") " Dec 08 17:48:15 crc 
kubenswrapper[5116]: I1208 17:48:15.172736 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/088af58f-5679-42e6-9595-945ee162f862-catalog-content\") pod \"088af58f-5679-42e6-9595-945ee162f862\" (UID: \"088af58f-5679-42e6-9595-945ee162f862\") " Dec 08 17:48:15 crc kubenswrapper[5116]: I1208 17:48:15.172820 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/5471dfd3-e36e-405a-a517-2c1e2bc10e62-marketplace-operator-metrics\") pod \"5471dfd3-e36e-405a-a517-2c1e2bc10e62\" (UID: \"5471dfd3-e36e-405a-a517-2c1e2bc10e62\") " Dec 08 17:48:15 crc kubenswrapper[5116]: I1208 17:48:15.172851 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n4rcp\" (UniqueName: \"kubernetes.io/projected/088af58f-5679-42e6-9595-945ee162f862-kube-api-access-n4rcp\") pod \"088af58f-5679-42e6-9595-945ee162f862\" (UID: \"088af58f-5679-42e6-9595-945ee162f862\") " Dec 08 17:48:15 crc kubenswrapper[5116]: I1208 17:48:15.172873 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/5471dfd3-e36e-405a-a517-2c1e2bc10e62-marketplace-trusted-ca\") pod \"5471dfd3-e36e-405a-a517-2c1e2bc10e62\" (UID: \"5471dfd3-e36e-405a-a517-2c1e2bc10e62\") " Dec 08 17:48:15 crc kubenswrapper[5116]: I1208 17:48:15.172941 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/46bc2c2e-fed2-4cf1-afc1-2fb750553bc5-catalog-content\") pod \"46bc2c2e-fed2-4cf1-afc1-2fb750553bc5\" (UID: \"46bc2c2e-fed2-4cf1-afc1-2fb750553bc5\") " Dec 08 17:48:15 crc kubenswrapper[5116]: I1208 17:48:15.176797 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/5471dfd3-e36e-405a-a517-2c1e2bc10e62-tmp" (OuterVolumeSpecName: "tmp") pod "5471dfd3-e36e-405a-a517-2c1e2bc10e62" (UID: "5471dfd3-e36e-405a-a517-2c1e2bc10e62"). InnerVolumeSpecName "tmp". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 08 17:48:15 crc kubenswrapper[5116]: I1208 17:48:15.177833 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5471dfd3-e36e-405a-a517-2c1e2bc10e62-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "5471dfd3-e36e-405a-a517-2c1e2bc10e62" (UID: "5471dfd3-e36e-405a-a517-2c1e2bc10e62"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 08 17:48:15 crc kubenswrapper[5116]: I1208 17:48:15.182627 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b15bd0e2-4143-436c-8dc2-0fc2e33cef62-kube-api-access-hspkc" (OuterVolumeSpecName: "kube-api-access-hspkc") pod "b15bd0e2-4143-436c-8dc2-0fc2e33cef62" (UID: "b15bd0e2-4143-436c-8dc2-0fc2e33cef62"). InnerVolumeSpecName "kube-api-access-hspkc". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 08 17:48:15 crc kubenswrapper[5116]: I1208 17:48:15.183152 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/46bc2c2e-fed2-4cf1-afc1-2fb750553bc5-utilities" (OuterVolumeSpecName: "utilities") pod "46bc2c2e-fed2-4cf1-afc1-2fb750553bc5" (UID: "46bc2c2e-fed2-4cf1-afc1-2fb750553bc5"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 08 17:48:15 crc kubenswrapper[5116]: I1208 17:48:15.183271 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5471dfd3-e36e-405a-a517-2c1e2bc10e62-kube-api-access-ltrx4" (OuterVolumeSpecName: "kube-api-access-ltrx4") pod "5471dfd3-e36e-405a-a517-2c1e2bc10e62" (UID: "5471dfd3-e36e-405a-a517-2c1e2bc10e62"). 
InnerVolumeSpecName "kube-api-access-ltrx4". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 08 17:48:15 crc kubenswrapper[5116]: I1208 17:48:15.184963 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/088af58f-5679-42e6-9595-945ee162f862-kube-api-access-n4rcp" (OuterVolumeSpecName: "kube-api-access-n4rcp") pod "088af58f-5679-42e6-9595-945ee162f862" (UID: "088af58f-5679-42e6-9595-945ee162f862"). InnerVolumeSpecName "kube-api-access-n4rcp". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 08 17:48:15 crc kubenswrapper[5116]: I1208 17:48:15.188880 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5471dfd3-e36e-405a-a517-2c1e2bc10e62-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "5471dfd3-e36e-405a-a517-2c1e2bc10e62" (UID: "5471dfd3-e36e-405a-a517-2c1e2bc10e62"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 08 17:48:15 crc kubenswrapper[5116]: I1208 17:48:15.189051 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b15bd0e2-4143-436c-8dc2-0fc2e33cef62-utilities" (OuterVolumeSpecName: "utilities") pod "b15bd0e2-4143-436c-8dc2-0fc2e33cef62" (UID: "b15bd0e2-4143-436c-8dc2-0fc2e33cef62"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 08 17:48:15 crc kubenswrapper[5116]: I1208 17:48:15.189527 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/088af58f-5679-42e6-9595-945ee162f862-utilities" (OuterVolumeSpecName: "utilities") pod "088af58f-5679-42e6-9595-945ee162f862" (UID: "088af58f-5679-42e6-9595-945ee162f862"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 08 17:48:15 crc kubenswrapper[5116]: I1208 17:48:15.198091 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ab873de1-8a57-4411-a552-1567537bdc67-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "ab873de1-8a57-4411-a552-1567537bdc67" (UID: "ab873de1-8a57-4411-a552-1567537bdc67"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 08 17:48:15 crc kubenswrapper[5116]: I1208 17:48:15.201376 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/46bc2c2e-fed2-4cf1-afc1-2fb750553bc5-kube-api-access-t7cqj" (OuterVolumeSpecName: "kube-api-access-t7cqj") pod "46bc2c2e-fed2-4cf1-afc1-2fb750553bc5" (UID: "46bc2c2e-fed2-4cf1-afc1-2fb750553bc5"). InnerVolumeSpecName "kube-api-access-t7cqj". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 08 17:48:15 crc kubenswrapper[5116]: I1208 17:48:15.201966 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ab873de1-8a57-4411-a552-1567537bdc67-kube-api-access-m2wct" (OuterVolumeSpecName: "kube-api-access-m2wct") pod "ab873de1-8a57-4411-a552-1567537bdc67" (UID: "ab873de1-8a57-4411-a552-1567537bdc67"). InnerVolumeSpecName "kube-api-access-m2wct". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 08 17:48:15 crc kubenswrapper[5116]: I1208 17:48:15.213889 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ab873de1-8a57-4411-a552-1567537bdc67-utilities" (OuterVolumeSpecName: "utilities") pod "ab873de1-8a57-4411-a552-1567537bdc67" (UID: "ab873de1-8a57-4411-a552-1567537bdc67"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 08 17:48:15 crc kubenswrapper[5116]: I1208 17:48:15.223494 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/46bc2c2e-fed2-4cf1-afc1-2fb750553bc5-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "46bc2c2e-fed2-4cf1-afc1-2fb750553bc5" (UID: "46bc2c2e-fed2-4cf1-afc1-2fb750553bc5"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 08 17:48:15 crc kubenswrapper[5116]: I1208 17:48:15.275685 5116 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-t7cqj\" (UniqueName: \"kubernetes.io/projected/46bc2c2e-fed2-4cf1-afc1-2fb750553bc5-kube-api-access-t7cqj\") on node \"crc\" DevicePath \"\"" Dec 08 17:48:15 crc kubenswrapper[5116]: I1208 17:48:15.275776 5116 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-ltrx4\" (UniqueName: \"kubernetes.io/projected/5471dfd3-e36e-405a-a517-2c1e2bc10e62-kube-api-access-ltrx4\") on node \"crc\" DevicePath \"\"" Dec 08 17:48:15 crc kubenswrapper[5116]: I1208 17:48:15.275797 5116 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ab873de1-8a57-4411-a552-1567537bdc67-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 08 17:48:15 crc kubenswrapper[5116]: I1208 17:48:15.275818 5116 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/088af58f-5679-42e6-9595-945ee162f862-utilities\") on node \"crc\" DevicePath \"\"" Dec 08 17:48:15 crc kubenswrapper[5116]: I1208 17:48:15.275842 5116 reconciler_common.go:299] "Volume detached for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/5471dfd3-e36e-405a-a517-2c1e2bc10e62-tmp\") on node \"crc\" DevicePath \"\"" Dec 08 17:48:15 crc kubenswrapper[5116]: I1208 17:48:15.275864 5116 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/b15bd0e2-4143-436c-8dc2-0fc2e33cef62-utilities\") on node \"crc\" DevicePath \"\"" Dec 08 17:48:15 crc kubenswrapper[5116]: I1208 17:48:15.275883 5116 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-m2wct\" (UniqueName: \"kubernetes.io/projected/ab873de1-8a57-4411-a552-1567537bdc67-kube-api-access-m2wct\") on node \"crc\" DevicePath \"\"" Dec 08 17:48:15 crc kubenswrapper[5116]: I1208 17:48:15.275903 5116 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ab873de1-8a57-4411-a552-1567537bdc67-utilities\") on node \"crc\" DevicePath \"\"" Dec 08 17:48:15 crc kubenswrapper[5116]: I1208 17:48:15.275921 5116 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-hspkc\" (UniqueName: \"kubernetes.io/projected/b15bd0e2-4143-436c-8dc2-0fc2e33cef62-kube-api-access-hspkc\") on node \"crc\" DevicePath \"\"" Dec 08 17:48:15 crc kubenswrapper[5116]: I1208 17:48:15.275939 5116 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/46bc2c2e-fed2-4cf1-afc1-2fb750553bc5-utilities\") on node \"crc\" DevicePath \"\"" Dec 08 17:48:15 crc kubenswrapper[5116]: I1208 17:48:15.275957 5116 reconciler_common.go:299] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/5471dfd3-e36e-405a-a517-2c1e2bc10e62-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Dec 08 17:48:15 crc kubenswrapper[5116]: I1208 17:48:15.275976 5116 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-n4rcp\" (UniqueName: \"kubernetes.io/projected/088af58f-5679-42e6-9595-945ee162f862-kube-api-access-n4rcp\") on node \"crc\" DevicePath \"\"" Dec 08 17:48:15 crc kubenswrapper[5116]: I1208 17:48:15.275993 5116 reconciler_common.go:299] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: 
\"kubernetes.io/configmap/5471dfd3-e36e-405a-a517-2c1e2bc10e62-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Dec 08 17:48:15 crc kubenswrapper[5116]: I1208 17:48:15.276010 5116 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/46bc2c2e-fed2-4cf1-afc1-2fb750553bc5-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 08 17:48:15 crc kubenswrapper[5116]: I1208 17:48:15.297796 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b15bd0e2-4143-436c-8dc2-0fc2e33cef62-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b15bd0e2-4143-436c-8dc2-0fc2e33cef62" (UID: "b15bd0e2-4143-436c-8dc2-0fc2e33cef62"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 08 17:48:15 crc kubenswrapper[5116]: I1208 17:48:15.322651 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/088af58f-5679-42e6-9595-945ee162f862-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "088af58f-5679-42e6-9595-945ee162f862" (UID: "088af58f-5679-42e6-9595-945ee162f862"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 08 17:48:15 crc kubenswrapper[5116]: I1208 17:48:15.343210 5116 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-547dbd544d-hzv5p"] Dec 08 17:48:15 crc kubenswrapper[5116]: I1208 17:48:15.352693 5116 provider.go:93] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Dec 08 17:48:15 crc kubenswrapper[5116]: I1208 17:48:15.377700 5116 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b15bd0e2-4143-436c-8dc2-0fc2e33cef62-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 08 17:48:15 crc kubenswrapper[5116]: I1208 17:48:15.377758 5116 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/088af58f-5679-42e6-9595-945ee162f862-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 08 17:48:16 crc kubenswrapper[5116]: I1208 17:48:16.007917 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-547dbd544d-hzv5p" event={"ID":"f4282342-727a-4e77-9202-744186310c82","Type":"ContainerStarted","Data":"b092329885857e37f4365de3bad0399cdc0d497c536ca974f459769e5f99b6e7"} Dec 08 17:48:16 crc kubenswrapper[5116]: I1208 17:48:16.008236 5116 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/marketplace-operator-547dbd544d-hzv5p" Dec 08 17:48:16 crc kubenswrapper[5116]: I1208 17:48:16.008267 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-547dbd544d-hzv5p" event={"ID":"f4282342-727a-4e77-9202-744186310c82","Type":"ContainerStarted","Data":"0817edd58d7a0772b39ed629d8792d14ebae9f1b43db651eb04a285c0fe8b209"} Dec 08 17:48:16 crc kubenswrapper[5116]: I1208 17:48:16.010631 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7j2rd" 
event={"ID":"088af58f-5679-42e6-9595-945ee162f862","Type":"ContainerDied","Data":"704b20e063d9691471af2b4538ea9003a88e5c0001fae349c8709b570cb2b51f"} Dec 08 17:48:16 crc kubenswrapper[5116]: I1208 17:48:16.010687 5116 scope.go:117] "RemoveContainer" containerID="d1d9eaee655c81adb5ab40ccf7e4a7aaa9a5293ffd7d5cdfbdf7da45c738cdf1" Dec 08 17:48:16 crc kubenswrapper[5116]: I1208 17:48:16.010713 5116 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-7j2rd" Dec 08 17:48:16 crc kubenswrapper[5116]: I1208 17:48:16.012764 5116 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-547dbd544d-hzv5p" Dec 08 17:48:16 crc kubenswrapper[5116]: I1208 17:48:16.012803 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4pqqp" event={"ID":"ab873de1-8a57-4411-a552-1567537bdc67","Type":"ContainerDied","Data":"dcabe2db418651711c42829dfbb07467b74149f20e829caf060552b3dd24f516"} Dec 08 17:48:16 crc kubenswrapper[5116]: I1208 17:48:16.012936 5116 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-4pqqp" Dec 08 17:48:16 crc kubenswrapper[5116]: I1208 17:48:16.014530 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-547dbd544d-w2582" event={"ID":"5471dfd3-e36e-405a-a517-2c1e2bc10e62","Type":"ContainerDied","Data":"1cd920464aa0e3e0728f4c877ce4bd49e8b47c7c078685c108d1235ba2f1c301"} Dec 08 17:48:16 crc kubenswrapper[5116]: I1208 17:48:16.014694 5116 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-547dbd544d-w2582" Dec 08 17:48:16 crc kubenswrapper[5116]: I1208 17:48:16.020507 5116 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-bnq4b" Dec 08 17:48:16 crc kubenswrapper[5116]: I1208 17:48:16.020542 5116 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-nc4fk" Dec 08 17:48:16 crc kubenswrapper[5116]: I1208 17:48:16.020482 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-nc4fk" event={"ID":"b15bd0e2-4143-436c-8dc2-0fc2e33cef62","Type":"ContainerDied","Data":"1f6bf72b54abbdbe1d4134eea67af919fe431b560afa36523f0b282b919b99d2"} Dec 08 17:48:16 crc kubenswrapper[5116]: I1208 17:48:16.040083 5116 scope.go:117] "RemoveContainer" containerID="7d5bf1c127c46e8a507bd1cd59bee2653a692d969941694d3226047952bca532" Dec 08 17:48:16 crc kubenswrapper[5116]: I1208 17:48:16.084855 5116 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-547dbd544d-hzv5p" podStartSLOduration=2.084824873 podStartE2EDuration="2.084824873s" podCreationTimestamp="2025-12-08 17:48:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-08 17:48:16.028005308 +0000 UTC m=+365.825128542" watchObservedRunningTime="2025-12-08 17:48:16.084824873 +0000 UTC m=+365.881948107" Dec 08 17:48:16 crc kubenswrapper[5116]: I1208 17:48:16.093610 5116 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-7j2rd"] Dec 08 17:48:16 crc kubenswrapper[5116]: I1208 17:48:16.104114 5116 scope.go:117] "RemoveContainer" containerID="a55513c692ad2392716a102fd47614098d63521b33abf670fdf908ccf3f4589e" Dec 08 17:48:16 crc kubenswrapper[5116]: I1208 17:48:16.104333 5116 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-7j2rd"] Dec 08 17:48:16 crc kubenswrapper[5116]: I1208 17:48:16.112681 5116 kubelet.go:2553] "SyncLoop DELETE" 
source="api" pods=["openshift-marketplace/certified-operators-bnq4b"] Dec 08 17:48:16 crc kubenswrapper[5116]: I1208 17:48:16.116599 5116 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-bnq4b"] Dec 08 17:48:16 crc kubenswrapper[5116]: I1208 17:48:16.121447 5116 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-4pqqp"] Dec 08 17:48:16 crc kubenswrapper[5116]: I1208 17:48:16.127117 5116 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-4pqqp"] Dec 08 17:48:16 crc kubenswrapper[5116]: I1208 17:48:16.133380 5116 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-547dbd544d-w2582"] Dec 08 17:48:16 crc kubenswrapper[5116]: I1208 17:48:16.138450 5116 scope.go:117] "RemoveContainer" containerID="1d87b3dfeb1c61b3b1e8332b268402ea1366d39a04a2d3c5c79986b2a82844d7" Dec 08 17:48:16 crc kubenswrapper[5116]: I1208 17:48:16.138958 5116 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/marketplace-operator-547dbd544d-w2582"] Dec 08 17:48:16 crc kubenswrapper[5116]: I1208 17:48:16.142438 5116 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-nc4fk"] Dec 08 17:48:16 crc kubenswrapper[5116]: I1208 17:48:16.147740 5116 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-nc4fk"] Dec 08 17:48:16 crc kubenswrapper[5116]: I1208 17:48:16.151607 5116 scope.go:117] "RemoveContainer" containerID="bd22c53a9ce9fad43fbafe56982ecbce82d2148ce1afc9ce88b2b7d772ef52e0" Dec 08 17:48:16 crc kubenswrapper[5116]: I1208 17:48:16.167952 5116 scope.go:117] "RemoveContainer" containerID="8b2d943b49c802cc7050d60b7b2e54143ff91f91bd0a0ab0698a920352476dbe" Dec 08 17:48:16 crc kubenswrapper[5116]: I1208 17:48:16.190393 5116 scope.go:117] "RemoveContainer" 
containerID="0d7964f58f360bfe2dc6d0c956acb14a6d893157d00764fca012749e5c5dd7ba" Dec 08 17:48:16 crc kubenswrapper[5116]: I1208 17:48:16.211915 5116 scope.go:117] "RemoveContainer" containerID="583a5e3fce9b33b3dc2f2446f5b35c3e3da2fad48429f93c4fa340739dcc6f82" Dec 08 17:48:16 crc kubenswrapper[5116]: I1208 17:48:16.228746 5116 scope.go:117] "RemoveContainer" containerID="547e19fbcdf0661c69c1a969bea40e171a346f5c072bb931aa3fbd809cba12d0" Dec 08 17:48:16 crc kubenswrapper[5116]: I1208 17:48:16.257965 5116 scope.go:117] "RemoveContainer" containerID="f91d1c99f36744b7f717c0250aa3eee51eeee9fd1e7b770de2cfc6a929796e3b" Dec 08 17:48:16 crc kubenswrapper[5116]: I1208 17:48:16.513365 5116 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-68bst"] Dec 08 17:48:16 crc kubenswrapper[5116]: I1208 17:48:16.514570 5116 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="46bc2c2e-fed2-4cf1-afc1-2fb750553bc5" containerName="registry-server" Dec 08 17:48:16 crc kubenswrapper[5116]: I1208 17:48:16.514616 5116 state_mem.go:107] "Deleted CPUSet assignment" podUID="46bc2c2e-fed2-4cf1-afc1-2fb750553bc5" containerName="registry-server" Dec 08 17:48:16 crc kubenswrapper[5116]: I1208 17:48:16.514654 5116 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="b15bd0e2-4143-436c-8dc2-0fc2e33cef62" containerName="extract-content" Dec 08 17:48:16 crc kubenswrapper[5116]: I1208 17:48:16.514667 5116 state_mem.go:107] "Deleted CPUSet assignment" podUID="b15bd0e2-4143-436c-8dc2-0fc2e33cef62" containerName="extract-content" Dec 08 17:48:16 crc kubenswrapper[5116]: I1208 17:48:16.514686 5116 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="ab873de1-8a57-4411-a552-1567537bdc67" containerName="extract-content" Dec 08 17:48:16 crc kubenswrapper[5116]: I1208 17:48:16.514699 5116 state_mem.go:107] "Deleted CPUSet assignment" podUID="ab873de1-8a57-4411-a552-1567537bdc67" 
containerName="extract-content" Dec 08 17:48:16 crc kubenswrapper[5116]: I1208 17:48:16.514716 5116 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="46bc2c2e-fed2-4cf1-afc1-2fb750553bc5" containerName="extract-utilities" Dec 08 17:48:16 crc kubenswrapper[5116]: I1208 17:48:16.514728 5116 state_mem.go:107] "Deleted CPUSet assignment" podUID="46bc2c2e-fed2-4cf1-afc1-2fb750553bc5" containerName="extract-utilities" Dec 08 17:48:16 crc kubenswrapper[5116]: I1208 17:48:16.514746 5116 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="46bc2c2e-fed2-4cf1-afc1-2fb750553bc5" containerName="extract-content" Dec 08 17:48:16 crc kubenswrapper[5116]: I1208 17:48:16.514760 5116 state_mem.go:107] "Deleted CPUSet assignment" podUID="46bc2c2e-fed2-4cf1-afc1-2fb750553bc5" containerName="extract-content" Dec 08 17:48:16 crc kubenswrapper[5116]: I1208 17:48:16.514777 5116 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="b15bd0e2-4143-436c-8dc2-0fc2e33cef62" containerName="registry-server" Dec 08 17:48:16 crc kubenswrapper[5116]: I1208 17:48:16.514789 5116 state_mem.go:107] "Deleted CPUSet assignment" podUID="b15bd0e2-4143-436c-8dc2-0fc2e33cef62" containerName="registry-server" Dec 08 17:48:16 crc kubenswrapper[5116]: I1208 17:48:16.514802 5116 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="ab873de1-8a57-4411-a552-1567537bdc67" containerName="registry-server" Dec 08 17:48:16 crc kubenswrapper[5116]: I1208 17:48:16.514813 5116 state_mem.go:107] "Deleted CPUSet assignment" podUID="ab873de1-8a57-4411-a552-1567537bdc67" containerName="registry-server" Dec 08 17:48:16 crc kubenswrapper[5116]: I1208 17:48:16.514832 5116 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="088af58f-5679-42e6-9595-945ee162f862" containerName="registry-server" Dec 08 17:48:16 crc kubenswrapper[5116]: I1208 17:48:16.514843 5116 state_mem.go:107] "Deleted 
CPUSet assignment" podUID="088af58f-5679-42e6-9595-945ee162f862" containerName="registry-server" Dec 08 17:48:16 crc kubenswrapper[5116]: I1208 17:48:16.514860 5116 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="ab873de1-8a57-4411-a552-1567537bdc67" containerName="extract-utilities" Dec 08 17:48:16 crc kubenswrapper[5116]: I1208 17:48:16.514871 5116 state_mem.go:107] "Deleted CPUSet assignment" podUID="ab873de1-8a57-4411-a552-1567537bdc67" containerName="extract-utilities" Dec 08 17:48:16 crc kubenswrapper[5116]: I1208 17:48:16.514895 5116 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="088af58f-5679-42e6-9595-945ee162f862" containerName="extract-utilities" Dec 08 17:48:16 crc kubenswrapper[5116]: I1208 17:48:16.514909 5116 state_mem.go:107] "Deleted CPUSet assignment" podUID="088af58f-5679-42e6-9595-945ee162f862" containerName="extract-utilities" Dec 08 17:48:16 crc kubenswrapper[5116]: I1208 17:48:16.514924 5116 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="5471dfd3-e36e-405a-a517-2c1e2bc10e62" containerName="marketplace-operator" Dec 08 17:48:16 crc kubenswrapper[5116]: I1208 17:48:16.514936 5116 state_mem.go:107] "Deleted CPUSet assignment" podUID="5471dfd3-e36e-405a-a517-2c1e2bc10e62" containerName="marketplace-operator" Dec 08 17:48:16 crc kubenswrapper[5116]: I1208 17:48:16.514959 5116 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="088af58f-5679-42e6-9595-945ee162f862" containerName="extract-content" Dec 08 17:48:16 crc kubenswrapper[5116]: I1208 17:48:16.514970 5116 state_mem.go:107] "Deleted CPUSet assignment" podUID="088af58f-5679-42e6-9595-945ee162f862" containerName="extract-content" Dec 08 17:48:16 crc kubenswrapper[5116]: I1208 17:48:16.514984 5116 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="b15bd0e2-4143-436c-8dc2-0fc2e33cef62" containerName="extract-utilities" Dec 08 17:48:16 crc 
kubenswrapper[5116]: I1208 17:48:16.514995 5116 state_mem.go:107] "Deleted CPUSet assignment" podUID="b15bd0e2-4143-436c-8dc2-0fc2e33cef62" containerName="extract-utilities" Dec 08 17:48:16 crc kubenswrapper[5116]: I1208 17:48:16.515146 5116 memory_manager.go:356] "RemoveStaleState removing state" podUID="b15bd0e2-4143-436c-8dc2-0fc2e33cef62" containerName="registry-server" Dec 08 17:48:16 crc kubenswrapper[5116]: I1208 17:48:16.515166 5116 memory_manager.go:356] "RemoveStaleState removing state" podUID="088af58f-5679-42e6-9595-945ee162f862" containerName="registry-server" Dec 08 17:48:16 crc kubenswrapper[5116]: I1208 17:48:16.515182 5116 memory_manager.go:356] "RemoveStaleState removing state" podUID="ab873de1-8a57-4411-a552-1567537bdc67" containerName="registry-server" Dec 08 17:48:16 crc kubenswrapper[5116]: I1208 17:48:16.515206 5116 memory_manager.go:356] "RemoveStaleState removing state" podUID="46bc2c2e-fed2-4cf1-afc1-2fb750553bc5" containerName="registry-server" Dec 08 17:48:16 crc kubenswrapper[5116]: I1208 17:48:16.515222 5116 memory_manager.go:356] "RemoveStaleState removing state" podUID="5471dfd3-e36e-405a-a517-2c1e2bc10e62" containerName="marketplace-operator" Dec 08 17:48:16 crc kubenswrapper[5116]: I1208 17:48:16.528111 5116 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-68bst"] Dec 08 17:48:16 crc kubenswrapper[5116]: I1208 17:48:16.528305 5116 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-68bst" Dec 08 17:48:16 crc kubenswrapper[5116]: I1208 17:48:16.530991 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"certified-operators-dockercfg-7cl8d\"" Dec 08 17:48:16 crc kubenswrapper[5116]: I1208 17:48:16.596848 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/87eadfc6-4b2b-43ac-98a4-520a53bd6f94-catalog-content\") pod \"certified-operators-68bst\" (UID: \"87eadfc6-4b2b-43ac-98a4-520a53bd6f94\") " pod="openshift-marketplace/certified-operators-68bst" Dec 08 17:48:16 crc kubenswrapper[5116]: I1208 17:48:16.597073 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p4dlg\" (UniqueName: \"kubernetes.io/projected/87eadfc6-4b2b-43ac-98a4-520a53bd6f94-kube-api-access-p4dlg\") pod \"certified-operators-68bst\" (UID: \"87eadfc6-4b2b-43ac-98a4-520a53bd6f94\") " pod="openshift-marketplace/certified-operators-68bst" Dec 08 17:48:16 crc kubenswrapper[5116]: I1208 17:48:16.597159 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/87eadfc6-4b2b-43ac-98a4-520a53bd6f94-utilities\") pod \"certified-operators-68bst\" (UID: \"87eadfc6-4b2b-43ac-98a4-520a53bd6f94\") " pod="openshift-marketplace/certified-operators-68bst" Dec 08 17:48:16 crc kubenswrapper[5116]: I1208 17:48:16.687924 5116 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="088af58f-5679-42e6-9595-945ee162f862" path="/var/lib/kubelet/pods/088af58f-5679-42e6-9595-945ee162f862/volumes" Dec 08 17:48:16 crc kubenswrapper[5116]: I1208 17:48:16.688711 5116 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="46bc2c2e-fed2-4cf1-afc1-2fb750553bc5" 
path="/var/lib/kubelet/pods/46bc2c2e-fed2-4cf1-afc1-2fb750553bc5/volumes" Dec 08 17:48:16 crc kubenswrapper[5116]: I1208 17:48:16.689588 5116 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5471dfd3-e36e-405a-a517-2c1e2bc10e62" path="/var/lib/kubelet/pods/5471dfd3-e36e-405a-a517-2c1e2bc10e62/volumes" Dec 08 17:48:16 crc kubenswrapper[5116]: I1208 17:48:16.690843 5116 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ab873de1-8a57-4411-a552-1567537bdc67" path="/var/lib/kubelet/pods/ab873de1-8a57-4411-a552-1567537bdc67/volumes" Dec 08 17:48:16 crc kubenswrapper[5116]: I1208 17:48:16.691689 5116 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b15bd0e2-4143-436c-8dc2-0fc2e33cef62" path="/var/lib/kubelet/pods/b15bd0e2-4143-436c-8dc2-0fc2e33cef62/volumes" Dec 08 17:48:16 crc kubenswrapper[5116]: I1208 17:48:16.698855 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-p4dlg\" (UniqueName: \"kubernetes.io/projected/87eadfc6-4b2b-43ac-98a4-520a53bd6f94-kube-api-access-p4dlg\") pod \"certified-operators-68bst\" (UID: \"87eadfc6-4b2b-43ac-98a4-520a53bd6f94\") " pod="openshift-marketplace/certified-operators-68bst" Dec 08 17:48:16 crc kubenswrapper[5116]: I1208 17:48:16.698916 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/87eadfc6-4b2b-43ac-98a4-520a53bd6f94-utilities\") pod \"certified-operators-68bst\" (UID: \"87eadfc6-4b2b-43ac-98a4-520a53bd6f94\") " pod="openshift-marketplace/certified-operators-68bst" Dec 08 17:48:16 crc kubenswrapper[5116]: I1208 17:48:16.698958 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/87eadfc6-4b2b-43ac-98a4-520a53bd6f94-catalog-content\") pod \"certified-operators-68bst\" (UID: \"87eadfc6-4b2b-43ac-98a4-520a53bd6f94\") " 
pod="openshift-marketplace/certified-operators-68bst" Dec 08 17:48:16 crc kubenswrapper[5116]: I1208 17:48:16.699664 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/87eadfc6-4b2b-43ac-98a4-520a53bd6f94-utilities\") pod \"certified-operators-68bst\" (UID: \"87eadfc6-4b2b-43ac-98a4-520a53bd6f94\") " pod="openshift-marketplace/certified-operators-68bst" Dec 08 17:48:16 crc kubenswrapper[5116]: I1208 17:48:16.700023 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/87eadfc6-4b2b-43ac-98a4-520a53bd6f94-catalog-content\") pod \"certified-operators-68bst\" (UID: \"87eadfc6-4b2b-43ac-98a4-520a53bd6f94\") " pod="openshift-marketplace/certified-operators-68bst" Dec 08 17:48:16 crc kubenswrapper[5116]: I1208 17:48:16.719750 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-p4dlg\" (UniqueName: \"kubernetes.io/projected/87eadfc6-4b2b-43ac-98a4-520a53bd6f94-kube-api-access-p4dlg\") pod \"certified-operators-68bst\" (UID: \"87eadfc6-4b2b-43ac-98a4-520a53bd6f94\") " pod="openshift-marketplace/certified-operators-68bst" Dec 08 17:48:16 crc kubenswrapper[5116]: I1208 17:48:16.850300 5116 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-68bst" Dec 08 17:48:17 crc kubenswrapper[5116]: I1208 17:48:17.103075 5116 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-68bst"] Dec 08 17:48:17 crc kubenswrapper[5116]: I1208 17:48:17.122277 5116 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-rvv8x"] Dec 08 17:48:17 crc kubenswrapper[5116]: I1208 17:48:17.136813 5116 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rvv8x" Dec 08 17:48:17 crc kubenswrapper[5116]: I1208 17:48:17.140160 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"redhat-marketplace-dockercfg-gg4w7\"" Dec 08 17:48:17 crc kubenswrapper[5116]: I1208 17:48:17.155695 5116 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-rvv8x"] Dec 08 17:48:17 crc kubenswrapper[5116]: I1208 17:48:17.204677 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d51a780b-e856-4552-aa49-7f7b4b654d7e-catalog-content\") pod \"redhat-marketplace-rvv8x\" (UID: \"d51a780b-e856-4552-aa49-7f7b4b654d7e\") " pod="openshift-marketplace/redhat-marketplace-rvv8x" Dec 08 17:48:17 crc kubenswrapper[5116]: I1208 17:48:17.204756 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d51a780b-e856-4552-aa49-7f7b4b654d7e-utilities\") pod \"redhat-marketplace-rvv8x\" (UID: \"d51a780b-e856-4552-aa49-7f7b4b654d7e\") " pod="openshift-marketplace/redhat-marketplace-rvv8x" Dec 08 17:48:17 crc kubenswrapper[5116]: I1208 17:48:17.204829 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fh94c\" (UniqueName: \"kubernetes.io/projected/d51a780b-e856-4552-aa49-7f7b4b654d7e-kube-api-access-fh94c\") pod \"redhat-marketplace-rvv8x\" (UID: \"d51a780b-e856-4552-aa49-7f7b4b654d7e\") " pod="openshift-marketplace/redhat-marketplace-rvv8x" Dec 08 17:48:17 crc kubenswrapper[5116]: I1208 17:48:17.305742 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-fh94c\" (UniqueName: \"kubernetes.io/projected/d51a780b-e856-4552-aa49-7f7b4b654d7e-kube-api-access-fh94c\") pod 
\"redhat-marketplace-rvv8x\" (UID: \"d51a780b-e856-4552-aa49-7f7b4b654d7e\") " pod="openshift-marketplace/redhat-marketplace-rvv8x" Dec 08 17:48:17 crc kubenswrapper[5116]: I1208 17:48:17.306390 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d51a780b-e856-4552-aa49-7f7b4b654d7e-catalog-content\") pod \"redhat-marketplace-rvv8x\" (UID: \"d51a780b-e856-4552-aa49-7f7b4b654d7e\") " pod="openshift-marketplace/redhat-marketplace-rvv8x" Dec 08 17:48:17 crc kubenswrapper[5116]: I1208 17:48:17.306442 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d51a780b-e856-4552-aa49-7f7b4b654d7e-utilities\") pod \"redhat-marketplace-rvv8x\" (UID: \"d51a780b-e856-4552-aa49-7f7b4b654d7e\") " pod="openshift-marketplace/redhat-marketplace-rvv8x" Dec 08 17:48:17 crc kubenswrapper[5116]: I1208 17:48:17.307645 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d51a780b-e856-4552-aa49-7f7b4b654d7e-utilities\") pod \"redhat-marketplace-rvv8x\" (UID: \"d51a780b-e856-4552-aa49-7f7b4b654d7e\") " pod="openshift-marketplace/redhat-marketplace-rvv8x" Dec 08 17:48:17 crc kubenswrapper[5116]: I1208 17:48:17.307728 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d51a780b-e856-4552-aa49-7f7b4b654d7e-catalog-content\") pod \"redhat-marketplace-rvv8x\" (UID: \"d51a780b-e856-4552-aa49-7f7b4b654d7e\") " pod="openshift-marketplace/redhat-marketplace-rvv8x" Dec 08 17:48:17 crc kubenswrapper[5116]: I1208 17:48:17.329917 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-fh94c\" (UniqueName: \"kubernetes.io/projected/d51a780b-e856-4552-aa49-7f7b4b654d7e-kube-api-access-fh94c\") pod \"redhat-marketplace-rvv8x\" (UID: 
\"d51a780b-e856-4552-aa49-7f7b4b654d7e\") " pod="openshift-marketplace/redhat-marketplace-rvv8x" Dec 08 17:48:17 crc kubenswrapper[5116]: I1208 17:48:17.480447 5116 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rvv8x" Dec 08 17:48:17 crc kubenswrapper[5116]: I1208 17:48:17.686181 5116 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-rvv8x"] Dec 08 17:48:18 crc kubenswrapper[5116]: I1208 17:48:18.063167 5116 generic.go:358] "Generic (PLEG): container finished" podID="d51a780b-e856-4552-aa49-7f7b4b654d7e" containerID="d77bf4bac4889eac5f9fe2802fa5c933ea8b67aa328dba7263de35f5cfd543b5" exitCode=0 Dec 08 17:48:18 crc kubenswrapper[5116]: I1208 17:48:18.063332 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rvv8x" event={"ID":"d51a780b-e856-4552-aa49-7f7b4b654d7e","Type":"ContainerDied","Data":"d77bf4bac4889eac5f9fe2802fa5c933ea8b67aa328dba7263de35f5cfd543b5"} Dec 08 17:48:18 crc kubenswrapper[5116]: I1208 17:48:18.063816 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rvv8x" event={"ID":"d51a780b-e856-4552-aa49-7f7b4b654d7e","Type":"ContainerStarted","Data":"26722a9b4dd4cca5cd9ac23a778e7c63f42c66cf92cfebbdeefb82a7f400677a"} Dec 08 17:48:18 crc kubenswrapper[5116]: I1208 17:48:18.067667 5116 generic.go:358] "Generic (PLEG): container finished" podID="87eadfc6-4b2b-43ac-98a4-520a53bd6f94" containerID="e7c1c718117ab0bb19252bbc90650d2208863c4dc1e14bf00c75ea27d9057916" exitCode=0 Dec 08 17:48:18 crc kubenswrapper[5116]: I1208 17:48:18.067767 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-68bst" event={"ID":"87eadfc6-4b2b-43ac-98a4-520a53bd6f94","Type":"ContainerDied","Data":"e7c1c718117ab0bb19252bbc90650d2208863c4dc1e14bf00c75ea27d9057916"} Dec 08 17:48:18 crc kubenswrapper[5116]: I1208 17:48:18.067807 
5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-68bst" event={"ID":"87eadfc6-4b2b-43ac-98a4-520a53bd6f94","Type":"ContainerStarted","Data":"b30a1efa40d53b7b29d235cb9f95a84396a87b48bc8b603e161ae428bd19c438"} Dec 08 17:48:18 crc kubenswrapper[5116]: I1208 17:48:18.912807 5116 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-m5vg7"] Dec 08 17:48:18 crc kubenswrapper[5116]: I1208 17:48:18.917143 5116 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-m5vg7" Dec 08 17:48:18 crc kubenswrapper[5116]: I1208 17:48:18.919097 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"redhat-operators-dockercfg-9gxlh\"" Dec 08 17:48:18 crc kubenswrapper[5116]: I1208 17:48:18.927144 5116 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-m5vg7"] Dec 08 17:48:19 crc kubenswrapper[5116]: I1208 17:48:19.031681 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4c36a4dd-ab49-4395-a54d-452e884cbb78-catalog-content\") pod \"redhat-operators-m5vg7\" (UID: \"4c36a4dd-ab49-4395-a54d-452e884cbb78\") " pod="openshift-marketplace/redhat-operators-m5vg7" Dec 08 17:48:19 crc kubenswrapper[5116]: I1208 17:48:19.031741 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4c36a4dd-ab49-4395-a54d-452e884cbb78-utilities\") pod \"redhat-operators-m5vg7\" (UID: \"4c36a4dd-ab49-4395-a54d-452e884cbb78\") " pod="openshift-marketplace/redhat-operators-m5vg7" Dec 08 17:48:19 crc kubenswrapper[5116]: I1208 17:48:19.031820 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hbqs8\" 
(UniqueName: \"kubernetes.io/projected/4c36a4dd-ab49-4395-a54d-452e884cbb78-kube-api-access-hbqs8\") pod \"redhat-operators-m5vg7\" (UID: \"4c36a4dd-ab49-4395-a54d-452e884cbb78\") " pod="openshift-marketplace/redhat-operators-m5vg7" Dec 08 17:48:19 crc kubenswrapper[5116]: I1208 17:48:19.073894 5116 generic.go:358] "Generic (PLEG): container finished" podID="d51a780b-e856-4552-aa49-7f7b4b654d7e" containerID="923e1c19e62a615ac3e5982bd7e5000edbfd497094f88587208ed226244af50a" exitCode=0 Dec 08 17:48:19 crc kubenswrapper[5116]: I1208 17:48:19.073966 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rvv8x" event={"ID":"d51a780b-e856-4552-aa49-7f7b4b654d7e","Type":"ContainerDied","Data":"923e1c19e62a615ac3e5982bd7e5000edbfd497094f88587208ed226244af50a"} Dec 08 17:48:19 crc kubenswrapper[5116]: I1208 17:48:19.077730 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-68bst" event={"ID":"87eadfc6-4b2b-43ac-98a4-520a53bd6f94","Type":"ContainerStarted","Data":"7c5fbd7e2e48e3a1ba3a895d6e861298d0cb3ca36f27083f7179ac103b83545c"} Dec 08 17:48:19 crc kubenswrapper[5116]: I1208 17:48:19.133704 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4c36a4dd-ab49-4395-a54d-452e884cbb78-catalog-content\") pod \"redhat-operators-m5vg7\" (UID: \"4c36a4dd-ab49-4395-a54d-452e884cbb78\") " pod="openshift-marketplace/redhat-operators-m5vg7" Dec 08 17:48:19 crc kubenswrapper[5116]: I1208 17:48:19.133774 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4c36a4dd-ab49-4395-a54d-452e884cbb78-utilities\") pod \"redhat-operators-m5vg7\" (UID: \"4c36a4dd-ab49-4395-a54d-452e884cbb78\") " pod="openshift-marketplace/redhat-operators-m5vg7" Dec 08 17:48:19 crc kubenswrapper[5116]: I1208 17:48:19.133809 5116 reconciler_common.go:224] 
"operationExecutor.MountVolume started for volume \"kube-api-access-hbqs8\" (UniqueName: \"kubernetes.io/projected/4c36a4dd-ab49-4395-a54d-452e884cbb78-kube-api-access-hbqs8\") pod \"redhat-operators-m5vg7\" (UID: \"4c36a4dd-ab49-4395-a54d-452e884cbb78\") " pod="openshift-marketplace/redhat-operators-m5vg7" Dec 08 17:48:19 crc kubenswrapper[5116]: I1208 17:48:19.134214 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4c36a4dd-ab49-4395-a54d-452e884cbb78-catalog-content\") pod \"redhat-operators-m5vg7\" (UID: \"4c36a4dd-ab49-4395-a54d-452e884cbb78\") " pod="openshift-marketplace/redhat-operators-m5vg7" Dec 08 17:48:19 crc kubenswrapper[5116]: I1208 17:48:19.134444 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4c36a4dd-ab49-4395-a54d-452e884cbb78-utilities\") pod \"redhat-operators-m5vg7\" (UID: \"4c36a4dd-ab49-4395-a54d-452e884cbb78\") " pod="openshift-marketplace/redhat-operators-m5vg7" Dec 08 17:48:19 crc kubenswrapper[5116]: I1208 17:48:19.158149 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-hbqs8\" (UniqueName: \"kubernetes.io/projected/4c36a4dd-ab49-4395-a54d-452e884cbb78-kube-api-access-hbqs8\") pod \"redhat-operators-m5vg7\" (UID: \"4c36a4dd-ab49-4395-a54d-452e884cbb78\") " pod="openshift-marketplace/redhat-operators-m5vg7" Dec 08 17:48:19 crc kubenswrapper[5116]: I1208 17:48:19.293470 5116 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-m5vg7" Dec 08 17:48:19 crc kubenswrapper[5116]: I1208 17:48:19.508330 5116 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-m5vg7"] Dec 08 17:48:19 crc kubenswrapper[5116]: I1208 17:48:19.516990 5116 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-zccpk"] Dec 08 17:48:19 crc kubenswrapper[5116]: I1208 17:48:19.527759 5116 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-zccpk" Dec 08 17:48:19 crc kubenswrapper[5116]: I1208 17:48:19.531048 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"community-operators-dockercfg-vrd5f\"" Dec 08 17:48:19 crc kubenswrapper[5116]: I1208 17:48:19.532841 5116 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-zccpk"] Dec 08 17:48:19 crc kubenswrapper[5116]: I1208 17:48:19.640707 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qcr8r\" (UniqueName: \"kubernetes.io/projected/e9de5654-bf61-401f-8a7b-da52db4c07cd-kube-api-access-qcr8r\") pod \"community-operators-zccpk\" (UID: \"e9de5654-bf61-401f-8a7b-da52db4c07cd\") " pod="openshift-marketplace/community-operators-zccpk" Dec 08 17:48:19 crc kubenswrapper[5116]: I1208 17:48:19.642373 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e9de5654-bf61-401f-8a7b-da52db4c07cd-utilities\") pod \"community-operators-zccpk\" (UID: \"e9de5654-bf61-401f-8a7b-da52db4c07cd\") " pod="openshift-marketplace/community-operators-zccpk" Dec 08 17:48:19 crc kubenswrapper[5116]: I1208 17:48:19.642448 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e9de5654-bf61-401f-8a7b-da52db4c07cd-catalog-content\") pod \"community-operators-zccpk\" (UID: \"e9de5654-bf61-401f-8a7b-da52db4c07cd\") " pod="openshift-marketplace/community-operators-zccpk" Dec 08 17:48:19 crc kubenswrapper[5116]: I1208 17:48:19.744048 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-qcr8r\" (UniqueName: \"kubernetes.io/projected/e9de5654-bf61-401f-8a7b-da52db4c07cd-kube-api-access-qcr8r\") pod \"community-operators-zccpk\" (UID: \"e9de5654-bf61-401f-8a7b-da52db4c07cd\") " pod="openshift-marketplace/community-operators-zccpk" Dec 08 17:48:19 crc kubenswrapper[5116]: I1208 17:48:19.744157 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e9de5654-bf61-401f-8a7b-da52db4c07cd-utilities\") pod \"community-operators-zccpk\" (UID: \"e9de5654-bf61-401f-8a7b-da52db4c07cd\") " pod="openshift-marketplace/community-operators-zccpk" Dec 08 17:48:19 crc kubenswrapper[5116]: I1208 17:48:19.744181 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e9de5654-bf61-401f-8a7b-da52db4c07cd-catalog-content\") pod \"community-operators-zccpk\" (UID: \"e9de5654-bf61-401f-8a7b-da52db4c07cd\") " pod="openshift-marketplace/community-operators-zccpk" Dec 08 17:48:19 crc kubenswrapper[5116]: I1208 17:48:19.745175 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e9de5654-bf61-401f-8a7b-da52db4c07cd-catalog-content\") pod \"community-operators-zccpk\" (UID: \"e9de5654-bf61-401f-8a7b-da52db4c07cd\") " pod="openshift-marketplace/community-operators-zccpk" Dec 08 17:48:19 crc kubenswrapper[5116]: I1208 17:48:19.745263 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/e9de5654-bf61-401f-8a7b-da52db4c07cd-utilities\") pod \"community-operators-zccpk\" (UID: \"e9de5654-bf61-401f-8a7b-da52db4c07cd\") " pod="openshift-marketplace/community-operators-zccpk" Dec 08 17:48:19 crc kubenswrapper[5116]: I1208 17:48:19.770181 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-qcr8r\" (UniqueName: \"kubernetes.io/projected/e9de5654-bf61-401f-8a7b-da52db4c07cd-kube-api-access-qcr8r\") pod \"community-operators-zccpk\" (UID: \"e9de5654-bf61-401f-8a7b-da52db4c07cd\") " pod="openshift-marketplace/community-operators-zccpk" Dec 08 17:48:19 crc kubenswrapper[5116]: I1208 17:48:19.852218 5116 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-zccpk" Dec 08 17:48:20 crc kubenswrapper[5116]: I1208 17:48:20.090945 5116 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-zccpk"] Dec 08 17:48:20 crc kubenswrapper[5116]: I1208 17:48:20.093859 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rvv8x" event={"ID":"d51a780b-e856-4552-aa49-7f7b4b654d7e","Type":"ContainerStarted","Data":"6968f3414929686a0913da4e4e4886304a95a6b74b577a139c613c522fc16f46"} Dec 08 17:48:20 crc kubenswrapper[5116]: I1208 17:48:20.096081 5116 generic.go:358] "Generic (PLEG): container finished" podID="87eadfc6-4b2b-43ac-98a4-520a53bd6f94" containerID="7c5fbd7e2e48e3a1ba3a895d6e861298d0cb3ca36f27083f7179ac103b83545c" exitCode=0 Dec 08 17:48:20 crc kubenswrapper[5116]: I1208 17:48:20.096225 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-68bst" event={"ID":"87eadfc6-4b2b-43ac-98a4-520a53bd6f94","Type":"ContainerDied","Data":"7c5fbd7e2e48e3a1ba3a895d6e861298d0cb3ca36f27083f7179ac103b83545c"} Dec 08 17:48:20 crc kubenswrapper[5116]: I1208 17:48:20.099167 5116 generic.go:358] "Generic (PLEG): container 
finished" podID="4c36a4dd-ab49-4395-a54d-452e884cbb78" containerID="729a65918e5042b7147b9d74c69556437c8b0eef64a26b2013ed2eb9ca3315f2" exitCode=0 Dec 08 17:48:20 crc kubenswrapper[5116]: I1208 17:48:20.099319 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-m5vg7" event={"ID":"4c36a4dd-ab49-4395-a54d-452e884cbb78","Type":"ContainerDied","Data":"729a65918e5042b7147b9d74c69556437c8b0eef64a26b2013ed2eb9ca3315f2"} Dec 08 17:48:20 crc kubenswrapper[5116]: I1208 17:48:20.099356 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-m5vg7" event={"ID":"4c36a4dd-ab49-4395-a54d-452e884cbb78","Type":"ContainerStarted","Data":"7336da07d5e9d75f2199d5bb584dbb1de20cf72579b8d8a93796ca3ae65ddb9c"} Dec 08 17:48:20 crc kubenswrapper[5116]: W1208 17:48:20.099963 5116 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode9de5654_bf61_401f_8a7b_da52db4c07cd.slice/crio-47b5ed4acf2e84a5cb0f9259cce777566e62f0ec371cdb6de09bf2a58e8735c5 WatchSource:0}: Error finding container 47b5ed4acf2e84a5cb0f9259cce777566e62f0ec371cdb6de09bf2a58e8735c5: Status 404 returned error can't find the container with id 47b5ed4acf2e84a5cb0f9259cce777566e62f0ec371cdb6de09bf2a58e8735c5 Dec 08 17:48:20 crc kubenswrapper[5116]: I1208 17:48:20.116005 5116 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-rvv8x" podStartSLOduration=2.531824135 podStartE2EDuration="3.115939098s" podCreationTimestamp="2025-12-08 17:48:17 +0000 UTC" firstStartedPulling="2025-12-08 17:48:18.064723221 +0000 UTC m=+367.861846475" lastFinishedPulling="2025-12-08 17:48:18.648838194 +0000 UTC m=+368.445961438" observedRunningTime="2025-12-08 17:48:20.110756304 +0000 UTC m=+369.907879568" watchObservedRunningTime="2025-12-08 17:48:20.115939098 +0000 UTC m=+369.913062332" Dec 08 17:48:21 crc kubenswrapper[5116]: I1208 
17:48:21.107294 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-m5vg7" event={"ID":"4c36a4dd-ab49-4395-a54d-452e884cbb78","Type":"ContainerStarted","Data":"f0e1bd00ed59310424b92be1d7efc40b464b4f405dc5efd847a7fcda96da605e"} Dec 08 17:48:21 crc kubenswrapper[5116]: I1208 17:48:21.111224 5116 generic.go:358] "Generic (PLEG): container finished" podID="e9de5654-bf61-401f-8a7b-da52db4c07cd" containerID="0455764fc273826abd2e3005733fc8172fb0f272c09f8584f84a8a5111d22637" exitCode=0 Dec 08 17:48:21 crc kubenswrapper[5116]: I1208 17:48:21.111277 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-zccpk" event={"ID":"e9de5654-bf61-401f-8a7b-da52db4c07cd","Type":"ContainerDied","Data":"0455764fc273826abd2e3005733fc8172fb0f272c09f8584f84a8a5111d22637"} Dec 08 17:48:21 crc kubenswrapper[5116]: I1208 17:48:21.111321 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-zccpk" event={"ID":"e9de5654-bf61-401f-8a7b-da52db4c07cd","Type":"ContainerStarted","Data":"47b5ed4acf2e84a5cb0f9259cce777566e62f0ec371cdb6de09bf2a58e8735c5"} Dec 08 17:48:21 crc kubenswrapper[5116]: I1208 17:48:21.127990 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-68bst" event={"ID":"87eadfc6-4b2b-43ac-98a4-520a53bd6f94","Type":"ContainerStarted","Data":"20a7d50a995c38cbb99b143f1247ef3f308c61ca35344323e721849aa27e997e"} Dec 08 17:48:21 crc kubenswrapper[5116]: I1208 17:48:21.159068 5116 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-68bst" podStartSLOduration=4.42739304 podStartE2EDuration="5.159047988s" podCreationTimestamp="2025-12-08 17:48:16 +0000 UTC" firstStartedPulling="2025-12-08 17:48:18.068311424 +0000 UTC m=+367.865434658" lastFinishedPulling="2025-12-08 17:48:18.799966362 +0000 UTC m=+368.597089606" observedRunningTime="2025-12-08 
17:48:21.156106182 +0000 UTC m=+370.953229416" watchObservedRunningTime="2025-12-08 17:48:21.159047988 +0000 UTC m=+370.956171232" Dec 08 17:48:22 crc kubenswrapper[5116]: I1208 17:48:22.135109 5116 generic.go:358] "Generic (PLEG): container finished" podID="4c36a4dd-ab49-4395-a54d-452e884cbb78" containerID="f0e1bd00ed59310424b92be1d7efc40b464b4f405dc5efd847a7fcda96da605e" exitCode=0 Dec 08 17:48:22 crc kubenswrapper[5116]: I1208 17:48:22.135164 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-m5vg7" event={"ID":"4c36a4dd-ab49-4395-a54d-452e884cbb78","Type":"ContainerDied","Data":"f0e1bd00ed59310424b92be1d7efc40b464b4f405dc5efd847a7fcda96da605e"} Dec 08 17:48:22 crc kubenswrapper[5116]: I1208 17:48:22.138875 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-zccpk" event={"ID":"e9de5654-bf61-401f-8a7b-da52db4c07cd","Type":"ContainerStarted","Data":"4bf8a39a18f2c89bb5b49532a8f64a172d03e9c5423b6040cbcf1d9239f8de93"} Dec 08 17:48:23 crc kubenswrapper[5116]: I1208 17:48:23.147042 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-m5vg7" event={"ID":"4c36a4dd-ab49-4395-a54d-452e884cbb78","Type":"ContainerStarted","Data":"a52aec32a446d70666ef3b12eb9c92753f89ddff8f8f0c154511c79abffc1ac4"} Dec 08 17:48:23 crc kubenswrapper[5116]: I1208 17:48:23.150823 5116 generic.go:358] "Generic (PLEG): container finished" podID="e9de5654-bf61-401f-8a7b-da52db4c07cd" containerID="4bf8a39a18f2c89bb5b49532a8f64a172d03e9c5423b6040cbcf1d9239f8de93" exitCode=0 Dec 08 17:48:23 crc kubenswrapper[5116]: I1208 17:48:23.150972 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-zccpk" event={"ID":"e9de5654-bf61-401f-8a7b-da52db4c07cd","Type":"ContainerDied","Data":"4bf8a39a18f2c89bb5b49532a8f64a172d03e9c5423b6040cbcf1d9239f8de93"} Dec 08 17:48:23 crc kubenswrapper[5116]: I1208 17:48:23.163833 5116 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-m5vg7" podStartSLOduration=4.549881805 podStartE2EDuration="5.163814008s" podCreationTimestamp="2025-12-08 17:48:18 +0000 UTC" firstStartedPulling="2025-12-08 17:48:20.100647774 +0000 UTC m=+369.897771008" lastFinishedPulling="2025-12-08 17:48:20.714579977 +0000 UTC m=+370.511703211" observedRunningTime="2025-12-08 17:48:23.162070603 +0000 UTC m=+372.959193847" watchObservedRunningTime="2025-12-08 17:48:23.163814008 +0000 UTC m=+372.960937252" Dec 08 17:48:24 crc kubenswrapper[5116]: I1208 17:48:24.160936 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-zccpk" event={"ID":"e9de5654-bf61-401f-8a7b-da52db4c07cd","Type":"ContainerStarted","Data":"20f1a67e0fee0f87e74e7d2d0ed35912cd17a50ae1d0b79eedbc779e303357f8"} Dec 08 17:48:26 crc kubenswrapper[5116]: I1208 17:48:26.851603 5116 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-68bst" Dec 08 17:48:26 crc kubenswrapper[5116]: I1208 17:48:26.852859 5116 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/certified-operators-68bst" Dec 08 17:48:26 crc kubenswrapper[5116]: I1208 17:48:26.902999 5116 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-68bst" Dec 08 17:48:26 crc kubenswrapper[5116]: I1208 17:48:26.924665 5116 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-zccpk" podStartSLOduration=7.405128306 podStartE2EDuration="7.924625113s" podCreationTimestamp="2025-12-08 17:48:19 +0000 UTC" firstStartedPulling="2025-12-08 17:48:21.112444137 +0000 UTC m=+370.909567371" lastFinishedPulling="2025-12-08 17:48:21.631940944 +0000 UTC m=+371.429064178" observedRunningTime="2025-12-08 17:48:24.196058657 +0000 UTC 
m=+373.993181891" watchObservedRunningTime="2025-12-08 17:48:26.924625113 +0000 UTC m=+376.721748347" Dec 08 17:48:27 crc kubenswrapper[5116]: I1208 17:48:27.227987 5116 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-68bst" Dec 08 17:48:27 crc kubenswrapper[5116]: I1208 17:48:27.481279 5116 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/redhat-marketplace-rvv8x" Dec 08 17:48:27 crc kubenswrapper[5116]: I1208 17:48:27.481367 5116 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-rvv8x" Dec 08 17:48:27 crc kubenswrapper[5116]: I1208 17:48:27.537318 5116 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-rvv8x" Dec 08 17:48:28 crc kubenswrapper[5116]: I1208 17:48:28.230797 5116 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-rvv8x" Dec 08 17:48:29 crc kubenswrapper[5116]: I1208 17:48:29.294315 5116 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/redhat-operators-m5vg7" Dec 08 17:48:29 crc kubenswrapper[5116]: I1208 17:48:29.294474 5116 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-m5vg7" Dec 08 17:48:29 crc kubenswrapper[5116]: I1208 17:48:29.333130 5116 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-m5vg7" Dec 08 17:48:29 crc kubenswrapper[5116]: I1208 17:48:29.853726 5116 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-zccpk" Dec 08 17:48:29 crc kubenswrapper[5116]: I1208 17:48:29.853797 5116 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" 
pod="openshift-marketplace/community-operators-zccpk" Dec 08 17:48:29 crc kubenswrapper[5116]: I1208 17:48:29.897208 5116 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-zccpk" Dec 08 17:48:30 crc kubenswrapper[5116]: I1208 17:48:30.245353 5116 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-m5vg7" Dec 08 17:48:30 crc kubenswrapper[5116]: I1208 17:48:30.247518 5116 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-zccpk" Dec 08 17:49:03 crc kubenswrapper[5116]: I1208 17:49:03.335150 5116 patch_prober.go:28] interesting pod/machine-config-daemon-frh5r container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 08 17:49:03 crc kubenswrapper[5116]: I1208 17:49:03.335757 5116 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-frh5r" podUID="f2e88345-fa91-4bb3-bd9d-a89a8293bffe" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 08 17:49:33 crc kubenswrapper[5116]: I1208 17:49:33.334813 5116 patch_prober.go:28] interesting pod/machine-config-daemon-frh5r container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 08 17:49:33 crc kubenswrapper[5116]: I1208 17:49:33.335474 5116 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-frh5r" podUID="f2e88345-fa91-4bb3-bd9d-a89a8293bffe" containerName="machine-config-daemon" 
probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 08 17:50:03 crc kubenswrapper[5116]: I1208 17:50:03.335013 5116 patch_prober.go:28] interesting pod/machine-config-daemon-frh5r container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 08 17:50:03 crc kubenswrapper[5116]: I1208 17:50:03.335852 5116 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-frh5r" podUID="f2e88345-fa91-4bb3-bd9d-a89a8293bffe" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 08 17:50:03 crc kubenswrapper[5116]: I1208 17:50:03.335925 5116 kubelet.go:2658] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-frh5r" Dec 08 17:50:03 crc kubenswrapper[5116]: I1208 17:50:03.336808 5116 kuberuntime_manager.go:1107] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"1a91b9acb99507a3f905fce585b6049ec73b8aafa949cf4edc05cfa9067a094d"} pod="openshift-machine-config-operator/machine-config-daemon-frh5r" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 08 17:50:03 crc kubenswrapper[5116]: I1208 17:50:03.336876 5116 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-frh5r" podUID="f2e88345-fa91-4bb3-bd9d-a89a8293bffe" containerName="machine-config-daemon" containerID="cri-o://1a91b9acb99507a3f905fce585b6049ec73b8aafa949cf4edc05cfa9067a094d" gracePeriod=600 Dec 08 17:50:03 crc kubenswrapper[5116]: I1208 17:50:03.929920 5116 generic.go:358] "Generic 
(PLEG): container finished" podID="f2e88345-fa91-4bb3-bd9d-a89a8293bffe" containerID="1a91b9acb99507a3f905fce585b6049ec73b8aafa949cf4edc05cfa9067a094d" exitCode=0 Dec 08 17:50:03 crc kubenswrapper[5116]: I1208 17:50:03.929978 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-frh5r" event={"ID":"f2e88345-fa91-4bb3-bd9d-a89a8293bffe","Type":"ContainerDied","Data":"1a91b9acb99507a3f905fce585b6049ec73b8aafa949cf4edc05cfa9067a094d"} Dec 08 17:50:03 crc kubenswrapper[5116]: I1208 17:50:03.930857 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-frh5r" event={"ID":"f2e88345-fa91-4bb3-bd9d-a89a8293bffe","Type":"ContainerStarted","Data":"59b8c5ef8a713cbaa73c58d18697f0e38bfd14c8ab9516c06d59c3d9022ca4ac"} Dec 08 17:50:03 crc kubenswrapper[5116]: I1208 17:50:03.930889 5116 scope.go:117] "RemoveContainer" containerID="013afc9b2137a670c234a5ed56a7fe32904cb1f1413dc085edcb58fd24608faa" Dec 08 17:50:11 crc kubenswrapper[5116]: I1208 17:50:11.114793 5116 scope.go:117] "RemoveContainer" containerID="77b634b534403161f71aeeb1268e1a21f205b1da4c6de436cbe8adb4a8468bab" Dec 08 17:51:11 crc kubenswrapper[5116]: I1208 17:51:11.151884 5116 scope.go:117] "RemoveContainer" containerID="215fefac8458ae8270ff609404abd09f83a6bbb6ff9566788cc0593c35b8a6a5" Dec 08 17:51:11 crc kubenswrapper[5116]: I1208 17:51:11.176499 5116 scope.go:117] "RemoveContainer" containerID="cca6018c2d852a7073eebb6ac7153f8e5ae717084342cdcabd2e78e9b08e81c5" Dec 08 17:51:11 crc kubenswrapper[5116]: I1208 17:51:11.192036 5116 scope.go:117] "RemoveContainer" containerID="5f5d27d585f3b24c468fb5a08140f0cc3ae2343255ce80ab73146929f20d61fa" Dec 08 17:51:11 crc kubenswrapper[5116]: I1208 17:51:11.206780 5116 scope.go:117] "RemoveContainer" containerID="b252ad98e404697abd089451f00b97b67a8626fe90380d36b5d7f40ffcc146b9" Dec 08 17:52:03 crc kubenswrapper[5116]: I1208 17:52:03.335929 5116 patch_prober.go:28] 
interesting pod/machine-config-daemon-frh5r container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 08 17:52:03 crc kubenswrapper[5116]: I1208 17:52:03.339187 5116 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-frh5r" podUID="f2e88345-fa91-4bb3-bd9d-a89a8293bffe" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 08 17:52:10 crc kubenswrapper[5116]: I1208 17:52:10.914480 5116 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-controller-manager_controller-manager-766495d899-4wfjn_1e2dc1a4-c295-4f2b-b167-fdb40f1e6b0d/controller-manager/1.log" Dec 08 17:52:10 crc kubenswrapper[5116]: I1208 17:52:10.943926 5116 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-controller-manager_controller-manager-766495d899-4wfjn_1e2dc1a4-c295-4f2b-b167-fdb40f1e6b0d/controller-manager/1.log" Dec 08 17:52:10 crc kubenswrapper[5116]: I1208 17:52:10.959270 5116 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_9f0bc7fcb0822a2c13eb2d22cd8c0641/kube-controller-manager/0.log" Dec 08 17:52:10 crc kubenswrapper[5116]: I1208 17:52:10.990116 5116 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_9f0bc7fcb0822a2c13eb2d22cd8c0641/kube-controller-manager/0.log" Dec 08 17:52:13 crc kubenswrapper[5116]: I1208 17:52:13.856132 5116 ???:1] "http: TLS handshake error from 192.168.126.11:58102: no serving certificate available for the kubelet" Dec 08 17:52:33 crc kubenswrapper[5116]: I1208 17:52:33.335955 5116 patch_prober.go:28] interesting pod/machine-config-daemon-frh5r 
container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 08 17:52:33 crc kubenswrapper[5116]: I1208 17:52:33.337145 5116 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-frh5r" podUID="f2e88345-fa91-4bb3-bd9d-a89a8293bffe" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 08 17:53:03 crc kubenswrapper[5116]: I1208 17:53:03.335099 5116 patch_prober.go:28] interesting pod/machine-config-daemon-frh5r container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 08 17:53:03 crc kubenswrapper[5116]: I1208 17:53:03.335836 5116 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-frh5r" podUID="f2e88345-fa91-4bb3-bd9d-a89a8293bffe" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 08 17:53:03 crc kubenswrapper[5116]: I1208 17:53:03.335904 5116 kubelet.go:2658] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-frh5r" Dec 08 17:53:03 crc kubenswrapper[5116]: I1208 17:53:03.336395 5116 kuberuntime_manager.go:1107] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"59b8c5ef8a713cbaa73c58d18697f0e38bfd14c8ab9516c06d59c3d9022ca4ac"} pod="openshift-machine-config-operator/machine-config-daemon-frh5r" containerMessage="Container machine-config-daemon failed liveness probe, will be 
restarted" Dec 08 17:53:03 crc kubenswrapper[5116]: I1208 17:53:03.336471 5116 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-frh5r" podUID="f2e88345-fa91-4bb3-bd9d-a89a8293bffe" containerName="machine-config-daemon" containerID="cri-o://59b8c5ef8a713cbaa73c58d18697f0e38bfd14c8ab9516c06d59c3d9022ca4ac" gracePeriod=600 Dec 08 17:53:04 crc kubenswrapper[5116]: I1208 17:53:04.191641 5116 generic.go:358] "Generic (PLEG): container finished" podID="f2e88345-fa91-4bb3-bd9d-a89a8293bffe" containerID="59b8c5ef8a713cbaa73c58d18697f0e38bfd14c8ab9516c06d59c3d9022ca4ac" exitCode=0 Dec 08 17:53:04 crc kubenswrapper[5116]: I1208 17:53:04.191733 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-frh5r" event={"ID":"f2e88345-fa91-4bb3-bd9d-a89a8293bffe","Type":"ContainerDied","Data":"59b8c5ef8a713cbaa73c58d18697f0e38bfd14c8ab9516c06d59c3d9022ca4ac"} Dec 08 17:53:04 crc kubenswrapper[5116]: I1208 17:53:04.192378 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-frh5r" event={"ID":"f2e88345-fa91-4bb3-bd9d-a89a8293bffe","Type":"ContainerStarted","Data":"a8c9ea73a3f3a6aeb913be43880595d7b2a74416932fa51f8351d035f08e4a16"} Dec 08 17:53:04 crc kubenswrapper[5116]: I1208 17:53:04.192403 5116 scope.go:117] "RemoveContainer" containerID="1a91b9acb99507a3f905fce585b6049ec73b8aafa949cf4edc05cfa9067a094d" Dec 08 17:53:58 crc kubenswrapper[5116]: I1208 17:53:58.894952 5116 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-47dgv"] Dec 08 17:53:58 crc kubenswrapper[5116]: I1208 17:53:58.896385 5116 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-47dgv" podUID="2e7c2199-9693-42b9-9431-2b12b5abe1d1" containerName="kube-rbac-proxy" 
containerID="cri-o://82b63e8142ef2288109cc5c0882142beb2ca87da33214118d53db07a6595d8b6" gracePeriod=30 Dec 08 17:53:58 crc kubenswrapper[5116]: I1208 17:53:58.896382 5116 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-47dgv" podUID="2e7c2199-9693-42b9-9431-2b12b5abe1d1" containerName="ovnkube-cluster-manager" containerID="cri-o://0556a893d5b65176e1189928e57b34494255b34e87874a61c1b9302c0239d0f9" gracePeriod=30 Dec 08 17:53:59 crc kubenswrapper[5116]: I1208 17:53:59.122692 5116 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-zm56h"] Dec 08 17:53:59 crc kubenswrapper[5116]: I1208 17:53:59.123571 5116 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-zm56h" podUID="17cf2230-8798-4fb0-b89b-43901121fd07" containerName="ovn-controller" containerID="cri-o://fb5c408faae317c65e7ecee5588f0724734d49d1b4a3ae27e669fed7d9f1d56f" gracePeriod=30 Dec 08 17:53:59 crc kubenswrapper[5116]: I1208 17:53:59.123594 5116 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-zm56h" podUID="17cf2230-8798-4fb0-b89b-43901121fd07" containerName="kube-rbac-proxy-ovn-metrics" containerID="cri-o://df74991b9351b83a6afafbbed676c14a19d840f12be07cefd14b14577801ad8e" gracePeriod=30 Dec 08 17:53:59 crc kubenswrapper[5116]: I1208 17:53:59.123617 5116 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-zm56h" podUID="17cf2230-8798-4fb0-b89b-43901121fd07" containerName="nbdb" containerID="cri-o://8f9ddf9b40be2523a293c7a25dcd093d1064c0ea5ac00cfcab147d4e52c1b577" gracePeriod=30 Dec 08 17:53:59 crc kubenswrapper[5116]: I1208 17:53:59.123584 5116 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-zm56h" 
podUID="17cf2230-8798-4fb0-b89b-43901121fd07" containerName="sbdb" containerID="cri-o://d43248e58f8ef79a4ca47051d7abc1ebda6dfe4b3a3894c0a42cf2eadd863a40" gracePeriod=30 Dec 08 17:53:59 crc kubenswrapper[5116]: I1208 17:53:59.123637 5116 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-zm56h" podUID="17cf2230-8798-4fb0-b89b-43901121fd07" containerName="northd" containerID="cri-o://c2d65cd5cbd25ba2aa8ee1ee5d3ee19de672253be1241f5dd6272ffbbcf572b9" gracePeriod=30 Dec 08 17:53:59 crc kubenswrapper[5116]: I1208 17:53:59.123651 5116 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-zm56h" podUID="17cf2230-8798-4fb0-b89b-43901121fd07" containerName="ovn-acl-logging" containerID="cri-o://396bbb6d70fc2a226fa82c18e9fef2e42c88aab08db97f7b8253ac1fedf99524" gracePeriod=30 Dec 08 17:53:59 crc kubenswrapper[5116]: I1208 17:53:59.123729 5116 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-zm56h" podUID="17cf2230-8798-4fb0-b89b-43901121fd07" containerName="kube-rbac-proxy-node" containerID="cri-o://bd3f2516ba42578f60aeff92565eb4eed9411fc7b0a498f5342dd7e9e4c0475c" gracePeriod=30 Dec 08 17:53:59 crc kubenswrapper[5116]: I1208 17:53:59.157892 5116 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-47dgv" Dec 08 17:53:59 crc kubenswrapper[5116]: I1208 17:53:59.161718 5116 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-zm56h" podUID="17cf2230-8798-4fb0-b89b-43901121fd07" containerName="ovnkube-controller" containerID="cri-o://0165060b7c7c730bc40c1f8e6a0e75452412dc4249378fb9fd54d4cfd49b82d6" gracePeriod=30 Dec 08 17:53:59 crc kubenswrapper[5116]: I1208 17:53:59.186808 5116 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-nxr9w"] Dec 08 17:53:59 crc kubenswrapper[5116]: I1208 17:53:59.187485 5116 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="2e7c2199-9693-42b9-9431-2b12b5abe1d1" containerName="ovnkube-cluster-manager" Dec 08 17:53:59 crc kubenswrapper[5116]: I1208 17:53:59.187510 5116 state_mem.go:107] "Deleted CPUSet assignment" podUID="2e7c2199-9693-42b9-9431-2b12b5abe1d1" containerName="ovnkube-cluster-manager" Dec 08 17:53:59 crc kubenswrapper[5116]: I1208 17:53:59.187547 5116 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="2e7c2199-9693-42b9-9431-2b12b5abe1d1" containerName="kube-rbac-proxy" Dec 08 17:53:59 crc kubenswrapper[5116]: I1208 17:53:59.187553 5116 state_mem.go:107] "Deleted CPUSet assignment" podUID="2e7c2199-9693-42b9-9431-2b12b5abe1d1" containerName="kube-rbac-proxy" Dec 08 17:53:59 crc kubenswrapper[5116]: I1208 17:53:59.187650 5116 memory_manager.go:356] "RemoveStaleState removing state" podUID="2e7c2199-9693-42b9-9431-2b12b5abe1d1" containerName="kube-rbac-proxy" Dec 08 17:53:59 crc kubenswrapper[5116]: I1208 17:53:59.187675 5116 memory_manager.go:356] "RemoveStaleState removing state" podUID="2e7c2199-9693-42b9-9431-2b12b5abe1d1" containerName="ovnkube-cluster-manager" Dec 08 17:53:59 crc kubenswrapper[5116]: I1208 17:53:59.193052 5116 util.go:30] "No sandbox for 
pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-nxr9w" Dec 08 17:53:59 crc kubenswrapper[5116]: I1208 17:53:59.244095 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vmvhd\" (UniqueName: \"kubernetes.io/projected/2e7c2199-9693-42b9-9431-2b12b5abe1d1-kube-api-access-vmvhd\") pod \"2e7c2199-9693-42b9-9431-2b12b5abe1d1\" (UID: \"2e7c2199-9693-42b9-9431-2b12b5abe1d1\") " Dec 08 17:53:59 crc kubenswrapper[5116]: I1208 17:53:59.244813 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/2e7c2199-9693-42b9-9431-2b12b5abe1d1-ovn-control-plane-metrics-cert\") pod \"2e7c2199-9693-42b9-9431-2b12b5abe1d1\" (UID: \"2e7c2199-9693-42b9-9431-2b12b5abe1d1\") " Dec 08 17:53:59 crc kubenswrapper[5116]: I1208 17:53:59.244858 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/2e7c2199-9693-42b9-9431-2b12b5abe1d1-env-overrides\") pod \"2e7c2199-9693-42b9-9431-2b12b5abe1d1\" (UID: \"2e7c2199-9693-42b9-9431-2b12b5abe1d1\") " Dec 08 17:53:59 crc kubenswrapper[5116]: I1208 17:53:59.245083 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/2e7c2199-9693-42b9-9431-2b12b5abe1d1-ovnkube-config\") pod \"2e7c2199-9693-42b9-9431-2b12b5abe1d1\" (UID: \"2e7c2199-9693-42b9-9431-2b12b5abe1d1\") " Dec 08 17:53:59 crc kubenswrapper[5116]: I1208 17:53:59.245665 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/03c53809-e242-4952-943b-cecd28ab49d4-env-overrides\") pod \"ovnkube-control-plane-97c9b6c48-nxr9w\" (UID: \"03c53809-e242-4952-943b-cecd28ab49d4\") " 
pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-nxr9w" Dec 08 17:53:59 crc kubenswrapper[5116]: I1208 17:53:59.245784 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/03c53809-e242-4952-943b-cecd28ab49d4-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-97c9b6c48-nxr9w\" (UID: \"03c53809-e242-4952-943b-cecd28ab49d4\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-nxr9w" Dec 08 17:53:59 crc kubenswrapper[5116]: I1208 17:53:59.246105 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bhsfg\" (UniqueName: \"kubernetes.io/projected/03c53809-e242-4952-943b-cecd28ab49d4-kube-api-access-bhsfg\") pod \"ovnkube-control-plane-97c9b6c48-nxr9w\" (UID: \"03c53809-e242-4952-943b-cecd28ab49d4\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-nxr9w" Dec 08 17:53:59 crc kubenswrapper[5116]: I1208 17:53:59.246154 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/03c53809-e242-4952-943b-cecd28ab49d4-ovnkube-config\") pod \"ovnkube-control-plane-97c9b6c48-nxr9w\" (UID: \"03c53809-e242-4952-943b-cecd28ab49d4\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-nxr9w" Dec 08 17:53:59 crc kubenswrapper[5116]: I1208 17:53:59.246644 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2e7c2199-9693-42b9-9431-2b12b5abe1d1-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "2e7c2199-9693-42b9-9431-2b12b5abe1d1" (UID: "2e7c2199-9693-42b9-9431-2b12b5abe1d1"). InnerVolumeSpecName "ovnkube-config". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 08 17:53:59 crc kubenswrapper[5116]: I1208 17:53:59.246791 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2e7c2199-9693-42b9-9431-2b12b5abe1d1-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "2e7c2199-9693-42b9-9431-2b12b5abe1d1" (UID: "2e7c2199-9693-42b9-9431-2b12b5abe1d1"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 08 17:53:59 crc kubenswrapper[5116]: I1208 17:53:59.255105 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2e7c2199-9693-42b9-9431-2b12b5abe1d1-kube-api-access-vmvhd" (OuterVolumeSpecName: "kube-api-access-vmvhd") pod "2e7c2199-9693-42b9-9431-2b12b5abe1d1" (UID: "2e7c2199-9693-42b9-9431-2b12b5abe1d1"). InnerVolumeSpecName "kube-api-access-vmvhd". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 08 17:53:59 crc kubenswrapper[5116]: I1208 17:53:59.256928 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2e7c2199-9693-42b9-9431-2b12b5abe1d1-ovn-control-plane-metrics-cert" (OuterVolumeSpecName: "ovn-control-plane-metrics-cert") pod "2e7c2199-9693-42b9-9431-2b12b5abe1d1" (UID: "2e7c2199-9693-42b9-9431-2b12b5abe1d1"). InnerVolumeSpecName "ovn-control-plane-metrics-cert". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 08 17:53:59 crc kubenswrapper[5116]: I1208 17:53:59.348455 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/03c53809-e242-4952-943b-cecd28ab49d4-env-overrides\") pod \"ovnkube-control-plane-97c9b6c48-nxr9w\" (UID: \"03c53809-e242-4952-943b-cecd28ab49d4\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-nxr9w" Dec 08 17:53:59 crc kubenswrapper[5116]: I1208 17:53:59.348559 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/03c53809-e242-4952-943b-cecd28ab49d4-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-97c9b6c48-nxr9w\" (UID: \"03c53809-e242-4952-943b-cecd28ab49d4\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-nxr9w" Dec 08 17:53:59 crc kubenswrapper[5116]: I1208 17:53:59.348603 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-bhsfg\" (UniqueName: \"kubernetes.io/projected/03c53809-e242-4952-943b-cecd28ab49d4-kube-api-access-bhsfg\") pod \"ovnkube-control-plane-97c9b6c48-nxr9w\" (UID: \"03c53809-e242-4952-943b-cecd28ab49d4\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-nxr9w" Dec 08 17:53:59 crc kubenswrapper[5116]: I1208 17:53:59.348636 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/03c53809-e242-4952-943b-cecd28ab49d4-ovnkube-config\") pod \"ovnkube-control-plane-97c9b6c48-nxr9w\" (UID: \"03c53809-e242-4952-943b-cecd28ab49d4\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-nxr9w" Dec 08 17:53:59 crc kubenswrapper[5116]: I1208 17:53:59.349710 5116 reconciler_common.go:299] "Volume detached for volume \"ovnkube-config\" (UniqueName: 
\"kubernetes.io/configmap/2e7c2199-9693-42b9-9431-2b12b5abe1d1-ovnkube-config\") on node \"crc\" DevicePath \"\"" Dec 08 17:53:59 crc kubenswrapper[5116]: I1208 17:53:59.349774 5116 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-vmvhd\" (UniqueName: \"kubernetes.io/projected/2e7c2199-9693-42b9-9431-2b12b5abe1d1-kube-api-access-vmvhd\") on node \"crc\" DevicePath \"\"" Dec 08 17:53:59 crc kubenswrapper[5116]: I1208 17:53:59.349793 5116 reconciler_common.go:299] "Volume detached for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/2e7c2199-9693-42b9-9431-2b12b5abe1d1-ovn-control-plane-metrics-cert\") on node \"crc\" DevicePath \"\"" Dec 08 17:53:59 crc kubenswrapper[5116]: I1208 17:53:59.349806 5116 reconciler_common.go:299] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/2e7c2199-9693-42b9-9431-2b12b5abe1d1-env-overrides\") on node \"crc\" DevicePath \"\"" Dec 08 17:53:59 crc kubenswrapper[5116]: I1208 17:53:59.351007 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/03c53809-e242-4952-943b-cecd28ab49d4-ovnkube-config\") pod \"ovnkube-control-plane-97c9b6c48-nxr9w\" (UID: \"03c53809-e242-4952-943b-cecd28ab49d4\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-nxr9w" Dec 08 17:53:59 crc kubenswrapper[5116]: I1208 17:53:59.351142 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/03c53809-e242-4952-943b-cecd28ab49d4-env-overrides\") pod \"ovnkube-control-plane-97c9b6c48-nxr9w\" (UID: \"03c53809-e242-4952-943b-cecd28ab49d4\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-nxr9w" Dec 08 17:53:59 crc kubenswrapper[5116]: I1208 17:53:59.356550 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ovn-control-plane-metrics-cert\" (UniqueName: 
\"kubernetes.io/secret/03c53809-e242-4952-943b-cecd28ab49d4-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-97c9b6c48-nxr9w\" (UID: \"03c53809-e242-4952-943b-cecd28ab49d4\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-nxr9w" Dec 08 17:53:59 crc kubenswrapper[5116]: I1208 17:53:59.370371 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-bhsfg\" (UniqueName: \"kubernetes.io/projected/03c53809-e242-4952-943b-cecd28ab49d4-kube-api-access-bhsfg\") pod \"ovnkube-control-plane-97c9b6c48-nxr9w\" (UID: \"03c53809-e242-4952-943b-cecd28ab49d4\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-nxr9w" Dec 08 17:53:59 crc kubenswrapper[5116]: I1208 17:53:59.563813 5116 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-zm56h_17cf2230-8798-4fb0-b89b-43901121fd07/ovn-acl-logging/0.log" Dec 08 17:53:59 crc kubenswrapper[5116]: I1208 17:53:59.564523 5116 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-zm56h_17cf2230-8798-4fb0-b89b-43901121fd07/ovn-controller/0.log" Dec 08 17:53:59 crc kubenswrapper[5116]: I1208 17:53:59.565110 5116 generic.go:358] "Generic (PLEG): container finished" podID="17cf2230-8798-4fb0-b89b-43901121fd07" containerID="0165060b7c7c730bc40c1f8e6a0e75452412dc4249378fb9fd54d4cfd49b82d6" exitCode=0 Dec 08 17:53:59 crc kubenswrapper[5116]: I1208 17:53:59.565156 5116 generic.go:358] "Generic (PLEG): container finished" podID="17cf2230-8798-4fb0-b89b-43901121fd07" containerID="d43248e58f8ef79a4ca47051d7abc1ebda6dfe4b3a3894c0a42cf2eadd863a40" exitCode=0 Dec 08 17:53:59 crc kubenswrapper[5116]: I1208 17:53:59.565169 5116 generic.go:358] "Generic (PLEG): container finished" podID="17cf2230-8798-4fb0-b89b-43901121fd07" containerID="8f9ddf9b40be2523a293c7a25dcd093d1064c0ea5ac00cfcab147d4e52c1b577" exitCode=0 Dec 08 17:53:59 crc kubenswrapper[5116]: I1208 17:53:59.565180 5116 generic.go:358] 
"Generic (PLEG): container finished" podID="17cf2230-8798-4fb0-b89b-43901121fd07" containerID="c2d65cd5cbd25ba2aa8ee1ee5d3ee19de672253be1241f5dd6272ffbbcf572b9" exitCode=0 Dec 08 17:53:59 crc kubenswrapper[5116]: I1208 17:53:59.565189 5116 generic.go:358] "Generic (PLEG): container finished" podID="17cf2230-8798-4fb0-b89b-43901121fd07" containerID="df74991b9351b83a6afafbbed676c14a19d840f12be07cefd14b14577801ad8e" exitCode=0 Dec 08 17:53:59 crc kubenswrapper[5116]: I1208 17:53:59.565196 5116 generic.go:358] "Generic (PLEG): container finished" podID="17cf2230-8798-4fb0-b89b-43901121fd07" containerID="bd3f2516ba42578f60aeff92565eb4eed9411fc7b0a498f5342dd7e9e4c0475c" exitCode=0 Dec 08 17:53:59 crc kubenswrapper[5116]: I1208 17:53:59.565205 5116 generic.go:358] "Generic (PLEG): container finished" podID="17cf2230-8798-4fb0-b89b-43901121fd07" containerID="396bbb6d70fc2a226fa82c18e9fef2e42c88aab08db97f7b8253ac1fedf99524" exitCode=143 Dec 08 17:53:59 crc kubenswrapper[5116]: I1208 17:53:59.565205 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zm56h" event={"ID":"17cf2230-8798-4fb0-b89b-43901121fd07","Type":"ContainerDied","Data":"0165060b7c7c730bc40c1f8e6a0e75452412dc4249378fb9fd54d4cfd49b82d6"} Dec 08 17:53:59 crc kubenswrapper[5116]: I1208 17:53:59.565287 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zm56h" event={"ID":"17cf2230-8798-4fb0-b89b-43901121fd07","Type":"ContainerDied","Data":"d43248e58f8ef79a4ca47051d7abc1ebda6dfe4b3a3894c0a42cf2eadd863a40"} Dec 08 17:53:59 crc kubenswrapper[5116]: I1208 17:53:59.565305 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zm56h" event={"ID":"17cf2230-8798-4fb0-b89b-43901121fd07","Type":"ContainerDied","Data":"8f9ddf9b40be2523a293c7a25dcd093d1064c0ea5ac00cfcab147d4e52c1b577"} Dec 08 17:53:59 crc kubenswrapper[5116]: I1208 17:53:59.565318 5116 kubelet.go:2569] "SyncLoop (PLEG): 
event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zm56h" event={"ID":"17cf2230-8798-4fb0-b89b-43901121fd07","Type":"ContainerDied","Data":"c2d65cd5cbd25ba2aa8ee1ee5d3ee19de672253be1241f5dd6272ffbbcf572b9"} Dec 08 17:53:59 crc kubenswrapper[5116]: I1208 17:53:59.565329 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zm56h" event={"ID":"17cf2230-8798-4fb0-b89b-43901121fd07","Type":"ContainerDied","Data":"df74991b9351b83a6afafbbed676c14a19d840f12be07cefd14b14577801ad8e"} Dec 08 17:53:59 crc kubenswrapper[5116]: I1208 17:53:59.565214 5116 generic.go:358] "Generic (PLEG): container finished" podID="17cf2230-8798-4fb0-b89b-43901121fd07" containerID="fb5c408faae317c65e7ecee5588f0724734d49d1b4a3ae27e669fed7d9f1d56f" exitCode=143 Dec 08 17:53:59 crc kubenswrapper[5116]: I1208 17:53:59.565340 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zm56h" event={"ID":"17cf2230-8798-4fb0-b89b-43901121fd07","Type":"ContainerDied","Data":"bd3f2516ba42578f60aeff92565eb4eed9411fc7b0a498f5342dd7e9e4c0475c"} Dec 08 17:53:59 crc kubenswrapper[5116]: I1208 17:53:59.565473 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zm56h" event={"ID":"17cf2230-8798-4fb0-b89b-43901121fd07","Type":"ContainerDied","Data":"396bbb6d70fc2a226fa82c18e9fef2e42c88aab08db97f7b8253ac1fedf99524"} Dec 08 17:53:59 crc kubenswrapper[5116]: I1208 17:53:59.565500 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zm56h" event={"ID":"17cf2230-8798-4fb0-b89b-43901121fd07","Type":"ContainerDied","Data":"fb5c408faae317c65e7ecee5588f0724734d49d1b4a3ae27e669fed7d9f1d56f"} Dec 08 17:53:59 crc kubenswrapper[5116]: I1208 17:53:59.567641 5116 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-8wqqf_84b46b92-c78c-44c8-a27b-4a20c47acd75/kube-multus/0.log" Dec 08 17:53:59 crc kubenswrapper[5116]: I1208 
17:53:59.567702 5116 generic.go:358] "Generic (PLEG): container finished" podID="84b46b92-c78c-44c8-a27b-4a20c47acd75" containerID="44ea695962c16bd4fd8ec8a0d9643b6428845ee38438b9ab3c2ae7068995d383" exitCode=2 Dec 08 17:53:59 crc kubenswrapper[5116]: I1208 17:53:59.567797 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-8wqqf" event={"ID":"84b46b92-c78c-44c8-a27b-4a20c47acd75","Type":"ContainerDied","Data":"44ea695962c16bd4fd8ec8a0d9643b6428845ee38438b9ab3c2ae7068995d383"} Dec 08 17:53:59 crc kubenswrapper[5116]: I1208 17:53:59.568528 5116 scope.go:117] "RemoveContainer" containerID="44ea695962c16bd4fd8ec8a0d9643b6428845ee38438b9ab3c2ae7068995d383" Dec 08 17:53:59 crc kubenswrapper[5116]: I1208 17:53:59.581524 5116 generic.go:358] "Generic (PLEG): container finished" podID="2e7c2199-9693-42b9-9431-2b12b5abe1d1" containerID="0556a893d5b65176e1189928e57b34494255b34e87874a61c1b9302c0239d0f9" exitCode=0 Dec 08 17:53:59 crc kubenswrapper[5116]: I1208 17:53:59.581622 5116 generic.go:358] "Generic (PLEG): container finished" podID="2e7c2199-9693-42b9-9431-2b12b5abe1d1" containerID="82b63e8142ef2288109cc5c0882142beb2ca87da33214118d53db07a6595d8b6" exitCode=0 Dec 08 17:53:59 crc kubenswrapper[5116]: I1208 17:53:59.581646 5116 provider.go:93] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Dec 08 17:53:59 crc kubenswrapper[5116]: I1208 17:53:59.581661 5116 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-nxr9w" Dec 08 17:53:59 crc kubenswrapper[5116]: I1208 17:53:59.581647 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-47dgv" event={"ID":"2e7c2199-9693-42b9-9431-2b12b5abe1d1","Type":"ContainerDied","Data":"0556a893d5b65176e1189928e57b34494255b34e87874a61c1b9302c0239d0f9"} Dec 08 17:53:59 crc kubenswrapper[5116]: I1208 17:53:59.582301 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-47dgv" event={"ID":"2e7c2199-9693-42b9-9431-2b12b5abe1d1","Type":"ContainerDied","Data":"82b63e8142ef2288109cc5c0882142beb2ca87da33214118d53db07a6595d8b6"} Dec 08 17:53:59 crc kubenswrapper[5116]: I1208 17:53:59.582338 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-47dgv" event={"ID":"2e7c2199-9693-42b9-9431-2b12b5abe1d1","Type":"ContainerDied","Data":"229ee2edfaf5abf4c8ad8eac873cdef03d0b78f04eea7094967a5d97365974ae"} Dec 08 17:53:59 crc kubenswrapper[5116]: I1208 17:53:59.582360 5116 scope.go:117] "RemoveContainer" containerID="0556a893d5b65176e1189928e57b34494255b34e87874a61c1b9302c0239d0f9" Dec 08 17:53:59 crc kubenswrapper[5116]: I1208 17:53:59.582644 5116 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-47dgv" Dec 08 17:53:59 crc kubenswrapper[5116]: I1208 17:53:59.608697 5116 scope.go:117] "RemoveContainer" containerID="82b63e8142ef2288109cc5c0882142beb2ca87da33214118d53db07a6595d8b6" Dec 08 17:53:59 crc kubenswrapper[5116]: I1208 17:53:59.629615 5116 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-47dgv"] Dec 08 17:53:59 crc kubenswrapper[5116]: I1208 17:53:59.636058 5116 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-47dgv"] Dec 08 17:53:59 crc kubenswrapper[5116]: I1208 17:53:59.638603 5116 scope.go:117] "RemoveContainer" containerID="0556a893d5b65176e1189928e57b34494255b34e87874a61c1b9302c0239d0f9" Dec 08 17:53:59 crc kubenswrapper[5116]: E1208 17:53:59.640574 5116 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0556a893d5b65176e1189928e57b34494255b34e87874a61c1b9302c0239d0f9\": container with ID starting with 0556a893d5b65176e1189928e57b34494255b34e87874a61c1b9302c0239d0f9 not found: ID does not exist" containerID="0556a893d5b65176e1189928e57b34494255b34e87874a61c1b9302c0239d0f9" Dec 08 17:53:59 crc kubenswrapper[5116]: I1208 17:53:59.640643 5116 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0556a893d5b65176e1189928e57b34494255b34e87874a61c1b9302c0239d0f9"} err="failed to get container status \"0556a893d5b65176e1189928e57b34494255b34e87874a61c1b9302c0239d0f9\": rpc error: code = NotFound desc = could not find container \"0556a893d5b65176e1189928e57b34494255b34e87874a61c1b9302c0239d0f9\": container with ID starting with 0556a893d5b65176e1189928e57b34494255b34e87874a61c1b9302c0239d0f9 not found: ID does not exist" Dec 08 17:53:59 crc kubenswrapper[5116]: I1208 17:53:59.640675 5116 scope.go:117] "RemoveContainer" 
containerID="82b63e8142ef2288109cc5c0882142beb2ca87da33214118d53db07a6595d8b6" Dec 08 17:53:59 crc kubenswrapper[5116]: E1208 17:53:59.642352 5116 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"82b63e8142ef2288109cc5c0882142beb2ca87da33214118d53db07a6595d8b6\": container with ID starting with 82b63e8142ef2288109cc5c0882142beb2ca87da33214118d53db07a6595d8b6 not found: ID does not exist" containerID="82b63e8142ef2288109cc5c0882142beb2ca87da33214118d53db07a6595d8b6" Dec 08 17:53:59 crc kubenswrapper[5116]: I1208 17:53:59.642398 5116 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"82b63e8142ef2288109cc5c0882142beb2ca87da33214118d53db07a6595d8b6"} err="failed to get container status \"82b63e8142ef2288109cc5c0882142beb2ca87da33214118d53db07a6595d8b6\": rpc error: code = NotFound desc = could not find container \"82b63e8142ef2288109cc5c0882142beb2ca87da33214118d53db07a6595d8b6\": container with ID starting with 82b63e8142ef2288109cc5c0882142beb2ca87da33214118d53db07a6595d8b6 not found: ID does not exist" Dec 08 17:53:59 crc kubenswrapper[5116]: I1208 17:53:59.642440 5116 scope.go:117] "RemoveContainer" containerID="0556a893d5b65176e1189928e57b34494255b34e87874a61c1b9302c0239d0f9" Dec 08 17:53:59 crc kubenswrapper[5116]: I1208 17:53:59.642963 5116 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0556a893d5b65176e1189928e57b34494255b34e87874a61c1b9302c0239d0f9"} err="failed to get container status \"0556a893d5b65176e1189928e57b34494255b34e87874a61c1b9302c0239d0f9\": rpc error: code = NotFound desc = could not find container \"0556a893d5b65176e1189928e57b34494255b34e87874a61c1b9302c0239d0f9\": container with ID starting with 0556a893d5b65176e1189928e57b34494255b34e87874a61c1b9302c0239d0f9 not found: ID does not exist" Dec 08 17:53:59 crc kubenswrapper[5116]: I1208 17:53:59.643071 5116 scope.go:117] 
"RemoveContainer" containerID="82b63e8142ef2288109cc5c0882142beb2ca87da33214118d53db07a6595d8b6" Dec 08 17:53:59 crc kubenswrapper[5116]: I1208 17:53:59.643613 5116 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"82b63e8142ef2288109cc5c0882142beb2ca87da33214118d53db07a6595d8b6"} err="failed to get container status \"82b63e8142ef2288109cc5c0882142beb2ca87da33214118d53db07a6595d8b6\": rpc error: code = NotFound desc = could not find container \"82b63e8142ef2288109cc5c0882142beb2ca87da33214118d53db07a6595d8b6\": container with ID starting with 82b63e8142ef2288109cc5c0882142beb2ca87da33214118d53db07a6595d8b6 not found: ID does not exist" Dec 08 17:53:59 crc kubenswrapper[5116]: I1208 17:53:59.841115 5116 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-zm56h_17cf2230-8798-4fb0-b89b-43901121fd07/ovn-acl-logging/0.log" Dec 08 17:53:59 crc kubenswrapper[5116]: I1208 17:53:59.842958 5116 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-zm56h_17cf2230-8798-4fb0-b89b-43901121fd07/ovn-controller/0.log" Dec 08 17:53:59 crc kubenswrapper[5116]: I1208 17:53:59.843964 5116 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-zm56h" Dec 08 17:53:59 crc kubenswrapper[5116]: I1208 17:53:59.957716 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/17cf2230-8798-4fb0-b89b-43901121fd07-ovnkube-config\") pod \"17cf2230-8798-4fb0-b89b-43901121fd07\" (UID: \"17cf2230-8798-4fb0-b89b-43901121fd07\") " Dec 08 17:53:59 crc kubenswrapper[5116]: I1208 17:53:59.958157 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/17cf2230-8798-4fb0-b89b-43901121fd07-host-cni-bin\") pod \"17cf2230-8798-4fb0-b89b-43901121fd07\" (UID: \"17cf2230-8798-4fb0-b89b-43901121fd07\") " Dec 08 17:53:59 crc kubenswrapper[5116]: I1208 17:53:59.958175 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/17cf2230-8798-4fb0-b89b-43901121fd07-host-slash\") pod \"17cf2230-8798-4fb0-b89b-43901121fd07\" (UID: \"17cf2230-8798-4fb0-b89b-43901121fd07\") " Dec 08 17:53:59 crc kubenswrapper[5116]: I1208 17:53:59.958285 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/17cf2230-8798-4fb0-b89b-43901121fd07-host-cni-bin" (OuterVolumeSpecName: "host-cni-bin") pod "17cf2230-8798-4fb0-b89b-43901121fd07" (UID: "17cf2230-8798-4fb0-b89b-43901121fd07"). InnerVolumeSpecName "host-cni-bin". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Dec 08 17:53:59 crc kubenswrapper[5116]: I1208 17:53:59.958339 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/17cf2230-8798-4fb0-b89b-43901121fd07-var-lib-openvswitch\") pod \"17cf2230-8798-4fb0-b89b-43901121fd07\" (UID: \"17cf2230-8798-4fb0-b89b-43901121fd07\") " Dec 08 17:53:59 crc kubenswrapper[5116]: I1208 17:53:59.958389 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/17cf2230-8798-4fb0-b89b-43901121fd07-log-socket\") pod \"17cf2230-8798-4fb0-b89b-43901121fd07\" (UID: \"17cf2230-8798-4fb0-b89b-43901121fd07\") " Dec 08 17:53:59 crc kubenswrapper[5116]: I1208 17:53:59.958416 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/17cf2230-8798-4fb0-b89b-43901121fd07-run-ovn\") pod \"17cf2230-8798-4fb0-b89b-43901121fd07\" (UID: \"17cf2230-8798-4fb0-b89b-43901121fd07\") " Dec 08 17:53:59 crc kubenswrapper[5116]: I1208 17:53:59.958426 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/17cf2230-8798-4fb0-b89b-43901121fd07-host-slash" (OuterVolumeSpecName: "host-slash") pod "17cf2230-8798-4fb0-b89b-43901121fd07" (UID: "17cf2230-8798-4fb0-b89b-43901121fd07"). InnerVolumeSpecName "host-slash". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Dec 08 17:53:59 crc kubenswrapper[5116]: I1208 17:53:59.958502 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/17cf2230-8798-4fb0-b89b-43901121fd07-host-run-ovn-kubernetes" (OuterVolumeSpecName: "host-run-ovn-kubernetes") pod "17cf2230-8798-4fb0-b89b-43901121fd07" (UID: "17cf2230-8798-4fb0-b89b-43901121fd07"). InnerVolumeSpecName "host-run-ovn-kubernetes". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Dec 08 17:53:59 crc kubenswrapper[5116]: I1208 17:53:59.958453 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/17cf2230-8798-4fb0-b89b-43901121fd07-var-lib-openvswitch" (OuterVolumeSpecName: "var-lib-openvswitch") pod "17cf2230-8798-4fb0-b89b-43901121fd07" (UID: "17cf2230-8798-4fb0-b89b-43901121fd07"). InnerVolumeSpecName "var-lib-openvswitch". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Dec 08 17:53:59 crc kubenswrapper[5116]: I1208 17:53:59.958471 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/17cf2230-8798-4fb0-b89b-43901121fd07-log-socket" (OuterVolumeSpecName: "log-socket") pod "17cf2230-8798-4fb0-b89b-43901121fd07" (UID: "17cf2230-8798-4fb0-b89b-43901121fd07"). InnerVolumeSpecName "log-socket". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Dec 08 17:53:59 crc kubenswrapper[5116]: I1208 17:53:59.958484 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/17cf2230-8798-4fb0-b89b-43901121fd07-host-run-ovn-kubernetes\") pod \"17cf2230-8798-4fb0-b89b-43901121fd07\" (UID: \"17cf2230-8798-4fb0-b89b-43901121fd07\") " Dec 08 17:53:59 crc kubenswrapper[5116]: I1208 17:53:59.958534 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/17cf2230-8798-4fb0-b89b-43901121fd07-run-ovn" (OuterVolumeSpecName: "run-ovn") pod "17cf2230-8798-4fb0-b89b-43901121fd07" (UID: "17cf2230-8798-4fb0-b89b-43901121fd07"). InnerVolumeSpecName "run-ovn". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Dec 08 17:53:59 crc kubenswrapper[5116]: I1208 17:53:59.958632 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8s9wf\" (UniqueName: \"kubernetes.io/projected/17cf2230-8798-4fb0-b89b-43901121fd07-kube-api-access-8s9wf\") pod \"17cf2230-8798-4fb0-b89b-43901121fd07\" (UID: \"17cf2230-8798-4fb0-b89b-43901121fd07\") " Dec 08 17:53:59 crc kubenswrapper[5116]: I1208 17:53:59.958659 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/17cf2230-8798-4fb0-b89b-43901121fd07-host-var-lib-cni-networks-ovn-kubernetes\") pod \"17cf2230-8798-4fb0-b89b-43901121fd07\" (UID: \"17cf2230-8798-4fb0-b89b-43901121fd07\") " Dec 08 17:53:59 crc kubenswrapper[5116]: I1208 17:53:59.959356 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/17cf2230-8798-4fb0-b89b-43901121fd07-ovnkube-script-lib\") pod \"17cf2230-8798-4fb0-b89b-43901121fd07\" (UID: \"17cf2230-8798-4fb0-b89b-43901121fd07\") " Dec 08 17:53:59 crc kubenswrapper[5116]: I1208 17:53:59.958730 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/17cf2230-8798-4fb0-b89b-43901121fd07-host-var-lib-cni-networks-ovn-kubernetes" (OuterVolumeSpecName: "host-var-lib-cni-networks-ovn-kubernetes") pod "17cf2230-8798-4fb0-b89b-43901121fd07" (UID: "17cf2230-8798-4fb0-b89b-43901121fd07"). InnerVolumeSpecName "host-var-lib-cni-networks-ovn-kubernetes". 
PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Dec 08 17:53:59 crc kubenswrapper[5116]: I1208 17:53:59.959385 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/17cf2230-8798-4fb0-b89b-43901121fd07-run-openvswitch\") pod \"17cf2230-8798-4fb0-b89b-43901121fd07\" (UID: \"17cf2230-8798-4fb0-b89b-43901121fd07\") "
Dec 08 17:53:59 crc kubenswrapper[5116]: I1208 17:53:59.959034 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/17cf2230-8798-4fb0-b89b-43901121fd07-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "17cf2230-8798-4fb0-b89b-43901121fd07" (UID: "17cf2230-8798-4fb0-b89b-43901121fd07"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Dec 08 17:53:59 crc kubenswrapper[5116]: I1208 17:53:59.959430 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/17cf2230-8798-4fb0-b89b-43901121fd07-node-log\") pod \"17cf2230-8798-4fb0-b89b-43901121fd07\" (UID: \"17cf2230-8798-4fb0-b89b-43901121fd07\") "
Dec 08 17:53:59 crc kubenswrapper[5116]: I1208 17:53:59.959466 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/17cf2230-8798-4fb0-b89b-43901121fd07-etc-openvswitch\") pod \"17cf2230-8798-4fb0-b89b-43901121fd07\" (UID: \"17cf2230-8798-4fb0-b89b-43901121fd07\") "
Dec 08 17:53:59 crc kubenswrapper[5116]: I1208 17:53:59.959497 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/17cf2230-8798-4fb0-b89b-43901121fd07-env-overrides\") pod \"17cf2230-8798-4fb0-b89b-43901121fd07\" (UID: \"17cf2230-8798-4fb0-b89b-43901121fd07\") "
Dec 08 17:53:59 crc kubenswrapper[5116]: I1208 17:53:59.959533 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/17cf2230-8798-4fb0-b89b-43901121fd07-host-cni-netd\") pod \"17cf2230-8798-4fb0-b89b-43901121fd07\" (UID: \"17cf2230-8798-4fb0-b89b-43901121fd07\") "
Dec 08 17:53:59 crc kubenswrapper[5116]: I1208 17:53:59.959550 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/17cf2230-8798-4fb0-b89b-43901121fd07-ovn-node-metrics-cert\") pod \"17cf2230-8798-4fb0-b89b-43901121fd07\" (UID: \"17cf2230-8798-4fb0-b89b-43901121fd07\") "
Dec 08 17:53:59 crc kubenswrapper[5116]: I1208 17:53:59.959573 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/17cf2230-8798-4fb0-b89b-43901121fd07-systemd-units\") pod \"17cf2230-8798-4fb0-b89b-43901121fd07\" (UID: \"17cf2230-8798-4fb0-b89b-43901121fd07\") "
Dec 08 17:53:59 crc kubenswrapper[5116]: I1208 17:53:59.959596 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/17cf2230-8798-4fb0-b89b-43901121fd07-host-run-netns\") pod \"17cf2230-8798-4fb0-b89b-43901121fd07\" (UID: \"17cf2230-8798-4fb0-b89b-43901121fd07\") "
Dec 08 17:53:59 crc kubenswrapper[5116]: I1208 17:53:59.959612 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/17cf2230-8798-4fb0-b89b-43901121fd07-host-kubelet\") pod \"17cf2230-8798-4fb0-b89b-43901121fd07\" (UID: \"17cf2230-8798-4fb0-b89b-43901121fd07\") "
Dec 08 17:53:59 crc kubenswrapper[5116]: I1208 17:53:59.959630 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/17cf2230-8798-4fb0-b89b-43901121fd07-run-systemd\") pod \"17cf2230-8798-4fb0-b89b-43901121fd07\" (UID: \"17cf2230-8798-4fb0-b89b-43901121fd07\") "
Dec 08 17:53:59 crc kubenswrapper[5116]: I1208 17:53:59.959959 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/17cf2230-8798-4fb0-b89b-43901121fd07-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "17cf2230-8798-4fb0-b89b-43901121fd07" (UID: "17cf2230-8798-4fb0-b89b-43901121fd07"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Dec 08 17:53:59 crc kubenswrapper[5116]: I1208 17:53:59.960003 5116 reconciler_common.go:299] "Volume detached for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/17cf2230-8798-4fb0-b89b-43901121fd07-host-var-lib-cni-networks-ovn-kubernetes\") on node \"crc\" DevicePath \"\""
Dec 08 17:53:59 crc kubenswrapper[5116]: I1208 17:53:59.960020 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/17cf2230-8798-4fb0-b89b-43901121fd07-host-cni-netd" (OuterVolumeSpecName: "host-cni-netd") pod "17cf2230-8798-4fb0-b89b-43901121fd07" (UID: "17cf2230-8798-4fb0-b89b-43901121fd07"). InnerVolumeSpecName "host-cni-netd". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Dec 08 17:53:59 crc kubenswrapper[5116]: I1208 17:53:59.960025 5116 reconciler_common.go:299] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/17cf2230-8798-4fb0-b89b-43901121fd07-ovnkube-config\") on node \"crc\" DevicePath \"\""
Dec 08 17:53:59 crc kubenswrapper[5116]: I1208 17:53:59.960068 5116 reconciler_common.go:299] "Volume detached for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/17cf2230-8798-4fb0-b89b-43901121fd07-host-cni-bin\") on node \"crc\" DevicePath \"\""
Dec 08 17:53:59 crc kubenswrapper[5116]: I1208 17:53:59.960107 5116 reconciler_common.go:299] "Volume detached for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/17cf2230-8798-4fb0-b89b-43901121fd07-host-slash\") on node \"crc\" DevicePath \"\""
Dec 08 17:53:59 crc kubenswrapper[5116]: I1208 17:53:59.960120 5116 reconciler_common.go:299] "Volume detached for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/17cf2230-8798-4fb0-b89b-43901121fd07-var-lib-openvswitch\") on node \"crc\" DevicePath \"\""
Dec 08 17:53:59 crc kubenswrapper[5116]: I1208 17:53:59.960133 5116 reconciler_common.go:299] "Volume detached for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/17cf2230-8798-4fb0-b89b-43901121fd07-log-socket\") on node \"crc\" DevicePath \"\""
Dec 08 17:53:59 crc kubenswrapper[5116]: I1208 17:53:59.960146 5116 reconciler_common.go:299] "Volume detached for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/17cf2230-8798-4fb0-b89b-43901121fd07-run-ovn\") on node \"crc\" DevicePath \"\""
Dec 08 17:53:59 crc kubenswrapper[5116]: I1208 17:53:59.960179 5116 reconciler_common.go:299] "Volume detached for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/17cf2230-8798-4fb0-b89b-43901121fd07-host-run-ovn-kubernetes\") on node \"crc\" DevicePath \"\""
Dec 08 17:53:59 crc kubenswrapper[5116]: I1208 17:53:59.960215 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/17cf2230-8798-4fb0-b89b-43901121fd07-run-openvswitch" (OuterVolumeSpecName: "run-openvswitch") pod "17cf2230-8798-4fb0-b89b-43901121fd07" (UID: "17cf2230-8798-4fb0-b89b-43901121fd07"). InnerVolumeSpecName "run-openvswitch". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Dec 08 17:53:59 crc kubenswrapper[5116]: I1208 17:53:59.960278 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/17cf2230-8798-4fb0-b89b-43901121fd07-node-log" (OuterVolumeSpecName: "node-log") pod "17cf2230-8798-4fb0-b89b-43901121fd07" (UID: "17cf2230-8798-4fb0-b89b-43901121fd07"). InnerVolumeSpecName "node-log". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Dec 08 17:53:59 crc kubenswrapper[5116]: I1208 17:53:59.960313 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/17cf2230-8798-4fb0-b89b-43901121fd07-etc-openvswitch" (OuterVolumeSpecName: "etc-openvswitch") pod "17cf2230-8798-4fb0-b89b-43901121fd07" (UID: "17cf2230-8798-4fb0-b89b-43901121fd07"). InnerVolumeSpecName "etc-openvswitch". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Dec 08 17:53:59 crc kubenswrapper[5116]: I1208 17:53:59.960722 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/17cf2230-8798-4fb0-b89b-43901121fd07-systemd-units" (OuterVolumeSpecName: "systemd-units") pod "17cf2230-8798-4fb0-b89b-43901121fd07" (UID: "17cf2230-8798-4fb0-b89b-43901121fd07"). InnerVolumeSpecName "systemd-units". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Dec 08 17:53:59 crc kubenswrapper[5116]: I1208 17:53:59.961106 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/17cf2230-8798-4fb0-b89b-43901121fd07-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "17cf2230-8798-4fb0-b89b-43901121fd07" (UID: "17cf2230-8798-4fb0-b89b-43901121fd07"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Dec 08 17:53:59 crc kubenswrapper[5116]: I1208 17:53:59.961183 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/17cf2230-8798-4fb0-b89b-43901121fd07-host-run-netns" (OuterVolumeSpecName: "host-run-netns") pod "17cf2230-8798-4fb0-b89b-43901121fd07" (UID: "17cf2230-8798-4fb0-b89b-43901121fd07"). InnerVolumeSpecName "host-run-netns". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Dec 08 17:53:59 crc kubenswrapper[5116]: I1208 17:53:59.961218 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/17cf2230-8798-4fb0-b89b-43901121fd07-host-kubelet" (OuterVolumeSpecName: "host-kubelet") pod "17cf2230-8798-4fb0-b89b-43901121fd07" (UID: "17cf2230-8798-4fb0-b89b-43901121fd07"). InnerVolumeSpecName "host-kubelet". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Dec 08 17:53:59 crc kubenswrapper[5116]: I1208 17:53:59.976497 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/17cf2230-8798-4fb0-b89b-43901121fd07-kube-api-access-8s9wf" (OuterVolumeSpecName: "kube-api-access-8s9wf") pod "17cf2230-8798-4fb0-b89b-43901121fd07" (UID: "17cf2230-8798-4fb0-b89b-43901121fd07"). InnerVolumeSpecName "kube-api-access-8s9wf". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Dec 08 17:53:59 crc kubenswrapper[5116]: I1208 17:53:59.977226 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/17cf2230-8798-4fb0-b89b-43901121fd07-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "17cf2230-8798-4fb0-b89b-43901121fd07" (UID: "17cf2230-8798-4fb0-b89b-43901121fd07"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Dec 08 17:53:59 crc kubenswrapper[5116]: I1208 17:53:59.992979 5116 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-m8bmb"]
Dec 08 17:53:59 crc kubenswrapper[5116]: I1208 17:53:59.993678 5116 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="17cf2230-8798-4fb0-b89b-43901121fd07" containerName="ovn-acl-logging"
Dec 08 17:53:59 crc kubenswrapper[5116]: I1208 17:53:59.993706 5116 state_mem.go:107] "Deleted CPUSet assignment" podUID="17cf2230-8798-4fb0-b89b-43901121fd07" containerName="ovn-acl-logging"
Dec 08 17:53:59 crc kubenswrapper[5116]: I1208 17:53:59.993732 5116 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="17cf2230-8798-4fb0-b89b-43901121fd07" containerName="ovn-controller"
Dec 08 17:53:59 crc kubenswrapper[5116]: I1208 17:53:59.993740 5116 state_mem.go:107] "Deleted CPUSet assignment" podUID="17cf2230-8798-4fb0-b89b-43901121fd07" containerName="ovn-controller"
Dec 08 17:53:59 crc kubenswrapper[5116]: I1208 17:53:59.993754 5116 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="17cf2230-8798-4fb0-b89b-43901121fd07" containerName="kube-rbac-proxy-node"
Dec 08 17:53:59 crc kubenswrapper[5116]: I1208 17:53:59.993762 5116 state_mem.go:107] "Deleted CPUSet assignment" podUID="17cf2230-8798-4fb0-b89b-43901121fd07" containerName="kube-rbac-proxy-node"
Dec 08 17:53:59 crc kubenswrapper[5116]: I1208 17:53:59.993777 5116 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="17cf2230-8798-4fb0-b89b-43901121fd07" containerName="kube-rbac-proxy-ovn-metrics"
Dec 08 17:53:59 crc kubenswrapper[5116]: I1208 17:53:59.993785 5116 state_mem.go:107] "Deleted CPUSet assignment" podUID="17cf2230-8798-4fb0-b89b-43901121fd07" containerName="kube-rbac-proxy-ovn-metrics"
Dec 08 17:53:59 crc kubenswrapper[5116]: I1208 17:53:59.993798 5116 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="17cf2230-8798-4fb0-b89b-43901121fd07" containerName="northd"
Dec 08 17:53:59 crc kubenswrapper[5116]: I1208 17:53:59.993808 5116 state_mem.go:107] "Deleted CPUSet assignment" podUID="17cf2230-8798-4fb0-b89b-43901121fd07" containerName="northd"
Dec 08 17:53:59 crc kubenswrapper[5116]: I1208 17:53:59.993819 5116 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="17cf2230-8798-4fb0-b89b-43901121fd07" containerName="ovnkube-controller"
Dec 08 17:53:59 crc kubenswrapper[5116]: I1208 17:53:59.993826 5116 state_mem.go:107] "Deleted CPUSet assignment" podUID="17cf2230-8798-4fb0-b89b-43901121fd07" containerName="ovnkube-controller"
Dec 08 17:53:59 crc kubenswrapper[5116]: I1208 17:53:59.993840 5116 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="17cf2230-8798-4fb0-b89b-43901121fd07" containerName="nbdb"
Dec 08 17:53:59 crc kubenswrapper[5116]: I1208 17:53:59.993846 5116 state_mem.go:107] "Deleted CPUSet assignment" podUID="17cf2230-8798-4fb0-b89b-43901121fd07" containerName="nbdb"
Dec 08 17:53:59 crc kubenswrapper[5116]: I1208 17:53:59.993858 5116 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="17cf2230-8798-4fb0-b89b-43901121fd07" containerName="sbdb"
Dec 08 17:53:59 crc kubenswrapper[5116]: I1208 17:53:59.993865 5116 state_mem.go:107] "Deleted CPUSet assignment" podUID="17cf2230-8798-4fb0-b89b-43901121fd07" containerName="sbdb"
Dec 08 17:53:59 crc kubenswrapper[5116]: I1208 17:53:59.993879 5116 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="17cf2230-8798-4fb0-b89b-43901121fd07" containerName="kubecfg-setup"
Dec 08 17:53:59 crc kubenswrapper[5116]: I1208 17:53:59.993885 5116 state_mem.go:107] "Deleted CPUSet assignment" podUID="17cf2230-8798-4fb0-b89b-43901121fd07" containerName="kubecfg-setup"
Dec 08 17:53:59 crc kubenswrapper[5116]: I1208 17:53:59.994009 5116 memory_manager.go:356] "RemoveStaleState removing state" podUID="17cf2230-8798-4fb0-b89b-43901121fd07" containerName="ovnkube-controller"
Dec 08 17:53:59 crc kubenswrapper[5116]: I1208 17:53:59.994023 5116 memory_manager.go:356] "RemoveStaleState removing state" podUID="17cf2230-8798-4fb0-b89b-43901121fd07" containerName="sbdb"
Dec 08 17:53:59 crc kubenswrapper[5116]: I1208 17:53:59.994031 5116 memory_manager.go:356] "RemoveStaleState removing state" podUID="17cf2230-8798-4fb0-b89b-43901121fd07" containerName="kube-rbac-proxy-ovn-metrics"
Dec 08 17:53:59 crc kubenswrapper[5116]: I1208 17:53:59.994038 5116 memory_manager.go:356] "RemoveStaleState removing state" podUID="17cf2230-8798-4fb0-b89b-43901121fd07" containerName="ovn-acl-logging"
Dec 08 17:53:59 crc kubenswrapper[5116]: I1208 17:53:59.994044 5116 memory_manager.go:356] "RemoveStaleState removing state" podUID="17cf2230-8798-4fb0-b89b-43901121fd07" containerName="kube-rbac-proxy-node"
Dec 08 17:53:59 crc kubenswrapper[5116]: I1208 17:53:59.994052 5116 memory_manager.go:356] "RemoveStaleState removing state" podUID="17cf2230-8798-4fb0-b89b-43901121fd07" containerName="nbdb"
Dec 08 17:53:59 crc kubenswrapper[5116]: I1208 17:53:59.994060 5116 memory_manager.go:356] "RemoveStaleState removing state" podUID="17cf2230-8798-4fb0-b89b-43901121fd07" containerName="ovn-controller"
Dec 08 17:53:59 crc kubenswrapper[5116]: I1208 17:53:59.994067 5116 memory_manager.go:356] "RemoveStaleState removing state" podUID="17cf2230-8798-4fb0-b89b-43901121fd07" containerName="northd"
Dec 08 17:53:59 crc kubenswrapper[5116]: I1208 17:53:59.995711 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/17cf2230-8798-4fb0-b89b-43901121fd07-run-systemd" (OuterVolumeSpecName: "run-systemd") pod "17cf2230-8798-4fb0-b89b-43901121fd07" (UID: "17cf2230-8798-4fb0-b89b-43901121fd07"). InnerVolumeSpecName "run-systemd". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Dec 08 17:54:00 crc kubenswrapper[5116]: I1208 17:54:00.007591 5116 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-m8bmb"
Dec 08 17:54:00 crc kubenswrapper[5116]: I1208 17:54:00.061339 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/b7ba14bf-585a-422f-b024-2288c4d8e54f-ovnkube-script-lib\") pod \"ovnkube-node-m8bmb\" (UID: \"b7ba14bf-585a-422f-b024-2288c4d8e54f\") " pod="openshift-ovn-kubernetes/ovnkube-node-m8bmb"
Dec 08 17:54:00 crc kubenswrapper[5116]: I1208 17:54:00.061585 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/b7ba14bf-585a-422f-b024-2288c4d8e54f-host-slash\") pod \"ovnkube-node-m8bmb\" (UID: \"b7ba14bf-585a-422f-b024-2288c4d8e54f\") " pod="openshift-ovn-kubernetes/ovnkube-node-m8bmb"
Dec 08 17:54:00 crc kubenswrapper[5116]: I1208 17:54:00.061671 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gpdm4\" (UniqueName: \"kubernetes.io/projected/b7ba14bf-585a-422f-b024-2288c4d8e54f-kube-api-access-gpdm4\") pod \"ovnkube-node-m8bmb\" (UID: \"b7ba14bf-585a-422f-b024-2288c4d8e54f\") " pod="openshift-ovn-kubernetes/ovnkube-node-m8bmb"
Dec 08 17:54:00 crc kubenswrapper[5116]: I1208 17:54:00.061726 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/b7ba14bf-585a-422f-b024-2288c4d8e54f-run-systemd\") pod \"ovnkube-node-m8bmb\" (UID: \"b7ba14bf-585a-422f-b024-2288c4d8e54f\") " pod="openshift-ovn-kubernetes/ovnkube-node-m8bmb"
Dec 08 17:54:00 crc kubenswrapper[5116]: I1208 17:54:00.061757 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/b7ba14bf-585a-422f-b024-2288c4d8e54f-var-lib-openvswitch\") pod \"ovnkube-node-m8bmb\" (UID: \"b7ba14bf-585a-422f-b024-2288c4d8e54f\") " pod="openshift-ovn-kubernetes/ovnkube-node-m8bmb"
Dec 08 17:54:00 crc kubenswrapper[5116]: I1208 17:54:00.061837 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/b7ba14bf-585a-422f-b024-2288c4d8e54f-host-kubelet\") pod \"ovnkube-node-m8bmb\" (UID: \"b7ba14bf-585a-422f-b024-2288c4d8e54f\") " pod="openshift-ovn-kubernetes/ovnkube-node-m8bmb"
Dec 08 17:54:00 crc kubenswrapper[5116]: I1208 17:54:00.061859 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/b7ba14bf-585a-422f-b024-2288c4d8e54f-ovnkube-config\") pod \"ovnkube-node-m8bmb\" (UID: \"b7ba14bf-585a-422f-b024-2288c4d8e54f\") " pod="openshift-ovn-kubernetes/ovnkube-node-m8bmb"
Dec 08 17:54:00 crc kubenswrapper[5116]: I1208 17:54:00.061960 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/b7ba14bf-585a-422f-b024-2288c4d8e54f-host-run-ovn-kubernetes\") pod \"ovnkube-node-m8bmb\" (UID: \"b7ba14bf-585a-422f-b024-2288c4d8e54f\") " pod="openshift-ovn-kubernetes/ovnkube-node-m8bmb"
Dec 08 17:54:00 crc kubenswrapper[5116]: I1208 17:54:00.062036 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/b7ba14bf-585a-422f-b024-2288c4d8e54f-host-cni-netd\") pod \"ovnkube-node-m8bmb\" (UID: \"b7ba14bf-585a-422f-b024-2288c4d8e54f\") " pod="openshift-ovn-kubernetes/ovnkube-node-m8bmb"
Dec 08 17:54:00 crc kubenswrapper[5116]: I1208 17:54:00.062119 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/b7ba14bf-585a-422f-b024-2288c4d8e54f-systemd-units\") pod \"ovnkube-node-m8bmb\" (UID: \"b7ba14bf-585a-422f-b024-2288c4d8e54f\") " pod="openshift-ovn-kubernetes/ovnkube-node-m8bmb"
Dec 08 17:54:00 crc kubenswrapper[5116]: I1208 17:54:00.062193 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/b7ba14bf-585a-422f-b024-2288c4d8e54f-run-ovn\") pod \"ovnkube-node-m8bmb\" (UID: \"b7ba14bf-585a-422f-b024-2288c4d8e54f\") " pod="openshift-ovn-kubernetes/ovnkube-node-m8bmb"
Dec 08 17:54:00 crc kubenswrapper[5116]: I1208 17:54:00.062356 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/b7ba14bf-585a-422f-b024-2288c4d8e54f-env-overrides\") pod \"ovnkube-node-m8bmb\" (UID: \"b7ba14bf-585a-422f-b024-2288c4d8e54f\") " pod="openshift-ovn-kubernetes/ovnkube-node-m8bmb"
Dec 08 17:54:00 crc kubenswrapper[5116]: I1208 17:54:00.062626 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/b7ba14bf-585a-422f-b024-2288c4d8e54f-ovn-node-metrics-cert\") pod \"ovnkube-node-m8bmb\" (UID: \"b7ba14bf-585a-422f-b024-2288c4d8e54f\") " pod="openshift-ovn-kubernetes/ovnkube-node-m8bmb"
Dec 08 17:54:00 crc kubenswrapper[5116]: I1208 17:54:00.062675 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/b7ba14bf-585a-422f-b024-2288c4d8e54f-host-cni-bin\") pod \"ovnkube-node-m8bmb\" (UID: \"b7ba14bf-585a-422f-b024-2288c4d8e54f\") " pod="openshift-ovn-kubernetes/ovnkube-node-m8bmb"
Dec 08 17:54:00 crc kubenswrapper[5116]: I1208 17:54:00.062715 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/b7ba14bf-585a-422f-b024-2288c4d8e54f-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-m8bmb\" (UID: \"b7ba14bf-585a-422f-b024-2288c4d8e54f\") " pod="openshift-ovn-kubernetes/ovnkube-node-m8bmb"
Dec 08 17:54:00 crc kubenswrapper[5116]: I1208 17:54:00.062748 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/b7ba14bf-585a-422f-b024-2288c4d8e54f-node-log\") pod \"ovnkube-node-m8bmb\" (UID: \"b7ba14bf-585a-422f-b024-2288c4d8e54f\") " pod="openshift-ovn-kubernetes/ovnkube-node-m8bmb"
Dec 08 17:54:00 crc kubenswrapper[5116]: I1208 17:54:00.062775 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/b7ba14bf-585a-422f-b024-2288c4d8e54f-etc-openvswitch\") pod \"ovnkube-node-m8bmb\" (UID: \"b7ba14bf-585a-422f-b024-2288c4d8e54f\") " pod="openshift-ovn-kubernetes/ovnkube-node-m8bmb"
Dec 08 17:54:00 crc kubenswrapper[5116]: I1208 17:54:00.062805 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/b7ba14bf-585a-422f-b024-2288c4d8e54f-log-socket\") pod \"ovnkube-node-m8bmb\" (UID: \"b7ba14bf-585a-422f-b024-2288c4d8e54f\") " pod="openshift-ovn-kubernetes/ovnkube-node-m8bmb"
Dec 08 17:54:00 crc kubenswrapper[5116]: I1208 17:54:00.062847 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/b7ba14bf-585a-422f-b024-2288c4d8e54f-host-run-netns\") pod \"ovnkube-node-m8bmb\" (UID: \"b7ba14bf-585a-422f-b024-2288c4d8e54f\") " pod="openshift-ovn-kubernetes/ovnkube-node-m8bmb"
Dec 08 17:54:00 crc kubenswrapper[5116]: I1208 17:54:00.062880 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/b7ba14bf-585a-422f-b024-2288c4d8e54f-run-openvswitch\") pod \"ovnkube-node-m8bmb\" (UID: \"b7ba14bf-585a-422f-b024-2288c4d8e54f\") " pod="openshift-ovn-kubernetes/ovnkube-node-m8bmb"
Dec 08 17:54:00 crc kubenswrapper[5116]: I1208 17:54:00.062939 5116 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-8s9wf\" (UniqueName: \"kubernetes.io/projected/17cf2230-8798-4fb0-b89b-43901121fd07-kube-api-access-8s9wf\") on node \"crc\" DevicePath \"\""
Dec 08 17:54:00 crc kubenswrapper[5116]: I1208 17:54:00.062956 5116 reconciler_common.go:299] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/17cf2230-8798-4fb0-b89b-43901121fd07-ovnkube-script-lib\") on node \"crc\" DevicePath \"\""
Dec 08 17:54:00 crc kubenswrapper[5116]: I1208 17:54:00.062968 5116 reconciler_common.go:299] "Volume detached for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/17cf2230-8798-4fb0-b89b-43901121fd07-run-openvswitch\") on node \"crc\" DevicePath \"\""
Dec 08 17:54:00 crc kubenswrapper[5116]: I1208 17:54:00.062981 5116 reconciler_common.go:299] "Volume detached for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/17cf2230-8798-4fb0-b89b-43901121fd07-node-log\") on node \"crc\" DevicePath \"\""
Dec 08 17:54:00 crc kubenswrapper[5116]: I1208 17:54:00.062993 5116 reconciler_common.go:299] "Volume detached for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/17cf2230-8798-4fb0-b89b-43901121fd07-etc-openvswitch\") on node \"crc\" DevicePath \"\""
Dec 08 17:54:00 crc kubenswrapper[5116]: I1208 17:54:00.063005 5116 reconciler_common.go:299] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/17cf2230-8798-4fb0-b89b-43901121fd07-env-overrides\") on node \"crc\" DevicePath \"\""
Dec 08 17:54:00 crc kubenswrapper[5116]: I1208 17:54:00.063017 5116 reconciler_common.go:299] "Volume detached for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/17cf2230-8798-4fb0-b89b-43901121fd07-host-cni-netd\") on node \"crc\" DevicePath \"\""
Dec 08 17:54:00 crc kubenswrapper[5116]: I1208 17:54:00.063028 5116 reconciler_common.go:299] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/17cf2230-8798-4fb0-b89b-43901121fd07-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\""
Dec 08 17:54:00 crc kubenswrapper[5116]: I1208 17:54:00.063045 5116 reconciler_common.go:299] "Volume detached for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/17cf2230-8798-4fb0-b89b-43901121fd07-systemd-units\") on node \"crc\" DevicePath \"\""
Dec 08 17:54:00 crc kubenswrapper[5116]: I1208 17:54:00.063060 5116 reconciler_common.go:299] "Volume detached for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/17cf2230-8798-4fb0-b89b-43901121fd07-host-run-netns\") on node \"crc\" DevicePath \"\""
Dec 08 17:54:00 crc kubenswrapper[5116]: I1208 17:54:00.063072 5116 reconciler_common.go:299] "Volume detached for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/17cf2230-8798-4fb0-b89b-43901121fd07-host-kubelet\") on node \"crc\" DevicePath \"\""
Dec 08 17:54:00 crc kubenswrapper[5116]: I1208 17:54:00.063084 5116 reconciler_common.go:299] "Volume detached for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/17cf2230-8798-4fb0-b89b-43901121fd07-run-systemd\") on node \"crc\" DevicePath \"\""
Dec 08 17:54:00 crc kubenswrapper[5116]: I1208 17:54:00.164471 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/b7ba14bf-585a-422f-b024-2288c4d8e54f-run-systemd\") pod \"ovnkube-node-m8bmb\" (UID: \"b7ba14bf-585a-422f-b024-2288c4d8e54f\") " pod="openshift-ovn-kubernetes/ovnkube-node-m8bmb"
Dec 08 17:54:00 crc kubenswrapper[5116]: I1208 17:54:00.164533 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/b7ba14bf-585a-422f-b024-2288c4d8e54f-var-lib-openvswitch\") pod \"ovnkube-node-m8bmb\" (UID: \"b7ba14bf-585a-422f-b024-2288c4d8e54f\") " pod="openshift-ovn-kubernetes/ovnkube-node-m8bmb"
Dec 08 17:54:00 crc kubenswrapper[5116]: I1208 17:54:00.164555 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/b7ba14bf-585a-422f-b024-2288c4d8e54f-host-kubelet\") pod \"ovnkube-node-m8bmb\" (UID: \"b7ba14bf-585a-422f-b024-2288c4d8e54f\") " pod="openshift-ovn-kubernetes/ovnkube-node-m8bmb"
Dec 08 17:54:00 crc kubenswrapper[5116]: I1208 17:54:00.164607 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/b7ba14bf-585a-422f-b024-2288c4d8e54f-host-kubelet\") pod \"ovnkube-node-m8bmb\" (UID: \"b7ba14bf-585a-422f-b024-2288c4d8e54f\") " pod="openshift-ovn-kubernetes/ovnkube-node-m8bmb"
Dec 08 17:54:00 crc kubenswrapper[5116]: I1208 17:54:00.164638 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/b7ba14bf-585a-422f-b024-2288c4d8e54f-run-systemd\") pod \"ovnkube-node-m8bmb\" (UID: \"b7ba14bf-585a-422f-b024-2288c4d8e54f\") " pod="openshift-ovn-kubernetes/ovnkube-node-m8bmb"
Dec 08 17:54:00 crc kubenswrapper[5116]: I1208 17:54:00.164661 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/b7ba14bf-585a-422f-b024-2288c4d8e54f-ovnkube-config\") pod \"ovnkube-node-m8bmb\" (UID: \"b7ba14bf-585a-422f-b024-2288c4d8e54f\") " pod="openshift-ovn-kubernetes/ovnkube-node-m8bmb"
Dec 08 17:54:00 crc kubenswrapper[5116]: I1208 17:54:00.164681 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/b7ba14bf-585a-422f-b024-2288c4d8e54f-host-run-ovn-kubernetes\") pod \"ovnkube-node-m8bmb\" (UID: \"b7ba14bf-585a-422f-b024-2288c4d8e54f\") " pod="openshift-ovn-kubernetes/ovnkube-node-m8bmb"
Dec 08 17:54:00 crc kubenswrapper[5116]: I1208 17:54:00.164701 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/b7ba14bf-585a-422f-b024-2288c4d8e54f-host-run-ovn-kubernetes\") pod \"ovnkube-node-m8bmb\" (UID: \"b7ba14bf-585a-422f-b024-2288c4d8e54f\") " pod="openshift-ovn-kubernetes/ovnkube-node-m8bmb"
Dec 08 17:54:00 crc kubenswrapper[5116]: I1208 17:54:00.164724 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/b7ba14bf-585a-422f-b024-2288c4d8e54f-host-cni-netd\") pod \"ovnkube-node-m8bmb\" (UID: \"b7ba14bf-585a-422f-b024-2288c4d8e54f\") " pod="openshift-ovn-kubernetes/ovnkube-node-m8bmb"
Dec 08 17:54:00 crc kubenswrapper[5116]: I1208 17:54:00.164807 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/b7ba14bf-585a-422f-b024-2288c4d8e54f-systemd-units\") pod \"ovnkube-node-m8bmb\" (UID: \"b7ba14bf-585a-422f-b024-2288c4d8e54f\") " pod="openshift-ovn-kubernetes/ovnkube-node-m8bmb"
Dec 08 17:54:00 crc kubenswrapper[5116]: I1208 17:54:00.164871 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/b7ba14bf-585a-422f-b024-2288c4d8e54f-run-ovn\") pod \"ovnkube-node-m8bmb\" (UID: \"b7ba14bf-585a-422f-b024-2288c4d8e54f\") " pod="openshift-ovn-kubernetes/ovnkube-node-m8bmb"
Dec 08 17:54:00 crc kubenswrapper[5116]: I1208 17:54:00.164893 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/b7ba14bf-585a-422f-b024-2288c4d8e54f-env-overrides\") pod \"ovnkube-node-m8bmb\" (UID: \"b7ba14bf-585a-422f-b024-2288c4d8e54f\") " pod="openshift-ovn-kubernetes/ovnkube-node-m8bmb"
Dec 08 17:54:00 crc kubenswrapper[5116]: I1208 17:54:00.164916 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/b7ba14bf-585a-422f-b024-2288c4d8e54f-ovn-node-metrics-cert\") pod \"ovnkube-node-m8bmb\" (UID: \"b7ba14bf-585a-422f-b024-2288c4d8e54f\") " pod="openshift-ovn-kubernetes/ovnkube-node-m8bmb"
Dec 08 17:54:00 crc kubenswrapper[5116]: I1208 17:54:00.164958 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/b7ba14bf-585a-422f-b024-2288c4d8e54f-host-cni-bin\") pod \"ovnkube-node-m8bmb\" (UID: \"b7ba14bf-585a-422f-b024-2288c4d8e54f\") " pod="openshift-ovn-kubernetes/ovnkube-node-m8bmb"
Dec 08 17:54:00 crc kubenswrapper[5116]: I1208 17:54:00.165014 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/b7ba14bf-585a-422f-b024-2288c4d8e54f-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-m8bmb\" (UID: \"b7ba14bf-585a-422f-b024-2288c4d8e54f\") " pod="openshift-ovn-kubernetes/ovnkube-node-m8bmb"
Dec 08 17:54:00 crc kubenswrapper[5116]: I1208 17:54:00.165060 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/b7ba14bf-585a-422f-b024-2288c4d8e54f-node-log\") pod \"ovnkube-node-m8bmb\" (UID: \"b7ba14bf-585a-422f-b024-2288c4d8e54f\") " pod="openshift-ovn-kubernetes/ovnkube-node-m8bmb"
Dec 08 17:54:00 crc kubenswrapper[5116]: I1208 17:54:00.165085 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/b7ba14bf-585a-422f-b024-2288c4d8e54f-etc-openvswitch\") pod \"ovnkube-node-m8bmb\" (UID: \"b7ba14bf-585a-422f-b024-2288c4d8e54f\") " pod="openshift-ovn-kubernetes/ovnkube-node-m8bmb"
Dec 08 17:54:00 crc kubenswrapper[5116]: I1208 17:54:00.165118 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/b7ba14bf-585a-422f-b024-2288c4d8e54f-log-socket\") pod \"ovnkube-node-m8bmb\" (UID: \"b7ba14bf-585a-422f-b024-2288c4d8e54f\") " pod="openshift-ovn-kubernetes/ovnkube-node-m8bmb"
Dec 08 17:54:00 crc kubenswrapper[5116]: I1208 17:54:00.165161 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/b7ba14bf-585a-422f-b024-2288c4d8e54f-host-run-netns\") pod \"ovnkube-node-m8bmb\" (UID: \"b7ba14bf-585a-422f-b024-2288c4d8e54f\") " pod="openshift-ovn-kubernetes/ovnkube-node-m8bmb"
Dec 08 17:54:00 crc kubenswrapper[5116]: I1208 17:54:00.165179 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/b7ba14bf-585a-422f-b024-2288c4d8e54f-run-openvswitch\") pod \"ovnkube-node-m8bmb\" (UID: \"b7ba14bf-585a-422f-b024-2288c4d8e54f\") " pod="openshift-ovn-kubernetes/ovnkube-node-m8bmb"
Dec 08 17:54:00 crc kubenswrapper[5116]: I1208 17:54:00.165206 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/b7ba14bf-585a-422f-b024-2288c4d8e54f-ovnkube-script-lib\") pod \"ovnkube-node-m8bmb\" (UID: \"b7ba14bf-585a-422f-b024-2288c4d8e54f\") " pod="openshift-ovn-kubernetes/ovnkube-node-m8bmb"
Dec 08 17:54:00 crc kubenswrapper[5116]: I1208 17:54:00.165235 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/b7ba14bf-585a-422f-b024-2288c4d8e54f-host-slash\") pod \"ovnkube-node-m8bmb\" (UID: \"b7ba14bf-585a-422f-b024-2288c4d8e54f\") " pod="openshift-ovn-kubernetes/ovnkube-node-m8bmb"
Dec 08 17:54:00 crc kubenswrapper[5116]: I1208 17:54:00.165270 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-gpdm4\" (UniqueName: \"kubernetes.io/projected/b7ba14bf-585a-422f-b024-2288c4d8e54f-kube-api-access-gpdm4\") pod \"ovnkube-node-m8bmb\" (UID: \"b7ba14bf-585a-422f-b024-2288c4d8e54f\") " pod="openshift-ovn-kubernetes/ovnkube-node-m8bmb"
Dec 08 17:54:00 crc kubenswrapper[5116]: I1208 17:54:00.164646 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/b7ba14bf-585a-422f-b024-2288c4d8e54f-var-lib-openvswitch\") pod \"ovnkube-node-m8bmb\" (UID: \"b7ba14bf-585a-422f-b024-2288c4d8e54f\") " pod="openshift-ovn-kubernetes/ovnkube-node-m8bmb"
Dec 08 17:54:00 crc kubenswrapper[5116]: I1208 17:54:00.165650 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/b7ba14bf-585a-422f-b024-2288c4d8e54f-host-cni-netd\") pod \"ovnkube-node-m8bmb\" (UID: \"b7ba14bf-585a-422f-b024-2288c4d8e54f\") " pod="openshift-ovn-kubernetes/ovnkube-node-m8bmb"
Dec 08 17:54:00 crc kubenswrapper[5116]: I1208 17:54:00.165665 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/b7ba14bf-585a-422f-b024-2288c4d8e54f-ovnkube-config\") pod
\"ovnkube-node-m8bmb\" (UID: \"b7ba14bf-585a-422f-b024-2288c4d8e54f\") " pod="openshift-ovn-kubernetes/ovnkube-node-m8bmb" Dec 08 17:54:00 crc kubenswrapper[5116]: I1208 17:54:00.165676 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/b7ba14bf-585a-422f-b024-2288c4d8e54f-systemd-units\") pod \"ovnkube-node-m8bmb\" (UID: \"b7ba14bf-585a-422f-b024-2288c4d8e54f\") " pod="openshift-ovn-kubernetes/ovnkube-node-m8bmb" Dec 08 17:54:00 crc kubenswrapper[5116]: I1208 17:54:00.165715 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/b7ba14bf-585a-422f-b024-2288c4d8e54f-run-ovn\") pod \"ovnkube-node-m8bmb\" (UID: \"b7ba14bf-585a-422f-b024-2288c4d8e54f\") " pod="openshift-ovn-kubernetes/ovnkube-node-m8bmb" Dec 08 17:54:00 crc kubenswrapper[5116]: I1208 17:54:00.165748 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/b7ba14bf-585a-422f-b024-2288c4d8e54f-etc-openvswitch\") pod \"ovnkube-node-m8bmb\" (UID: \"b7ba14bf-585a-422f-b024-2288c4d8e54f\") " pod="openshift-ovn-kubernetes/ovnkube-node-m8bmb" Dec 08 17:54:00 crc kubenswrapper[5116]: I1208 17:54:00.166255 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/b7ba14bf-585a-422f-b024-2288c4d8e54f-env-overrides\") pod \"ovnkube-node-m8bmb\" (UID: \"b7ba14bf-585a-422f-b024-2288c4d8e54f\") " pod="openshift-ovn-kubernetes/ovnkube-node-m8bmb" Dec 08 17:54:00 crc kubenswrapper[5116]: I1208 17:54:00.166285 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/b7ba14bf-585a-422f-b024-2288c4d8e54f-host-cni-bin\") pod \"ovnkube-node-m8bmb\" (UID: \"b7ba14bf-585a-422f-b024-2288c4d8e54f\") " pod="openshift-ovn-kubernetes/ovnkube-node-m8bmb" Dec 08 17:54:00 
crc kubenswrapper[5116]: I1208 17:54:00.166357 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/b7ba14bf-585a-422f-b024-2288c4d8e54f-log-socket\") pod \"ovnkube-node-m8bmb\" (UID: \"b7ba14bf-585a-422f-b024-2288c4d8e54f\") " pod="openshift-ovn-kubernetes/ovnkube-node-m8bmb" Dec 08 17:54:00 crc kubenswrapper[5116]: I1208 17:54:00.166396 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/b7ba14bf-585a-422f-b024-2288c4d8e54f-host-run-netns\") pod \"ovnkube-node-m8bmb\" (UID: \"b7ba14bf-585a-422f-b024-2288c4d8e54f\") " pod="openshift-ovn-kubernetes/ovnkube-node-m8bmb" Dec 08 17:54:00 crc kubenswrapper[5116]: I1208 17:54:00.166434 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/b7ba14bf-585a-422f-b024-2288c4d8e54f-run-openvswitch\") pod \"ovnkube-node-m8bmb\" (UID: \"b7ba14bf-585a-422f-b024-2288c4d8e54f\") " pod="openshift-ovn-kubernetes/ovnkube-node-m8bmb" Dec 08 17:54:00 crc kubenswrapper[5116]: I1208 17:54:00.166430 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/b7ba14bf-585a-422f-b024-2288c4d8e54f-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-m8bmb\" (UID: \"b7ba14bf-585a-422f-b024-2288c4d8e54f\") " pod="openshift-ovn-kubernetes/ovnkube-node-m8bmb" Dec 08 17:54:00 crc kubenswrapper[5116]: I1208 17:54:00.166477 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/b7ba14bf-585a-422f-b024-2288c4d8e54f-host-slash\") pod \"ovnkube-node-m8bmb\" (UID: \"b7ba14bf-585a-422f-b024-2288c4d8e54f\") " pod="openshift-ovn-kubernetes/ovnkube-node-m8bmb" Dec 08 17:54:00 crc kubenswrapper[5116]: I1208 17:54:00.166516 5116 operation_generator.go:615] 
"MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/b7ba14bf-585a-422f-b024-2288c4d8e54f-node-log\") pod \"ovnkube-node-m8bmb\" (UID: \"b7ba14bf-585a-422f-b024-2288c4d8e54f\") " pod="openshift-ovn-kubernetes/ovnkube-node-m8bmb" Dec 08 17:54:00 crc kubenswrapper[5116]: I1208 17:54:00.167147 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/b7ba14bf-585a-422f-b024-2288c4d8e54f-ovnkube-script-lib\") pod \"ovnkube-node-m8bmb\" (UID: \"b7ba14bf-585a-422f-b024-2288c4d8e54f\") " pod="openshift-ovn-kubernetes/ovnkube-node-m8bmb" Dec 08 17:54:00 crc kubenswrapper[5116]: I1208 17:54:00.178406 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/b7ba14bf-585a-422f-b024-2288c4d8e54f-ovn-node-metrics-cert\") pod \"ovnkube-node-m8bmb\" (UID: \"b7ba14bf-585a-422f-b024-2288c4d8e54f\") " pod="openshift-ovn-kubernetes/ovnkube-node-m8bmb" Dec 08 17:54:00 crc kubenswrapper[5116]: I1208 17:54:00.181898 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-gpdm4\" (UniqueName: \"kubernetes.io/projected/b7ba14bf-585a-422f-b024-2288c4d8e54f-kube-api-access-gpdm4\") pod \"ovnkube-node-m8bmb\" (UID: \"b7ba14bf-585a-422f-b024-2288c4d8e54f\") " pod="openshift-ovn-kubernetes/ovnkube-node-m8bmb" Dec 08 17:54:00 crc kubenswrapper[5116]: I1208 17:54:00.325086 5116 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-m8bmb" Dec 08 17:54:00 crc kubenswrapper[5116]: W1208 17:54:00.346601 5116 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb7ba14bf_585a_422f_b024_2288c4d8e54f.slice/crio-a1969cb66028389e6ff1c8c2ca576c2be5ff55d422df13673df08e8cd340ea28 WatchSource:0}: Error finding container a1969cb66028389e6ff1c8c2ca576c2be5ff55d422df13673df08e8cd340ea28: Status 404 returned error can't find the container with id a1969cb66028389e6ff1c8c2ca576c2be5ff55d422df13673df08e8cd340ea28 Dec 08 17:54:00 crc kubenswrapper[5116]: I1208 17:54:00.594369 5116 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-zm56h_17cf2230-8798-4fb0-b89b-43901121fd07/ovn-acl-logging/0.log" Dec 08 17:54:00 crc kubenswrapper[5116]: I1208 17:54:00.595334 5116 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-zm56h_17cf2230-8798-4fb0-b89b-43901121fd07/ovn-controller/0.log" Dec 08 17:54:00 crc kubenswrapper[5116]: I1208 17:54:00.595927 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zm56h" event={"ID":"17cf2230-8798-4fb0-b89b-43901121fd07","Type":"ContainerDied","Data":"395cd986e343d46252d6527e53b3a1cd2edbe59586cfa99d5c32d10497c03295"} Dec 08 17:54:00 crc kubenswrapper[5116]: I1208 17:54:00.596050 5116 scope.go:117] "RemoveContainer" containerID="0165060b7c7c730bc40c1f8e6a0e75452412dc4249378fb9fd54d4cfd49b82d6" Dec 08 17:54:00 crc kubenswrapper[5116]: I1208 17:54:00.596078 5116 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-zm56h" Dec 08 17:54:00 crc kubenswrapper[5116]: I1208 17:54:00.599678 5116 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-8wqqf_84b46b92-c78c-44c8-a27b-4a20c47acd75/kube-multus/0.log" Dec 08 17:54:00 crc kubenswrapper[5116]: I1208 17:54:00.599882 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-8wqqf" event={"ID":"84b46b92-c78c-44c8-a27b-4a20c47acd75","Type":"ContainerStarted","Data":"c89af348e9c58c451a89e560178cc9d0eb49cf73e9c89401c618f1567bfaea8d"} Dec 08 17:54:00 crc kubenswrapper[5116]: I1208 17:54:00.608571 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-nxr9w" event={"ID":"03c53809-e242-4952-943b-cecd28ab49d4","Type":"ContainerStarted","Data":"7cc64a2950e3bbb2e3a040820384d3c4bfdb50a4a86bf9411d92c87c19622a83"} Dec 08 17:54:00 crc kubenswrapper[5116]: I1208 17:54:00.610110 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-m8bmb" event={"ID":"b7ba14bf-585a-422f-b024-2288c4d8e54f","Type":"ContainerStarted","Data":"a1969cb66028389e6ff1c8c2ca576c2be5ff55d422df13673df08e8cd340ea28"} Dec 08 17:54:00 crc kubenswrapper[5116]: I1208 17:54:00.626474 5116 scope.go:117] "RemoveContainer" containerID="d43248e58f8ef79a4ca47051d7abc1ebda6dfe4b3a3894c0a42cf2eadd863a40" Dec 08 17:54:00 crc kubenswrapper[5116]: I1208 17:54:00.647834 5116 scope.go:117] "RemoveContainer" containerID="8f9ddf9b40be2523a293c7a25dcd093d1064c0ea5ac00cfcab147d4e52c1b577" Dec 08 17:54:00 crc kubenswrapper[5116]: I1208 17:54:00.666310 5116 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-zm56h"] Dec 08 17:54:00 crc kubenswrapper[5116]: I1208 17:54:00.670453 5116 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-zm56h"] Dec 08 17:54:00 crc kubenswrapper[5116]: I1208 
17:54:00.684292 5116 scope.go:117] "RemoveContainer" containerID="c2d65cd5cbd25ba2aa8ee1ee5d3ee19de672253be1241f5dd6272ffbbcf572b9" Dec 08 17:54:00 crc kubenswrapper[5116]: I1208 17:54:00.697747 5116 scope.go:117] "RemoveContainer" containerID="df74991b9351b83a6afafbbed676c14a19d840f12be07cefd14b14577801ad8e" Dec 08 17:54:00 crc kubenswrapper[5116]: I1208 17:54:00.714874 5116 scope.go:117] "RemoveContainer" containerID="bd3f2516ba42578f60aeff92565eb4eed9411fc7b0a498f5342dd7e9e4c0475c" Dec 08 17:54:00 crc kubenswrapper[5116]: I1208 17:54:00.730122 5116 scope.go:117] "RemoveContainer" containerID="396bbb6d70fc2a226fa82c18e9fef2e42c88aab08db97f7b8253ac1fedf99524" Dec 08 17:54:00 crc kubenswrapper[5116]: I1208 17:54:00.752339 5116 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="17cf2230-8798-4fb0-b89b-43901121fd07" path="/var/lib/kubelet/pods/17cf2230-8798-4fb0-b89b-43901121fd07/volumes" Dec 08 17:54:00 crc kubenswrapper[5116]: I1208 17:54:00.754138 5116 scope.go:117] "RemoveContainer" containerID="fb5c408faae317c65e7ecee5588f0724734d49d1b4a3ae27e669fed7d9f1d56f" Dec 08 17:54:00 crc kubenswrapper[5116]: I1208 17:54:00.755465 5116 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2e7c2199-9693-42b9-9431-2b12b5abe1d1" path="/var/lib/kubelet/pods/2e7c2199-9693-42b9-9431-2b12b5abe1d1/volumes" Dec 08 17:54:00 crc kubenswrapper[5116]: I1208 17:54:00.777737 5116 scope.go:117] "RemoveContainer" containerID="1fbed6896daac43c71c23a2cd13d426172358ba9b9b9199189fb01846868e0ba" Dec 08 17:54:01 crc kubenswrapper[5116]: I1208 17:54:01.632867 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-nxr9w" event={"ID":"03c53809-e242-4952-943b-cecd28ab49d4","Type":"ContainerStarted","Data":"89d3bb2263dbff3e1530270f9d64aa5c936c53d80c1789bbe6b759062a9de3f0"} Dec 08 17:54:01 crc kubenswrapper[5116]: I1208 17:54:01.632926 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" 
pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-nxr9w" event={"ID":"03c53809-e242-4952-943b-cecd28ab49d4","Type":"ContainerStarted","Data":"1d7b4abdb1eb3bfb122ce66d7c6fff49cf7b008576fba7eea2926e9cdb3f7466"} Dec 08 17:54:01 crc kubenswrapper[5116]: I1208 17:54:01.636457 5116 generic.go:358] "Generic (PLEG): container finished" podID="b7ba14bf-585a-422f-b024-2288c4d8e54f" containerID="2130fcc7244df445c9e554dcf601a63aeb029bcd60b89f7eeb9df711c73927ef" exitCode=0 Dec 08 17:54:01 crc kubenswrapper[5116]: I1208 17:54:01.636538 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-m8bmb" event={"ID":"b7ba14bf-585a-422f-b024-2288c4d8e54f","Type":"ContainerDied","Data":"2130fcc7244df445c9e554dcf601a63aeb029bcd60b89f7eeb9df711c73927ef"} Dec 08 17:54:01 crc kubenswrapper[5116]: I1208 17:54:01.650315 5116 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-nxr9w" podStartSLOduration=3.650292516 podStartE2EDuration="3.650292516s" podCreationTimestamp="2025-12-08 17:53:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-08 17:54:01.649150199 +0000 UTC m=+711.446273433" watchObservedRunningTime="2025-12-08 17:54:01.650292516 +0000 UTC m=+711.447415750" Dec 08 17:54:02 crc kubenswrapper[5116]: I1208 17:54:02.645228 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-m8bmb" event={"ID":"b7ba14bf-585a-422f-b024-2288c4d8e54f","Type":"ContainerStarted","Data":"732046377f106862797ded215e0becc95185468ce1d3d758b51fce68340758f4"} Dec 08 17:54:02 crc kubenswrapper[5116]: I1208 17:54:02.645591 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-m8bmb" 
event={"ID":"b7ba14bf-585a-422f-b024-2288c4d8e54f","Type":"ContainerStarted","Data":"13a6c62d6649daa7b6b475c3f54825bdf1349e877475270e3304fe10610b4f60"} Dec 08 17:54:02 crc kubenswrapper[5116]: I1208 17:54:02.645606 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-m8bmb" event={"ID":"b7ba14bf-585a-422f-b024-2288c4d8e54f","Type":"ContainerStarted","Data":"45b3788cf3b07e47e570ddeecb36fc841706a5a65b3a9a9749e99ca51dd02410"} Dec 08 17:54:02 crc kubenswrapper[5116]: I1208 17:54:02.645617 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-m8bmb" event={"ID":"b7ba14bf-585a-422f-b024-2288c4d8e54f","Type":"ContainerStarted","Data":"cfe8d18e721cea04941ed36b91169ce5ece59dc2f9241a6310cdc83ac3b259ca"} Dec 08 17:54:03 crc kubenswrapper[5116]: I1208 17:54:03.670038 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-m8bmb" event={"ID":"b7ba14bf-585a-422f-b024-2288c4d8e54f","Type":"ContainerStarted","Data":"1d586a99284a57668d812f3620b9a9b0eddbdd1ff6ab6d1157a3d39fc2565e45"} Dec 08 17:54:03 crc kubenswrapper[5116]: I1208 17:54:03.670148 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-m8bmb" event={"ID":"b7ba14bf-585a-422f-b024-2288c4d8e54f","Type":"ContainerStarted","Data":"dab49f25724809f04cb163280ea92b3112bfd46c541335cba773304b4c66bc34"} Dec 08 17:54:06 crc kubenswrapper[5116]: I1208 17:54:06.711111 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-m8bmb" event={"ID":"b7ba14bf-585a-422f-b024-2288c4d8e54f","Type":"ContainerStarted","Data":"aa1dcc578c0509c81ce98a79fdaed38ae70b3ba721c2b9dfcddbfcd7aa915a2e"} Dec 08 17:54:08 crc kubenswrapper[5116]: I1208 17:54:08.727334 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-m8bmb" 
event={"ID":"b7ba14bf-585a-422f-b024-2288c4d8e54f","Type":"ContainerStarted","Data":"04a16cb3805cbf7433658369e3c4d0e75b6071e7d5183f43dfc62e436d00f6a9"} Dec 08 17:54:08 crc kubenswrapper[5116]: I1208 17:54:08.727784 5116 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-ovn-kubernetes/ovnkube-node-m8bmb" Dec 08 17:54:08 crc kubenswrapper[5116]: I1208 17:54:08.727836 5116 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-ovn-kubernetes/ovnkube-node-m8bmb" Dec 08 17:54:08 crc kubenswrapper[5116]: I1208 17:54:08.727854 5116 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-ovn-kubernetes/ovnkube-node-m8bmb" Dec 08 17:54:08 crc kubenswrapper[5116]: I1208 17:54:08.761822 5116 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-m8bmb" podStartSLOduration=9.761802187 podStartE2EDuration="9.761802187s" podCreationTimestamp="2025-12-08 17:53:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-08 17:54:08.760392053 +0000 UTC m=+718.557515307" watchObservedRunningTime="2025-12-08 17:54:08.761802187 +0000 UTC m=+718.558925441" Dec 08 17:54:08 crc kubenswrapper[5116]: I1208 17:54:08.764291 5116 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-m8bmb" Dec 08 17:54:08 crc kubenswrapper[5116]: I1208 17:54:08.766523 5116 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-m8bmb" Dec 08 17:54:17 crc kubenswrapper[5116]: I1208 17:54:17.417295 5116 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-qk9nv"] Dec 08 17:54:17 crc kubenswrapper[5116]: I1208 17:54:17.516729 5116 kubelet.go:2544] "SyncLoop UPDATE" source="api" 
pods=["openshift-marketplace/redhat-marketplace-qk9nv"] Dec 08 17:54:17 crc kubenswrapper[5116]: I1208 17:54:17.516929 5116 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-qk9nv" Dec 08 17:54:17 crc kubenswrapper[5116]: I1208 17:54:17.611856 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/32bd89b3-881a-4a57-bc82-4f3ecec31abd-catalog-content\") pod \"redhat-marketplace-qk9nv\" (UID: \"32bd89b3-881a-4a57-bc82-4f3ecec31abd\") " pod="openshift-marketplace/redhat-marketplace-qk9nv" Dec 08 17:54:17 crc kubenswrapper[5116]: I1208 17:54:17.611912 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/32bd89b3-881a-4a57-bc82-4f3ecec31abd-utilities\") pod \"redhat-marketplace-qk9nv\" (UID: \"32bd89b3-881a-4a57-bc82-4f3ecec31abd\") " pod="openshift-marketplace/redhat-marketplace-qk9nv" Dec 08 17:54:17 crc kubenswrapper[5116]: I1208 17:54:17.611967 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c5dc6\" (UniqueName: \"kubernetes.io/projected/32bd89b3-881a-4a57-bc82-4f3ecec31abd-kube-api-access-c5dc6\") pod \"redhat-marketplace-qk9nv\" (UID: \"32bd89b3-881a-4a57-bc82-4f3ecec31abd\") " pod="openshift-marketplace/redhat-marketplace-qk9nv" Dec 08 17:54:17 crc kubenswrapper[5116]: I1208 17:54:17.712793 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/32bd89b3-881a-4a57-bc82-4f3ecec31abd-catalog-content\") pod \"redhat-marketplace-qk9nv\" (UID: \"32bd89b3-881a-4a57-bc82-4f3ecec31abd\") " pod="openshift-marketplace/redhat-marketplace-qk9nv" Dec 08 17:54:17 crc kubenswrapper[5116]: I1208 17:54:17.712845 5116 reconciler_common.go:224] 
"operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/32bd89b3-881a-4a57-bc82-4f3ecec31abd-utilities\") pod \"redhat-marketplace-qk9nv\" (UID: \"32bd89b3-881a-4a57-bc82-4f3ecec31abd\") " pod="openshift-marketplace/redhat-marketplace-qk9nv" Dec 08 17:54:17 crc kubenswrapper[5116]: I1208 17:54:17.712877 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-c5dc6\" (UniqueName: \"kubernetes.io/projected/32bd89b3-881a-4a57-bc82-4f3ecec31abd-kube-api-access-c5dc6\") pod \"redhat-marketplace-qk9nv\" (UID: \"32bd89b3-881a-4a57-bc82-4f3ecec31abd\") " pod="openshift-marketplace/redhat-marketplace-qk9nv" Dec 08 17:54:17 crc kubenswrapper[5116]: I1208 17:54:17.714424 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/32bd89b3-881a-4a57-bc82-4f3ecec31abd-catalog-content\") pod \"redhat-marketplace-qk9nv\" (UID: \"32bd89b3-881a-4a57-bc82-4f3ecec31abd\") " pod="openshift-marketplace/redhat-marketplace-qk9nv" Dec 08 17:54:17 crc kubenswrapper[5116]: I1208 17:54:17.714652 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/32bd89b3-881a-4a57-bc82-4f3ecec31abd-utilities\") pod \"redhat-marketplace-qk9nv\" (UID: \"32bd89b3-881a-4a57-bc82-4f3ecec31abd\") " pod="openshift-marketplace/redhat-marketplace-qk9nv" Dec 08 17:54:17 crc kubenswrapper[5116]: I1208 17:54:17.742985 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-c5dc6\" (UniqueName: \"kubernetes.io/projected/32bd89b3-881a-4a57-bc82-4f3ecec31abd-kube-api-access-c5dc6\") pod \"redhat-marketplace-qk9nv\" (UID: \"32bd89b3-881a-4a57-bc82-4f3ecec31abd\") " pod="openshift-marketplace/redhat-marketplace-qk9nv" Dec 08 17:54:17 crc kubenswrapper[5116]: I1208 17:54:17.835805 5116 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-qk9nv" Dec 08 17:54:18 crc kubenswrapper[5116]: I1208 17:54:18.259730 5116 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-qk9nv"] Dec 08 17:54:18 crc kubenswrapper[5116]: W1208 17:54:18.269573 5116 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod32bd89b3_881a_4a57_bc82_4f3ecec31abd.slice/crio-16fa688d8509d85eccb55ea84cac536e4d149b3b807495d0ecfe6c0b0f003566 WatchSource:0}: Error finding container 16fa688d8509d85eccb55ea84cac536e4d149b3b807495d0ecfe6c0b0f003566: Status 404 returned error can't find the container with id 16fa688d8509d85eccb55ea84cac536e4d149b3b807495d0ecfe6c0b0f003566 Dec 08 17:54:18 crc kubenswrapper[5116]: I1208 17:54:18.795108 5116 generic.go:358] "Generic (PLEG): container finished" podID="32bd89b3-881a-4a57-bc82-4f3ecec31abd" containerID="eb1c5fb3cbd52ff2e533610b5096e52430bc1a6c9fbd8317b2af46e85fbbd8c9" exitCode=0 Dec 08 17:54:18 crc kubenswrapper[5116]: I1208 17:54:18.795203 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qk9nv" event={"ID":"32bd89b3-881a-4a57-bc82-4f3ecec31abd","Type":"ContainerDied","Data":"eb1c5fb3cbd52ff2e533610b5096e52430bc1a6c9fbd8317b2af46e85fbbd8c9"} Dec 08 17:54:18 crc kubenswrapper[5116]: I1208 17:54:18.795236 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qk9nv" event={"ID":"32bd89b3-881a-4a57-bc82-4f3ecec31abd","Type":"ContainerStarted","Data":"16fa688d8509d85eccb55ea84cac536e4d149b3b807495d0ecfe6c0b0f003566"} Dec 08 17:54:19 crc kubenswrapper[5116]: I1208 17:54:19.803739 5116 generic.go:358] "Generic (PLEG): container finished" podID="32bd89b3-881a-4a57-bc82-4f3ecec31abd" containerID="097ab254865a80b6748fef549adf7dd40162a0b67039eb1f316f7ac6ce1a21ab" exitCode=0 Dec 08 17:54:19 crc kubenswrapper[5116]: I1208 
17:54:19.804204 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qk9nv" event={"ID":"32bd89b3-881a-4a57-bc82-4f3ecec31abd","Type":"ContainerDied","Data":"097ab254865a80b6748fef549adf7dd40162a0b67039eb1f316f7ac6ce1a21ab"} Dec 08 17:54:20 crc kubenswrapper[5116]: I1208 17:54:20.813417 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qk9nv" event={"ID":"32bd89b3-881a-4a57-bc82-4f3ecec31abd","Type":"ContainerStarted","Data":"7078de0906859f80b4dc18cb978ed4c68a0e522498cf14820c62467084544ec9"} Dec 08 17:54:20 crc kubenswrapper[5116]: I1208 17:54:20.832598 5116 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-qk9nv" podStartSLOduration=3.306006958 podStartE2EDuration="3.832575915s" podCreationTimestamp="2025-12-08 17:54:17 +0000 UTC" firstStartedPulling="2025-12-08 17:54:18.796954935 +0000 UTC m=+728.594078169" lastFinishedPulling="2025-12-08 17:54:19.323523892 +0000 UTC m=+729.120647126" observedRunningTime="2025-12-08 17:54:20.830925485 +0000 UTC m=+730.628048749" watchObservedRunningTime="2025-12-08 17:54:20.832575915 +0000 UTC m=+730.629699159" Dec 08 17:54:27 crc kubenswrapper[5116]: I1208 17:54:27.331216 5116 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-cqlf7"] Dec 08 17:54:27 crc kubenswrapper[5116]: I1208 17:54:27.486326 5116 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-cqlf7"] Dec 08 17:54:27 crc kubenswrapper[5116]: I1208 17:54:27.486729 5116 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-cqlf7" Dec 08 17:54:27 crc kubenswrapper[5116]: I1208 17:54:27.578467 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/64788338-b2c7-4deb-a66c-c43e5bfac540-catalog-content\") pod \"redhat-operators-cqlf7\" (UID: \"64788338-b2c7-4deb-a66c-c43e5bfac540\") " pod="openshift-marketplace/redhat-operators-cqlf7" Dec 08 17:54:27 crc kubenswrapper[5116]: I1208 17:54:27.578541 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rms89\" (UniqueName: \"kubernetes.io/projected/64788338-b2c7-4deb-a66c-c43e5bfac540-kube-api-access-rms89\") pod \"redhat-operators-cqlf7\" (UID: \"64788338-b2c7-4deb-a66c-c43e5bfac540\") " pod="openshift-marketplace/redhat-operators-cqlf7" Dec 08 17:54:27 crc kubenswrapper[5116]: I1208 17:54:27.578670 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/64788338-b2c7-4deb-a66c-c43e5bfac540-utilities\") pod \"redhat-operators-cqlf7\" (UID: \"64788338-b2c7-4deb-a66c-c43e5bfac540\") " pod="openshift-marketplace/redhat-operators-cqlf7" Dec 08 17:54:27 crc kubenswrapper[5116]: I1208 17:54:27.679895 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/64788338-b2c7-4deb-a66c-c43e5bfac540-utilities\") pod \"redhat-operators-cqlf7\" (UID: \"64788338-b2c7-4deb-a66c-c43e5bfac540\") " pod="openshift-marketplace/redhat-operators-cqlf7" Dec 08 17:54:27 crc kubenswrapper[5116]: I1208 17:54:27.680020 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/64788338-b2c7-4deb-a66c-c43e5bfac540-catalog-content\") pod \"redhat-operators-cqlf7\" (UID: 
\"64788338-b2c7-4deb-a66c-c43e5bfac540\") " pod="openshift-marketplace/redhat-operators-cqlf7" Dec 08 17:54:27 crc kubenswrapper[5116]: I1208 17:54:27.680056 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-rms89\" (UniqueName: \"kubernetes.io/projected/64788338-b2c7-4deb-a66c-c43e5bfac540-kube-api-access-rms89\") pod \"redhat-operators-cqlf7\" (UID: \"64788338-b2c7-4deb-a66c-c43e5bfac540\") " pod="openshift-marketplace/redhat-operators-cqlf7" Dec 08 17:54:27 crc kubenswrapper[5116]: I1208 17:54:27.680533 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/64788338-b2c7-4deb-a66c-c43e5bfac540-utilities\") pod \"redhat-operators-cqlf7\" (UID: \"64788338-b2c7-4deb-a66c-c43e5bfac540\") " pod="openshift-marketplace/redhat-operators-cqlf7" Dec 08 17:54:27 crc kubenswrapper[5116]: I1208 17:54:27.680693 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/64788338-b2c7-4deb-a66c-c43e5bfac540-catalog-content\") pod \"redhat-operators-cqlf7\" (UID: \"64788338-b2c7-4deb-a66c-c43e5bfac540\") " pod="openshift-marketplace/redhat-operators-cqlf7" Dec 08 17:54:27 crc kubenswrapper[5116]: I1208 17:54:27.700969 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-rms89\" (UniqueName: \"kubernetes.io/projected/64788338-b2c7-4deb-a66c-c43e5bfac540-kube-api-access-rms89\") pod \"redhat-operators-cqlf7\" (UID: \"64788338-b2c7-4deb-a66c-c43e5bfac540\") " pod="openshift-marketplace/redhat-operators-cqlf7" Dec 08 17:54:27 crc kubenswrapper[5116]: I1208 17:54:27.806619 5116 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-cqlf7" Dec 08 17:54:27 crc kubenswrapper[5116]: I1208 17:54:27.839120 5116 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-qk9nv" Dec 08 17:54:27 crc kubenswrapper[5116]: I1208 17:54:27.839359 5116 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/redhat-marketplace-qk9nv" Dec 08 17:54:27 crc kubenswrapper[5116]: I1208 17:54:27.889085 5116 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-qk9nv" Dec 08 17:54:27 crc kubenswrapper[5116]: I1208 17:54:27.967610 5116 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-qk9nv" Dec 08 17:54:28 crc kubenswrapper[5116]: I1208 17:54:28.280752 5116 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-cqlf7"] Dec 08 17:54:28 crc kubenswrapper[5116]: I1208 17:54:28.863588 5116 generic.go:358] "Generic (PLEG): container finished" podID="64788338-b2c7-4deb-a66c-c43e5bfac540" containerID="45392d91b77b1c62a62f59612f23e760ed65e35217dee74ac2664a675f0d7c21" exitCode=0 Dec 08 17:54:28 crc kubenswrapper[5116]: I1208 17:54:28.863647 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-cqlf7" event={"ID":"64788338-b2c7-4deb-a66c-c43e5bfac540","Type":"ContainerDied","Data":"45392d91b77b1c62a62f59612f23e760ed65e35217dee74ac2664a675f0d7c21"} Dec 08 17:54:28 crc kubenswrapper[5116]: I1208 17:54:28.864133 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-cqlf7" event={"ID":"64788338-b2c7-4deb-a66c-c43e5bfac540","Type":"ContainerStarted","Data":"3075e5992b0f3d4e28ae49e2791ef7b633a56b6c17d849d296314d5d6c5055b1"} Dec 08 17:54:29 crc kubenswrapper[5116]: I1208 17:54:29.873281 5116 kubelet.go:2569] 
"SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-cqlf7" event={"ID":"64788338-b2c7-4deb-a66c-c43e5bfac540","Type":"ContainerStarted","Data":"47cb88036a285b75d23956ca9ce7a32b8945d4de94d99be668c8faf5d3a5ebda"} Dec 08 17:54:30 crc kubenswrapper[5116]: I1208 17:54:30.310487 5116 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-qk9nv"] Dec 08 17:54:30 crc kubenswrapper[5116]: I1208 17:54:30.310959 5116 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-qk9nv" podUID="32bd89b3-881a-4a57-bc82-4f3ecec31abd" containerName="registry-server" containerID="cri-o://7078de0906859f80b4dc18cb978ed4c68a0e522498cf14820c62467084544ec9" gracePeriod=2 Dec 08 17:54:30 crc kubenswrapper[5116]: I1208 17:54:30.920041 5116 generic.go:358] "Generic (PLEG): container finished" podID="32bd89b3-881a-4a57-bc82-4f3ecec31abd" containerID="7078de0906859f80b4dc18cb978ed4c68a0e522498cf14820c62467084544ec9" exitCode=0 Dec 08 17:54:30 crc kubenswrapper[5116]: I1208 17:54:30.920120 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qk9nv" event={"ID":"32bd89b3-881a-4a57-bc82-4f3ecec31abd","Type":"ContainerDied","Data":"7078de0906859f80b4dc18cb978ed4c68a0e522498cf14820c62467084544ec9"} Dec 08 17:54:32 crc kubenswrapper[5116]: I1208 17:54:32.225416 5116 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-qk9nv" Dec 08 17:54:32 crc kubenswrapper[5116]: I1208 17:54:32.348750 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c5dc6\" (UniqueName: \"kubernetes.io/projected/32bd89b3-881a-4a57-bc82-4f3ecec31abd-kube-api-access-c5dc6\") pod \"32bd89b3-881a-4a57-bc82-4f3ecec31abd\" (UID: \"32bd89b3-881a-4a57-bc82-4f3ecec31abd\") " Dec 08 17:54:32 crc kubenswrapper[5116]: I1208 17:54:32.348867 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/32bd89b3-881a-4a57-bc82-4f3ecec31abd-utilities\") pod \"32bd89b3-881a-4a57-bc82-4f3ecec31abd\" (UID: \"32bd89b3-881a-4a57-bc82-4f3ecec31abd\") " Dec 08 17:54:32 crc kubenswrapper[5116]: I1208 17:54:32.348952 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/32bd89b3-881a-4a57-bc82-4f3ecec31abd-catalog-content\") pod \"32bd89b3-881a-4a57-bc82-4f3ecec31abd\" (UID: \"32bd89b3-881a-4a57-bc82-4f3ecec31abd\") " Dec 08 17:54:32 crc kubenswrapper[5116]: I1208 17:54:32.356602 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/32bd89b3-881a-4a57-bc82-4f3ecec31abd-utilities" (OuterVolumeSpecName: "utilities") pod "32bd89b3-881a-4a57-bc82-4f3ecec31abd" (UID: "32bd89b3-881a-4a57-bc82-4f3ecec31abd"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 08 17:54:32 crc kubenswrapper[5116]: I1208 17:54:32.360501 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/32bd89b3-881a-4a57-bc82-4f3ecec31abd-kube-api-access-c5dc6" (OuterVolumeSpecName: "kube-api-access-c5dc6") pod "32bd89b3-881a-4a57-bc82-4f3ecec31abd" (UID: "32bd89b3-881a-4a57-bc82-4f3ecec31abd"). InnerVolumeSpecName "kube-api-access-c5dc6". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 08 17:54:32 crc kubenswrapper[5116]: I1208 17:54:32.361790 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/32bd89b3-881a-4a57-bc82-4f3ecec31abd-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "32bd89b3-881a-4a57-bc82-4f3ecec31abd" (UID: "32bd89b3-881a-4a57-bc82-4f3ecec31abd"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 08 17:54:32 crc kubenswrapper[5116]: I1208 17:54:32.451012 5116 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/32bd89b3-881a-4a57-bc82-4f3ecec31abd-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 08 17:54:32 crc kubenswrapper[5116]: I1208 17:54:32.451069 5116 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-c5dc6\" (UniqueName: \"kubernetes.io/projected/32bd89b3-881a-4a57-bc82-4f3ecec31abd-kube-api-access-c5dc6\") on node \"crc\" DevicePath \"\"" Dec 08 17:54:32 crc kubenswrapper[5116]: I1208 17:54:32.451080 5116 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/32bd89b3-881a-4a57-bc82-4f3ecec31abd-utilities\") on node \"crc\" DevicePath \"\"" Dec 08 17:54:32 crc kubenswrapper[5116]: I1208 17:54:32.935329 5116 generic.go:358] "Generic (PLEG): container finished" podID="64788338-b2c7-4deb-a66c-c43e5bfac540" containerID="47cb88036a285b75d23956ca9ce7a32b8945d4de94d99be668c8faf5d3a5ebda" exitCode=0 Dec 08 17:54:32 crc kubenswrapper[5116]: I1208 17:54:32.935420 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-cqlf7" event={"ID":"64788338-b2c7-4deb-a66c-c43e5bfac540","Type":"ContainerDied","Data":"47cb88036a285b75d23956ca9ce7a32b8945d4de94d99be668c8faf5d3a5ebda"} Dec 08 17:54:32 crc kubenswrapper[5116]: I1208 17:54:32.939937 5116 kubelet.go:2569] "SyncLoop (PLEG): event for 
pod" pod="openshift-marketplace/redhat-marketplace-qk9nv" event={"ID":"32bd89b3-881a-4a57-bc82-4f3ecec31abd","Type":"ContainerDied","Data":"16fa688d8509d85eccb55ea84cac536e4d149b3b807495d0ecfe6c0b0f003566"} Dec 08 17:54:32 crc kubenswrapper[5116]: I1208 17:54:32.940070 5116 scope.go:117] "RemoveContainer" containerID="7078de0906859f80b4dc18cb978ed4c68a0e522498cf14820c62467084544ec9" Dec 08 17:54:32 crc kubenswrapper[5116]: I1208 17:54:32.939986 5116 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-qk9nv" Dec 08 17:54:32 crc kubenswrapper[5116]: I1208 17:54:32.963921 5116 scope.go:117] "RemoveContainer" containerID="097ab254865a80b6748fef549adf7dd40162a0b67039eb1f316f7ac6ce1a21ab" Dec 08 17:54:32 crc kubenswrapper[5116]: I1208 17:54:32.977335 5116 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-qk9nv"] Dec 08 17:54:32 crc kubenswrapper[5116]: I1208 17:54:32.982033 5116 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-qk9nv"] Dec 08 17:54:32 crc kubenswrapper[5116]: I1208 17:54:32.985450 5116 scope.go:117] "RemoveContainer" containerID="eb1c5fb3cbd52ff2e533610b5096e52430bc1a6c9fbd8317b2af46e85fbbd8c9" Dec 08 17:54:33 crc kubenswrapper[5116]: I1208 17:54:33.948277 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-cqlf7" event={"ID":"64788338-b2c7-4deb-a66c-c43e5bfac540","Type":"ContainerStarted","Data":"a2c90bf0daa07cd2c1ad9365b7914364c087c4675fee1d56d6d629e98c623a23"} Dec 08 17:54:33 crc kubenswrapper[5116]: I1208 17:54:33.981671 5116 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-cqlf7" podStartSLOduration=6.270053117 podStartE2EDuration="6.98164423s" podCreationTimestamp="2025-12-08 17:54:27 +0000 UTC" firstStartedPulling="2025-12-08 17:54:28.864392228 +0000 UTC m=+738.661515462" 
lastFinishedPulling="2025-12-08 17:54:29.575983341 +0000 UTC m=+739.373106575" observedRunningTime="2025-12-08 17:54:33.975399218 +0000 UTC m=+743.772522452" watchObservedRunningTime="2025-12-08 17:54:33.98164423 +0000 UTC m=+743.778769404" Dec 08 17:54:34 crc kubenswrapper[5116]: I1208 17:54:34.686546 5116 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="32bd89b3-881a-4a57-bc82-4f3ecec31abd" path="/var/lib/kubelet/pods/32bd89b3-881a-4a57-bc82-4f3ecec31abd/volumes" Dec 08 17:54:37 crc kubenswrapper[5116]: I1208 17:54:37.807278 5116 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-cqlf7" Dec 08 17:54:37 crc kubenswrapper[5116]: I1208 17:54:37.807686 5116 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/redhat-operators-cqlf7" Dec 08 17:54:37 crc kubenswrapper[5116]: I1208 17:54:37.864604 5116 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-cqlf7" Dec 08 17:54:40 crc kubenswrapper[5116]: I1208 17:54:40.760338 5116 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-m8bmb" Dec 08 17:54:48 crc kubenswrapper[5116]: I1208 17:54:48.015153 5116 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-cqlf7" Dec 08 17:54:48 crc kubenswrapper[5116]: I1208 17:54:48.062924 5116 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-cqlf7"] Dec 08 17:54:48 crc kubenswrapper[5116]: I1208 17:54:48.063207 5116 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-cqlf7" podUID="64788338-b2c7-4deb-a66c-c43e5bfac540" containerName="registry-server" containerID="cri-o://a2c90bf0daa07cd2c1ad9365b7914364c087c4675fee1d56d6d629e98c623a23" gracePeriod=2 Dec 08 17:54:48 crc 
kubenswrapper[5116]: I1208 17:54:48.955465 5116 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-cqlf7" Dec 08 17:54:48 crc kubenswrapper[5116]: I1208 17:54:48.972184 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/64788338-b2c7-4deb-a66c-c43e5bfac540-utilities\") pod \"64788338-b2c7-4deb-a66c-c43e5bfac540\" (UID: \"64788338-b2c7-4deb-a66c-c43e5bfac540\") " Dec 08 17:54:48 crc kubenswrapper[5116]: I1208 17:54:48.972359 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rms89\" (UniqueName: \"kubernetes.io/projected/64788338-b2c7-4deb-a66c-c43e5bfac540-kube-api-access-rms89\") pod \"64788338-b2c7-4deb-a66c-c43e5bfac540\" (UID: \"64788338-b2c7-4deb-a66c-c43e5bfac540\") " Dec 08 17:54:48 crc kubenswrapper[5116]: I1208 17:54:48.972430 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/64788338-b2c7-4deb-a66c-c43e5bfac540-catalog-content\") pod \"64788338-b2c7-4deb-a66c-c43e5bfac540\" (UID: \"64788338-b2c7-4deb-a66c-c43e5bfac540\") " Dec 08 17:54:48 crc kubenswrapper[5116]: I1208 17:54:48.974462 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/64788338-b2c7-4deb-a66c-c43e5bfac540-utilities" (OuterVolumeSpecName: "utilities") pod "64788338-b2c7-4deb-a66c-c43e5bfac540" (UID: "64788338-b2c7-4deb-a66c-c43e5bfac540"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 08 17:54:48 crc kubenswrapper[5116]: I1208 17:54:48.980586 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/64788338-b2c7-4deb-a66c-c43e5bfac540-kube-api-access-rms89" (OuterVolumeSpecName: "kube-api-access-rms89") pod "64788338-b2c7-4deb-a66c-c43e5bfac540" (UID: "64788338-b2c7-4deb-a66c-c43e5bfac540"). InnerVolumeSpecName "kube-api-access-rms89". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 08 17:54:49 crc kubenswrapper[5116]: I1208 17:54:49.045158 5116 generic.go:358] "Generic (PLEG): container finished" podID="64788338-b2c7-4deb-a66c-c43e5bfac540" containerID="a2c90bf0daa07cd2c1ad9365b7914364c087c4675fee1d56d6d629e98c623a23" exitCode=0 Dec 08 17:54:49 crc kubenswrapper[5116]: I1208 17:54:49.045275 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-cqlf7" event={"ID":"64788338-b2c7-4deb-a66c-c43e5bfac540","Type":"ContainerDied","Data":"a2c90bf0daa07cd2c1ad9365b7914364c087c4675fee1d56d6d629e98c623a23"} Dec 08 17:54:49 crc kubenswrapper[5116]: I1208 17:54:49.045302 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-cqlf7" event={"ID":"64788338-b2c7-4deb-a66c-c43e5bfac540","Type":"ContainerDied","Data":"3075e5992b0f3d4e28ae49e2791ef7b633a56b6c17d849d296314d5d6c5055b1"} Dec 08 17:54:49 crc kubenswrapper[5116]: I1208 17:54:49.045318 5116 scope.go:117] "RemoveContainer" containerID="a2c90bf0daa07cd2c1ad9365b7914364c087c4675fee1d56d6d629e98c623a23" Dec 08 17:54:49 crc kubenswrapper[5116]: I1208 17:54:49.045450 5116 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-cqlf7" Dec 08 17:54:49 crc kubenswrapper[5116]: I1208 17:54:49.062698 5116 scope.go:117] "RemoveContainer" containerID="47cb88036a285b75d23956ca9ce7a32b8945d4de94d99be668c8faf5d3a5ebda" Dec 08 17:54:49 crc kubenswrapper[5116]: I1208 17:54:49.073738 5116 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/64788338-b2c7-4deb-a66c-c43e5bfac540-utilities\") on node \"crc\" DevicePath \"\"" Dec 08 17:54:49 crc kubenswrapper[5116]: I1208 17:54:49.073772 5116 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-rms89\" (UniqueName: \"kubernetes.io/projected/64788338-b2c7-4deb-a66c-c43e5bfac540-kube-api-access-rms89\") on node \"crc\" DevicePath \"\"" Dec 08 17:54:49 crc kubenswrapper[5116]: I1208 17:54:49.080993 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/64788338-b2c7-4deb-a66c-c43e5bfac540-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "64788338-b2c7-4deb-a66c-c43e5bfac540" (UID: "64788338-b2c7-4deb-a66c-c43e5bfac540"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 08 17:54:49 crc kubenswrapper[5116]: I1208 17:54:49.084865 5116 scope.go:117] "RemoveContainer" containerID="45392d91b77b1c62a62f59612f23e760ed65e35217dee74ac2664a675f0d7c21" Dec 08 17:54:49 crc kubenswrapper[5116]: I1208 17:54:49.100133 5116 scope.go:117] "RemoveContainer" containerID="a2c90bf0daa07cd2c1ad9365b7914364c087c4675fee1d56d6d629e98c623a23" Dec 08 17:54:49 crc kubenswrapper[5116]: E1208 17:54:49.100595 5116 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a2c90bf0daa07cd2c1ad9365b7914364c087c4675fee1d56d6d629e98c623a23\": container with ID starting with a2c90bf0daa07cd2c1ad9365b7914364c087c4675fee1d56d6d629e98c623a23 not found: ID does not exist" containerID="a2c90bf0daa07cd2c1ad9365b7914364c087c4675fee1d56d6d629e98c623a23" Dec 08 17:54:49 crc kubenswrapper[5116]: I1208 17:54:49.100629 5116 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a2c90bf0daa07cd2c1ad9365b7914364c087c4675fee1d56d6d629e98c623a23"} err="failed to get container status \"a2c90bf0daa07cd2c1ad9365b7914364c087c4675fee1d56d6d629e98c623a23\": rpc error: code = NotFound desc = could not find container \"a2c90bf0daa07cd2c1ad9365b7914364c087c4675fee1d56d6d629e98c623a23\": container with ID starting with a2c90bf0daa07cd2c1ad9365b7914364c087c4675fee1d56d6d629e98c623a23 not found: ID does not exist" Dec 08 17:54:49 crc kubenswrapper[5116]: I1208 17:54:49.100651 5116 scope.go:117] "RemoveContainer" containerID="47cb88036a285b75d23956ca9ce7a32b8945d4de94d99be668c8faf5d3a5ebda" Dec 08 17:54:49 crc kubenswrapper[5116]: E1208 17:54:49.101078 5116 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"47cb88036a285b75d23956ca9ce7a32b8945d4de94d99be668c8faf5d3a5ebda\": container with ID starting with 
47cb88036a285b75d23956ca9ce7a32b8945d4de94d99be668c8faf5d3a5ebda not found: ID does not exist" containerID="47cb88036a285b75d23956ca9ce7a32b8945d4de94d99be668c8faf5d3a5ebda" Dec 08 17:54:49 crc kubenswrapper[5116]: I1208 17:54:49.101130 5116 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"47cb88036a285b75d23956ca9ce7a32b8945d4de94d99be668c8faf5d3a5ebda"} err="failed to get container status \"47cb88036a285b75d23956ca9ce7a32b8945d4de94d99be668c8faf5d3a5ebda\": rpc error: code = NotFound desc = could not find container \"47cb88036a285b75d23956ca9ce7a32b8945d4de94d99be668c8faf5d3a5ebda\": container with ID starting with 47cb88036a285b75d23956ca9ce7a32b8945d4de94d99be668c8faf5d3a5ebda not found: ID does not exist" Dec 08 17:54:49 crc kubenswrapper[5116]: I1208 17:54:49.101163 5116 scope.go:117] "RemoveContainer" containerID="45392d91b77b1c62a62f59612f23e760ed65e35217dee74ac2664a675f0d7c21" Dec 08 17:54:49 crc kubenswrapper[5116]: E1208 17:54:49.101540 5116 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"45392d91b77b1c62a62f59612f23e760ed65e35217dee74ac2664a675f0d7c21\": container with ID starting with 45392d91b77b1c62a62f59612f23e760ed65e35217dee74ac2664a675f0d7c21 not found: ID does not exist" containerID="45392d91b77b1c62a62f59612f23e760ed65e35217dee74ac2664a675f0d7c21" Dec 08 17:54:49 crc kubenswrapper[5116]: I1208 17:54:49.101567 5116 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"45392d91b77b1c62a62f59612f23e760ed65e35217dee74ac2664a675f0d7c21"} err="failed to get container status \"45392d91b77b1c62a62f59612f23e760ed65e35217dee74ac2664a675f0d7c21\": rpc error: code = NotFound desc = could not find container \"45392d91b77b1c62a62f59612f23e760ed65e35217dee74ac2664a675f0d7c21\": container with ID starting with 45392d91b77b1c62a62f59612f23e760ed65e35217dee74ac2664a675f0d7c21 not found: ID does not 
exist" Dec 08 17:54:49 crc kubenswrapper[5116]: I1208 17:54:49.174913 5116 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/64788338-b2c7-4deb-a66c-c43e5bfac540-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 08 17:54:49 crc kubenswrapper[5116]: I1208 17:54:49.377359 5116 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-cqlf7"] Dec 08 17:54:49 crc kubenswrapper[5116]: I1208 17:54:49.380682 5116 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-cqlf7"] Dec 08 17:54:50 crc kubenswrapper[5116]: I1208 17:54:50.688950 5116 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="64788338-b2c7-4deb-a66c-c43e5bfac540" path="/var/lib/kubelet/pods/64788338-b2c7-4deb-a66c-c43e5bfac540/volumes" Dec 08 17:55:00 crc kubenswrapper[5116]: I1208 17:55:00.351574 5116 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-vpndr"] Dec 08 17:55:00 crc kubenswrapper[5116]: I1208 17:55:00.352964 5116 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="32bd89b3-881a-4a57-bc82-4f3ecec31abd" containerName="registry-server" Dec 08 17:55:00 crc kubenswrapper[5116]: I1208 17:55:00.352998 5116 state_mem.go:107] "Deleted CPUSet assignment" podUID="32bd89b3-881a-4a57-bc82-4f3ecec31abd" containerName="registry-server" Dec 08 17:55:00 crc kubenswrapper[5116]: I1208 17:55:00.353017 5116 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="64788338-b2c7-4deb-a66c-c43e5bfac540" containerName="extract-content" Dec 08 17:55:00 crc kubenswrapper[5116]: I1208 17:55:00.353025 5116 state_mem.go:107] "Deleted CPUSet assignment" podUID="64788338-b2c7-4deb-a66c-c43e5bfac540" containerName="extract-content" Dec 08 17:55:00 crc kubenswrapper[5116]: I1208 17:55:00.353042 5116 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" 
podUID="64788338-b2c7-4deb-a66c-c43e5bfac540" containerName="extract-utilities" Dec 08 17:55:00 crc kubenswrapper[5116]: I1208 17:55:00.353054 5116 state_mem.go:107] "Deleted CPUSet assignment" podUID="64788338-b2c7-4deb-a66c-c43e5bfac540" containerName="extract-utilities" Dec 08 17:55:00 crc kubenswrapper[5116]: I1208 17:55:00.353066 5116 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="32bd89b3-881a-4a57-bc82-4f3ecec31abd" containerName="extract-utilities" Dec 08 17:55:00 crc kubenswrapper[5116]: I1208 17:55:00.353073 5116 state_mem.go:107] "Deleted CPUSet assignment" podUID="32bd89b3-881a-4a57-bc82-4f3ecec31abd" containerName="extract-utilities" Dec 08 17:55:00 crc kubenswrapper[5116]: I1208 17:55:00.353085 5116 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="32bd89b3-881a-4a57-bc82-4f3ecec31abd" containerName="extract-content" Dec 08 17:55:00 crc kubenswrapper[5116]: I1208 17:55:00.353093 5116 state_mem.go:107] "Deleted CPUSet assignment" podUID="32bd89b3-881a-4a57-bc82-4f3ecec31abd" containerName="extract-content" Dec 08 17:55:00 crc kubenswrapper[5116]: I1208 17:55:00.353128 5116 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="64788338-b2c7-4deb-a66c-c43e5bfac540" containerName="registry-server" Dec 08 17:55:00 crc kubenswrapper[5116]: I1208 17:55:00.353137 5116 state_mem.go:107] "Deleted CPUSet assignment" podUID="64788338-b2c7-4deb-a66c-c43e5bfac540" containerName="registry-server" Dec 08 17:55:00 crc kubenswrapper[5116]: I1208 17:55:00.353273 5116 memory_manager.go:356] "RemoveStaleState removing state" podUID="64788338-b2c7-4deb-a66c-c43e5bfac540" containerName="registry-server" Dec 08 17:55:00 crc kubenswrapper[5116]: I1208 17:55:00.353293 5116 memory_manager.go:356] "RemoveStaleState removing state" podUID="32bd89b3-881a-4a57-bc82-4f3ecec31abd" containerName="registry-server" Dec 08 17:55:00 crc kubenswrapper[5116]: I1208 17:55:00.377325 5116 
kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-vpndr"] Dec 08 17:55:00 crc kubenswrapper[5116]: I1208 17:55:00.377508 5116 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-vpndr" Dec 08 17:55:00 crc kubenswrapper[5116]: I1208 17:55:00.441445 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/357896b4-e592-4182-85c6-043e0ba8d4d4-utilities\") pod \"community-operators-vpndr\" (UID: \"357896b4-e592-4182-85c6-043e0ba8d4d4\") " pod="openshift-marketplace/community-operators-vpndr" Dec 08 17:55:00 crc kubenswrapper[5116]: I1208 17:55:00.441738 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/357896b4-e592-4182-85c6-043e0ba8d4d4-catalog-content\") pod \"community-operators-vpndr\" (UID: \"357896b4-e592-4182-85c6-043e0ba8d4d4\") " pod="openshift-marketplace/community-operators-vpndr" Dec 08 17:55:00 crc kubenswrapper[5116]: I1208 17:55:00.441927 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pqfw5\" (UniqueName: \"kubernetes.io/projected/357896b4-e592-4182-85c6-043e0ba8d4d4-kube-api-access-pqfw5\") pod \"community-operators-vpndr\" (UID: \"357896b4-e592-4182-85c6-043e0ba8d4d4\") " pod="openshift-marketplace/community-operators-vpndr" Dec 08 17:55:00 crc kubenswrapper[5116]: I1208 17:55:00.543191 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/357896b4-e592-4182-85c6-043e0ba8d4d4-catalog-content\") pod \"community-operators-vpndr\" (UID: \"357896b4-e592-4182-85c6-043e0ba8d4d4\") " pod="openshift-marketplace/community-operators-vpndr" Dec 08 17:55:00 crc kubenswrapper[5116]: I1208 
17:55:00.543320 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-pqfw5\" (UniqueName: \"kubernetes.io/projected/357896b4-e592-4182-85c6-043e0ba8d4d4-kube-api-access-pqfw5\") pod \"community-operators-vpndr\" (UID: \"357896b4-e592-4182-85c6-043e0ba8d4d4\") " pod="openshift-marketplace/community-operators-vpndr" Dec 08 17:55:00 crc kubenswrapper[5116]: I1208 17:55:00.543374 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/357896b4-e592-4182-85c6-043e0ba8d4d4-utilities\") pod \"community-operators-vpndr\" (UID: \"357896b4-e592-4182-85c6-043e0ba8d4d4\") " pod="openshift-marketplace/community-operators-vpndr" Dec 08 17:55:00 crc kubenswrapper[5116]: I1208 17:55:00.544218 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/357896b4-e592-4182-85c6-043e0ba8d4d4-utilities\") pod \"community-operators-vpndr\" (UID: \"357896b4-e592-4182-85c6-043e0ba8d4d4\") " pod="openshift-marketplace/community-operators-vpndr" Dec 08 17:55:00 crc kubenswrapper[5116]: I1208 17:55:00.544598 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/357896b4-e592-4182-85c6-043e0ba8d4d4-catalog-content\") pod \"community-operators-vpndr\" (UID: \"357896b4-e592-4182-85c6-043e0ba8d4d4\") " pod="openshift-marketplace/community-operators-vpndr" Dec 08 17:55:00 crc kubenswrapper[5116]: I1208 17:55:00.563404 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-pqfw5\" (UniqueName: \"kubernetes.io/projected/357896b4-e592-4182-85c6-043e0ba8d4d4-kube-api-access-pqfw5\") pod \"community-operators-vpndr\" (UID: \"357896b4-e592-4182-85c6-043e0ba8d4d4\") " pod="openshift-marketplace/community-operators-vpndr" Dec 08 17:55:00 crc kubenswrapper[5116]: I1208 17:55:00.696135 5116 util.go:30] "No 
sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-vpndr" Dec 08 17:55:00 crc kubenswrapper[5116]: I1208 17:55:00.950994 5116 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-vpndr"] Dec 08 17:55:01 crc kubenswrapper[5116]: I1208 17:55:01.127068 5116 generic.go:358] "Generic (PLEG): container finished" podID="357896b4-e592-4182-85c6-043e0ba8d4d4" containerID="1cccae54ddef7d231cb171b7e75496bf294215d7651c5a2c6f172b1e5a7b9568" exitCode=0 Dec 08 17:55:01 crc kubenswrapper[5116]: I1208 17:55:01.127174 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vpndr" event={"ID":"357896b4-e592-4182-85c6-043e0ba8d4d4","Type":"ContainerDied","Data":"1cccae54ddef7d231cb171b7e75496bf294215d7651c5a2c6f172b1e5a7b9568"} Dec 08 17:55:01 crc kubenswrapper[5116]: I1208 17:55:01.127587 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vpndr" event={"ID":"357896b4-e592-4182-85c6-043e0ba8d4d4","Type":"ContainerStarted","Data":"43930ce126290b942781f97876c5c6047faf83cc924d66958efad8e6984367f7"} Dec 08 17:55:02 crc kubenswrapper[5116]: I1208 17:55:02.138070 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vpndr" event={"ID":"357896b4-e592-4182-85c6-043e0ba8d4d4","Type":"ContainerStarted","Data":"f884211af2aef22d41b56ae5727899fed04eb02d4c834e5f22afeed6212eb62b"} Dec 08 17:55:03 crc kubenswrapper[5116]: I1208 17:55:03.146953 5116 generic.go:358] "Generic (PLEG): container finished" podID="357896b4-e592-4182-85c6-043e0ba8d4d4" containerID="f884211af2aef22d41b56ae5727899fed04eb02d4c834e5f22afeed6212eb62b" exitCode=0 Dec 08 17:55:03 crc kubenswrapper[5116]: I1208 17:55:03.147009 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vpndr" 
event={"ID":"357896b4-e592-4182-85c6-043e0ba8d4d4","Type":"ContainerDied","Data":"f884211af2aef22d41b56ae5727899fed04eb02d4c834e5f22afeed6212eb62b"} Dec 08 17:55:03 crc kubenswrapper[5116]: I1208 17:55:03.335586 5116 patch_prober.go:28] interesting pod/machine-config-daemon-frh5r container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 08 17:55:03 crc kubenswrapper[5116]: I1208 17:55:03.335710 5116 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-frh5r" podUID="f2e88345-fa91-4bb3-bd9d-a89a8293bffe" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 08 17:55:04 crc kubenswrapper[5116]: I1208 17:55:04.154982 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vpndr" event={"ID":"357896b4-e592-4182-85c6-043e0ba8d4d4","Type":"ContainerStarted","Data":"b75ff6c2e900c3938f76295644bab7be837f3bd1a95cd55d571011fa60539938"} Dec 08 17:55:04 crc kubenswrapper[5116]: I1208 17:55:04.175212 5116 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-vpndr" podStartSLOduration=3.476891582 podStartE2EDuration="4.175137851s" podCreationTimestamp="2025-12-08 17:55:00 +0000 UTC" firstStartedPulling="2025-12-08 17:55:01.128098911 +0000 UTC m=+770.925222155" lastFinishedPulling="2025-12-08 17:55:01.82634517 +0000 UTC m=+771.623468424" observedRunningTime="2025-12-08 17:55:04.173935111 +0000 UTC m=+773.971058345" watchObservedRunningTime="2025-12-08 17:55:04.175137851 +0000 UTC m=+773.972261095" Dec 08 17:55:10 crc kubenswrapper[5116]: I1208 17:55:10.696852 5116 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" 
pod="openshift-marketplace/community-operators-vpndr" Dec 08 17:55:10 crc kubenswrapper[5116]: I1208 17:55:10.697570 5116 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-vpndr" Dec 08 17:55:10 crc kubenswrapper[5116]: I1208 17:55:10.736515 5116 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-vpndr" Dec 08 17:55:11 crc kubenswrapper[5116]: I1208 17:55:11.268733 5116 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-vpndr" Dec 08 17:55:11 crc kubenswrapper[5116]: I1208 17:55:11.326463 5116 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-vpndr"] Dec 08 17:55:13 crc kubenswrapper[5116]: I1208 17:55:13.242140 5116 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/community-operators-vpndr" podUID="357896b4-e592-4182-85c6-043e0ba8d4d4" containerName="registry-server" containerID="cri-o://b75ff6c2e900c3938f76295644bab7be837f3bd1a95cd55d571011fa60539938" gracePeriod=2 Dec 08 17:55:14 crc kubenswrapper[5116]: I1208 17:55:14.092928 5116 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-vpndr" Dec 08 17:55:14 crc kubenswrapper[5116]: I1208 17:55:14.241163 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/357896b4-e592-4182-85c6-043e0ba8d4d4-catalog-content\") pod \"357896b4-e592-4182-85c6-043e0ba8d4d4\" (UID: \"357896b4-e592-4182-85c6-043e0ba8d4d4\") " Dec 08 17:55:14 crc kubenswrapper[5116]: I1208 17:55:14.241225 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/357896b4-e592-4182-85c6-043e0ba8d4d4-utilities\") pod \"357896b4-e592-4182-85c6-043e0ba8d4d4\" (UID: \"357896b4-e592-4182-85c6-043e0ba8d4d4\") " Dec 08 17:55:14 crc kubenswrapper[5116]: I1208 17:55:14.241273 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pqfw5\" (UniqueName: \"kubernetes.io/projected/357896b4-e592-4182-85c6-043e0ba8d4d4-kube-api-access-pqfw5\") pod \"357896b4-e592-4182-85c6-043e0ba8d4d4\" (UID: \"357896b4-e592-4182-85c6-043e0ba8d4d4\") " Dec 08 17:55:14 crc kubenswrapper[5116]: I1208 17:55:14.244860 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/357896b4-e592-4182-85c6-043e0ba8d4d4-utilities" (OuterVolumeSpecName: "utilities") pod "357896b4-e592-4182-85c6-043e0ba8d4d4" (UID: "357896b4-e592-4182-85c6-043e0ba8d4d4"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 08 17:55:14 crc kubenswrapper[5116]: I1208 17:55:14.248373 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/357896b4-e592-4182-85c6-043e0ba8d4d4-kube-api-access-pqfw5" (OuterVolumeSpecName: "kube-api-access-pqfw5") pod "357896b4-e592-4182-85c6-043e0ba8d4d4" (UID: "357896b4-e592-4182-85c6-043e0ba8d4d4"). InnerVolumeSpecName "kube-api-access-pqfw5". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 08 17:55:14 crc kubenswrapper[5116]: I1208 17:55:14.251501 5116 generic.go:358] "Generic (PLEG): container finished" podID="357896b4-e592-4182-85c6-043e0ba8d4d4" containerID="b75ff6c2e900c3938f76295644bab7be837f3bd1a95cd55d571011fa60539938" exitCode=0 Dec 08 17:55:14 crc kubenswrapper[5116]: I1208 17:55:14.251540 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vpndr" event={"ID":"357896b4-e592-4182-85c6-043e0ba8d4d4","Type":"ContainerDied","Data":"b75ff6c2e900c3938f76295644bab7be837f3bd1a95cd55d571011fa60539938"} Dec 08 17:55:14 crc kubenswrapper[5116]: I1208 17:55:14.251605 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vpndr" event={"ID":"357896b4-e592-4182-85c6-043e0ba8d4d4","Type":"ContainerDied","Data":"43930ce126290b942781f97876c5c6047faf83cc924d66958efad8e6984367f7"} Dec 08 17:55:14 crc kubenswrapper[5116]: I1208 17:55:14.251629 5116 scope.go:117] "RemoveContainer" containerID="b75ff6c2e900c3938f76295644bab7be837f3bd1a95cd55d571011fa60539938" Dec 08 17:55:14 crc kubenswrapper[5116]: I1208 17:55:14.251637 5116 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-vpndr" Dec 08 17:55:14 crc kubenswrapper[5116]: I1208 17:55:14.272413 5116 scope.go:117] "RemoveContainer" containerID="f884211af2aef22d41b56ae5727899fed04eb02d4c834e5f22afeed6212eb62b" Dec 08 17:55:14 crc kubenswrapper[5116]: I1208 17:55:14.288711 5116 scope.go:117] "RemoveContainer" containerID="1cccae54ddef7d231cb171b7e75496bf294215d7651c5a2c6f172b1e5a7b9568" Dec 08 17:55:14 crc kubenswrapper[5116]: I1208 17:55:14.294019 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/357896b4-e592-4182-85c6-043e0ba8d4d4-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "357896b4-e592-4182-85c6-043e0ba8d4d4" (UID: "357896b4-e592-4182-85c6-043e0ba8d4d4"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 08 17:55:14 crc kubenswrapper[5116]: I1208 17:55:14.302644 5116 scope.go:117] "RemoveContainer" containerID="b75ff6c2e900c3938f76295644bab7be837f3bd1a95cd55d571011fa60539938" Dec 08 17:55:14 crc kubenswrapper[5116]: E1208 17:55:14.303219 5116 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b75ff6c2e900c3938f76295644bab7be837f3bd1a95cd55d571011fa60539938\": container with ID starting with b75ff6c2e900c3938f76295644bab7be837f3bd1a95cd55d571011fa60539938 not found: ID does not exist" containerID="b75ff6c2e900c3938f76295644bab7be837f3bd1a95cd55d571011fa60539938" Dec 08 17:55:14 crc kubenswrapper[5116]: I1208 17:55:14.303291 5116 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b75ff6c2e900c3938f76295644bab7be837f3bd1a95cd55d571011fa60539938"} err="failed to get container status \"b75ff6c2e900c3938f76295644bab7be837f3bd1a95cd55d571011fa60539938\": rpc error: code = NotFound desc = could not find container \"b75ff6c2e900c3938f76295644bab7be837f3bd1a95cd55d571011fa60539938\": 
container with ID starting with b75ff6c2e900c3938f76295644bab7be837f3bd1a95cd55d571011fa60539938 not found: ID does not exist" Dec 08 17:55:14 crc kubenswrapper[5116]: I1208 17:55:14.303318 5116 scope.go:117] "RemoveContainer" containerID="f884211af2aef22d41b56ae5727899fed04eb02d4c834e5f22afeed6212eb62b" Dec 08 17:55:14 crc kubenswrapper[5116]: E1208 17:55:14.303755 5116 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f884211af2aef22d41b56ae5727899fed04eb02d4c834e5f22afeed6212eb62b\": container with ID starting with f884211af2aef22d41b56ae5727899fed04eb02d4c834e5f22afeed6212eb62b not found: ID does not exist" containerID="f884211af2aef22d41b56ae5727899fed04eb02d4c834e5f22afeed6212eb62b" Dec 08 17:55:14 crc kubenswrapper[5116]: I1208 17:55:14.303820 5116 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f884211af2aef22d41b56ae5727899fed04eb02d4c834e5f22afeed6212eb62b"} err="failed to get container status \"f884211af2aef22d41b56ae5727899fed04eb02d4c834e5f22afeed6212eb62b\": rpc error: code = NotFound desc = could not find container \"f884211af2aef22d41b56ae5727899fed04eb02d4c834e5f22afeed6212eb62b\": container with ID starting with f884211af2aef22d41b56ae5727899fed04eb02d4c834e5f22afeed6212eb62b not found: ID does not exist" Dec 08 17:55:14 crc kubenswrapper[5116]: I1208 17:55:14.303854 5116 scope.go:117] "RemoveContainer" containerID="1cccae54ddef7d231cb171b7e75496bf294215d7651c5a2c6f172b1e5a7b9568" Dec 08 17:55:14 crc kubenswrapper[5116]: E1208 17:55:14.304171 5116 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1cccae54ddef7d231cb171b7e75496bf294215d7651c5a2c6f172b1e5a7b9568\": container with ID starting with 1cccae54ddef7d231cb171b7e75496bf294215d7651c5a2c6f172b1e5a7b9568 not found: ID does not exist" 
containerID="1cccae54ddef7d231cb171b7e75496bf294215d7651c5a2c6f172b1e5a7b9568" Dec 08 17:55:14 crc kubenswrapper[5116]: I1208 17:55:14.304260 5116 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1cccae54ddef7d231cb171b7e75496bf294215d7651c5a2c6f172b1e5a7b9568"} err="failed to get container status \"1cccae54ddef7d231cb171b7e75496bf294215d7651c5a2c6f172b1e5a7b9568\": rpc error: code = NotFound desc = could not find container \"1cccae54ddef7d231cb171b7e75496bf294215d7651c5a2c6f172b1e5a7b9568\": container with ID starting with 1cccae54ddef7d231cb171b7e75496bf294215d7651c5a2c6f172b1e5a7b9568 not found: ID does not exist" Dec 08 17:55:14 crc kubenswrapper[5116]: I1208 17:55:14.342628 5116 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/357896b4-e592-4182-85c6-043e0ba8d4d4-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 08 17:55:14 crc kubenswrapper[5116]: I1208 17:55:14.342691 5116 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/357896b4-e592-4182-85c6-043e0ba8d4d4-utilities\") on node \"crc\" DevicePath \"\"" Dec 08 17:55:14 crc kubenswrapper[5116]: I1208 17:55:14.342701 5116 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-pqfw5\" (UniqueName: \"kubernetes.io/projected/357896b4-e592-4182-85c6-043e0ba8d4d4-kube-api-access-pqfw5\") on node \"crc\" DevicePath \"\"" Dec 08 17:55:14 crc kubenswrapper[5116]: I1208 17:55:14.593610 5116 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-vpndr"] Dec 08 17:55:14 crc kubenswrapper[5116]: I1208 17:55:14.599166 5116 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-vpndr"] Dec 08 17:55:14 crc kubenswrapper[5116]: I1208 17:55:14.688530 5116 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="357896b4-e592-4182-85c6-043e0ba8d4d4" path="/var/lib/kubelet/pods/357896b4-e592-4182-85c6-043e0ba8d4d4/volumes" Dec 08 17:55:17 crc kubenswrapper[5116]: I1208 17:55:17.967090 5116 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-rvv8x"] Dec 08 17:55:17 crc kubenswrapper[5116]: I1208 17:55:17.967511 5116 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-rvv8x" podUID="d51a780b-e856-4552-aa49-7f7b4b654d7e" containerName="registry-server" containerID="cri-o://6968f3414929686a0913da4e4e4886304a95a6b74b577a139c613c522fc16f46" gracePeriod=30 Dec 08 17:55:18 crc kubenswrapper[5116]: E1208 17:55:18.192890 5116 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 6968f3414929686a0913da4e4e4886304a95a6b74b577a139c613c522fc16f46 is running failed: container process not found" containerID="6968f3414929686a0913da4e4e4886304a95a6b74b577a139c613c522fc16f46" cmd=["grpc_health_probe","-addr=:50051"] Dec 08 17:55:18 crc kubenswrapper[5116]: E1208 17:55:18.193335 5116 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 6968f3414929686a0913da4e4e4886304a95a6b74b577a139c613c522fc16f46 is running failed: container process not found" containerID="6968f3414929686a0913da4e4e4886304a95a6b74b577a139c613c522fc16f46" cmd=["grpc_health_probe","-addr=:50051"] Dec 08 17:55:18 crc kubenswrapper[5116]: E1208 17:55:18.193726 5116 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 6968f3414929686a0913da4e4e4886304a95a6b74b577a139c613c522fc16f46 is running failed: container process not found" containerID="6968f3414929686a0913da4e4e4886304a95a6b74b577a139c613c522fc16f46" 
cmd=["grpc_health_probe","-addr=:50051"] Dec 08 17:55:18 crc kubenswrapper[5116]: E1208 17:55:18.193864 5116 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 6968f3414929686a0913da4e4e4886304a95a6b74b577a139c613c522fc16f46 is running failed: container process not found" probeType="Readiness" pod="openshift-marketplace/redhat-marketplace-rvv8x" podUID="d51a780b-e856-4552-aa49-7f7b4b654d7e" containerName="registry-server" probeResult="unknown" Dec 08 17:55:18 crc kubenswrapper[5116]: I1208 17:55:18.279505 5116 generic.go:358] "Generic (PLEG): container finished" podID="d51a780b-e856-4552-aa49-7f7b4b654d7e" containerID="6968f3414929686a0913da4e4e4886304a95a6b74b577a139c613c522fc16f46" exitCode=0 Dec 08 17:55:18 crc kubenswrapper[5116]: I1208 17:55:18.279576 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rvv8x" event={"ID":"d51a780b-e856-4552-aa49-7f7b4b654d7e","Type":"ContainerDied","Data":"6968f3414929686a0913da4e4e4886304a95a6b74b577a139c613c522fc16f46"} Dec 08 17:55:18 crc kubenswrapper[5116]: I1208 17:55:18.324728 5116 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rvv8x" Dec 08 17:55:18 crc kubenswrapper[5116]: I1208 17:55:18.496067 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fh94c\" (UniqueName: \"kubernetes.io/projected/d51a780b-e856-4552-aa49-7f7b4b654d7e-kube-api-access-fh94c\") pod \"d51a780b-e856-4552-aa49-7f7b4b654d7e\" (UID: \"d51a780b-e856-4552-aa49-7f7b4b654d7e\") " Dec 08 17:55:18 crc kubenswrapper[5116]: I1208 17:55:18.496166 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d51a780b-e856-4552-aa49-7f7b4b654d7e-catalog-content\") pod \"d51a780b-e856-4552-aa49-7f7b4b654d7e\" (UID: \"d51a780b-e856-4552-aa49-7f7b4b654d7e\") " Dec 08 17:55:18 crc kubenswrapper[5116]: I1208 17:55:18.496320 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d51a780b-e856-4552-aa49-7f7b4b654d7e-utilities\") pod \"d51a780b-e856-4552-aa49-7f7b4b654d7e\" (UID: \"d51a780b-e856-4552-aa49-7f7b4b654d7e\") " Dec 08 17:55:18 crc kubenswrapper[5116]: I1208 17:55:18.498067 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d51a780b-e856-4552-aa49-7f7b4b654d7e-utilities" (OuterVolumeSpecName: "utilities") pod "d51a780b-e856-4552-aa49-7f7b4b654d7e" (UID: "d51a780b-e856-4552-aa49-7f7b4b654d7e"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 08 17:55:18 crc kubenswrapper[5116]: I1208 17:55:18.503933 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d51a780b-e856-4552-aa49-7f7b4b654d7e-kube-api-access-fh94c" (OuterVolumeSpecName: "kube-api-access-fh94c") pod "d51a780b-e856-4552-aa49-7f7b4b654d7e" (UID: "d51a780b-e856-4552-aa49-7f7b4b654d7e"). InnerVolumeSpecName "kube-api-access-fh94c". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 08 17:55:18 crc kubenswrapper[5116]: I1208 17:55:18.508358 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d51a780b-e856-4552-aa49-7f7b4b654d7e-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "d51a780b-e856-4552-aa49-7f7b4b654d7e" (UID: "d51a780b-e856-4552-aa49-7f7b4b654d7e"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 08 17:55:18 crc kubenswrapper[5116]: I1208 17:55:18.598183 5116 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-fh94c\" (UniqueName: \"kubernetes.io/projected/d51a780b-e856-4552-aa49-7f7b4b654d7e-kube-api-access-fh94c\") on node \"crc\" DevicePath \"\"" Dec 08 17:55:18 crc kubenswrapper[5116]: I1208 17:55:18.598234 5116 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d51a780b-e856-4552-aa49-7f7b4b654d7e-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 08 17:55:18 crc kubenswrapper[5116]: I1208 17:55:18.598299 5116 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d51a780b-e856-4552-aa49-7f7b4b654d7e-utilities\") on node \"crc\" DevicePath \"\"" Dec 08 17:55:19 crc kubenswrapper[5116]: I1208 17:55:19.017207 5116 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-5d9d95bf5b-z5k8s"] Dec 08 17:55:19 crc kubenswrapper[5116]: I1208 17:55:19.017905 5116 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="357896b4-e592-4182-85c6-043e0ba8d4d4" containerName="extract-content" Dec 08 17:55:19 crc kubenswrapper[5116]: I1208 17:55:19.017923 5116 state_mem.go:107] "Deleted CPUSet assignment" podUID="357896b4-e592-4182-85c6-043e0ba8d4d4" containerName="extract-content" Dec 08 17:55:19 crc kubenswrapper[5116]: I1208 17:55:19.017945 5116 cpu_manager.go:401] 
"RemoveStaleState: containerMap: removing container" podUID="d51a780b-e856-4552-aa49-7f7b4b654d7e" containerName="extract-utilities" Dec 08 17:55:19 crc kubenswrapper[5116]: I1208 17:55:19.017951 5116 state_mem.go:107] "Deleted CPUSet assignment" podUID="d51a780b-e856-4552-aa49-7f7b4b654d7e" containerName="extract-utilities" Dec 08 17:55:19 crc kubenswrapper[5116]: I1208 17:55:19.017960 5116 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="357896b4-e592-4182-85c6-043e0ba8d4d4" containerName="registry-server" Dec 08 17:55:19 crc kubenswrapper[5116]: I1208 17:55:19.017968 5116 state_mem.go:107] "Deleted CPUSet assignment" podUID="357896b4-e592-4182-85c6-043e0ba8d4d4" containerName="registry-server" Dec 08 17:55:19 crc kubenswrapper[5116]: I1208 17:55:19.017976 5116 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="357896b4-e592-4182-85c6-043e0ba8d4d4" containerName="extract-utilities" Dec 08 17:55:19 crc kubenswrapper[5116]: I1208 17:55:19.017982 5116 state_mem.go:107] "Deleted CPUSet assignment" podUID="357896b4-e592-4182-85c6-043e0ba8d4d4" containerName="extract-utilities" Dec 08 17:55:19 crc kubenswrapper[5116]: I1208 17:55:19.017994 5116 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="d51a780b-e856-4552-aa49-7f7b4b654d7e" containerName="extract-content" Dec 08 17:55:19 crc kubenswrapper[5116]: I1208 17:55:19.017999 5116 state_mem.go:107] "Deleted CPUSet assignment" podUID="d51a780b-e856-4552-aa49-7f7b4b654d7e" containerName="extract-content" Dec 08 17:55:19 crc kubenswrapper[5116]: I1208 17:55:19.018009 5116 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="d51a780b-e856-4552-aa49-7f7b4b654d7e" containerName="registry-server" Dec 08 17:55:19 crc kubenswrapper[5116]: I1208 17:55:19.018015 5116 state_mem.go:107] "Deleted CPUSet assignment" podUID="d51a780b-e856-4552-aa49-7f7b4b654d7e" containerName="registry-server" Dec 08 17:55:19 crc 
kubenswrapper[5116]: I1208 17:55:19.018099 5116 memory_manager.go:356] "RemoveStaleState removing state" podUID="357896b4-e592-4182-85c6-043e0ba8d4d4" containerName="registry-server" Dec 08 17:55:19 crc kubenswrapper[5116]: I1208 17:55:19.018109 5116 memory_manager.go:356] "RemoveStaleState removing state" podUID="d51a780b-e856-4552-aa49-7f7b4b654d7e" containerName="registry-server" Dec 08 17:55:19 crc kubenswrapper[5116]: I1208 17:55:19.026505 5116 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-5d9d95bf5b-z5k8s" Dec 08 17:55:19 crc kubenswrapper[5116]: I1208 17:55:19.068985 5116 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-5d9d95bf5b-z5k8s"] Dec 08 17:55:19 crc kubenswrapper[5116]: I1208 17:55:19.205967 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/e1a28882-decc-46b6-b056-b94d4a82bf68-installation-pull-secrets\") pod \"image-registry-5d9d95bf5b-z5k8s\" (UID: \"e1a28882-decc-46b6-b056-b94d4a82bf68\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-z5k8s" Dec 08 17:55:19 crc kubenswrapper[5116]: I1208 17:55:19.206028 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/e1a28882-decc-46b6-b056-b94d4a82bf68-ca-trust-extracted\") pod \"image-registry-5d9d95bf5b-z5k8s\" (UID: \"e1a28882-decc-46b6-b056-b94d4a82bf68\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-z5k8s" Dec 08 17:55:19 crc kubenswrapper[5116]: I1208 17:55:19.206077 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-5d9d95bf5b-z5k8s\" (UID: 
\"e1a28882-decc-46b6-b056-b94d4a82bf68\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-z5k8s" Dec 08 17:55:19 crc kubenswrapper[5116]: I1208 17:55:19.206110 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/e1a28882-decc-46b6-b056-b94d4a82bf68-bound-sa-token\") pod \"image-registry-5d9d95bf5b-z5k8s\" (UID: \"e1a28882-decc-46b6-b056-b94d4a82bf68\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-z5k8s" Dec 08 17:55:19 crc kubenswrapper[5116]: I1208 17:55:19.206136 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/e1a28882-decc-46b6-b056-b94d4a82bf68-registry-certificates\") pod \"image-registry-5d9d95bf5b-z5k8s\" (UID: \"e1a28882-decc-46b6-b056-b94d4a82bf68\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-z5k8s" Dec 08 17:55:19 crc kubenswrapper[5116]: I1208 17:55:19.206201 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/e1a28882-decc-46b6-b056-b94d4a82bf68-trusted-ca\") pod \"image-registry-5d9d95bf5b-z5k8s\" (UID: \"e1a28882-decc-46b6-b056-b94d4a82bf68\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-z5k8s" Dec 08 17:55:19 crc kubenswrapper[5116]: I1208 17:55:19.206224 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/e1a28882-decc-46b6-b056-b94d4a82bf68-registry-tls\") pod \"image-registry-5d9d95bf5b-z5k8s\" (UID: \"e1a28882-decc-46b6-b056-b94d4a82bf68\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-z5k8s" Dec 08 17:55:19 crc kubenswrapper[5116]: I1208 17:55:19.206265 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-qjzc7\" (UniqueName: \"kubernetes.io/projected/e1a28882-decc-46b6-b056-b94d4a82bf68-kube-api-access-qjzc7\") pod \"image-registry-5d9d95bf5b-z5k8s\" (UID: \"e1a28882-decc-46b6-b056-b94d4a82bf68\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-z5k8s" Dec 08 17:55:19 crc kubenswrapper[5116]: I1208 17:55:19.226359 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-5d9d95bf5b-z5k8s\" (UID: \"e1a28882-decc-46b6-b056-b94d4a82bf68\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-z5k8s" Dec 08 17:55:19 crc kubenswrapper[5116]: I1208 17:55:19.288353 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rvv8x" event={"ID":"d51a780b-e856-4552-aa49-7f7b4b654d7e","Type":"ContainerDied","Data":"26722a9b4dd4cca5cd9ac23a778e7c63f42c66cf92cfebbdeefb82a7f400677a"} Dec 08 17:55:19 crc kubenswrapper[5116]: I1208 17:55:19.288422 5116 scope.go:117] "RemoveContainer" containerID="6968f3414929686a0913da4e4e4886304a95a6b74b577a139c613c522fc16f46" Dec 08 17:55:19 crc kubenswrapper[5116]: I1208 17:55:19.288795 5116 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rvv8x" Dec 08 17:55:19 crc kubenswrapper[5116]: I1208 17:55:19.307109 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/e1a28882-decc-46b6-b056-b94d4a82bf68-registry-tls\") pod \"image-registry-5d9d95bf5b-z5k8s\" (UID: \"e1a28882-decc-46b6-b056-b94d4a82bf68\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-z5k8s" Dec 08 17:55:19 crc kubenswrapper[5116]: I1208 17:55:19.307169 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-qjzc7\" (UniqueName: \"kubernetes.io/projected/e1a28882-decc-46b6-b056-b94d4a82bf68-kube-api-access-qjzc7\") pod \"image-registry-5d9d95bf5b-z5k8s\" (UID: \"e1a28882-decc-46b6-b056-b94d4a82bf68\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-z5k8s" Dec 08 17:55:19 crc kubenswrapper[5116]: I1208 17:55:19.307264 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/e1a28882-decc-46b6-b056-b94d4a82bf68-installation-pull-secrets\") pod \"image-registry-5d9d95bf5b-z5k8s\" (UID: \"e1a28882-decc-46b6-b056-b94d4a82bf68\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-z5k8s" Dec 08 17:55:19 crc kubenswrapper[5116]: I1208 17:55:19.307294 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/e1a28882-decc-46b6-b056-b94d4a82bf68-ca-trust-extracted\") pod \"image-registry-5d9d95bf5b-z5k8s\" (UID: \"e1a28882-decc-46b6-b056-b94d4a82bf68\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-z5k8s" Dec 08 17:55:19 crc kubenswrapper[5116]: I1208 17:55:19.307322 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: 
\"kubernetes.io/projected/e1a28882-decc-46b6-b056-b94d4a82bf68-bound-sa-token\") pod \"image-registry-5d9d95bf5b-z5k8s\" (UID: \"e1a28882-decc-46b6-b056-b94d4a82bf68\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-z5k8s" Dec 08 17:55:19 crc kubenswrapper[5116]: I1208 17:55:19.307341 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/e1a28882-decc-46b6-b056-b94d4a82bf68-registry-certificates\") pod \"image-registry-5d9d95bf5b-z5k8s\" (UID: \"e1a28882-decc-46b6-b056-b94d4a82bf68\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-z5k8s" Dec 08 17:55:19 crc kubenswrapper[5116]: I1208 17:55:19.307368 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/e1a28882-decc-46b6-b056-b94d4a82bf68-trusted-ca\") pod \"image-registry-5d9d95bf5b-z5k8s\" (UID: \"e1a28882-decc-46b6-b056-b94d4a82bf68\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-z5k8s" Dec 08 17:55:19 crc kubenswrapper[5116]: I1208 17:55:19.310799 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/e1a28882-decc-46b6-b056-b94d4a82bf68-ca-trust-extracted\") pod \"image-registry-5d9d95bf5b-z5k8s\" (UID: \"e1a28882-decc-46b6-b056-b94d4a82bf68\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-z5k8s" Dec 08 17:55:19 crc kubenswrapper[5116]: I1208 17:55:19.311082 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/e1a28882-decc-46b6-b056-b94d4a82bf68-trusted-ca\") pod \"image-registry-5d9d95bf5b-z5k8s\" (UID: \"e1a28882-decc-46b6-b056-b94d4a82bf68\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-z5k8s" Dec 08 17:55:19 crc kubenswrapper[5116]: I1208 17:55:19.311225 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume 
\"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/e1a28882-decc-46b6-b056-b94d4a82bf68-registry-certificates\") pod \"image-registry-5d9d95bf5b-z5k8s\" (UID: \"e1a28882-decc-46b6-b056-b94d4a82bf68\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-z5k8s" Dec 08 17:55:19 crc kubenswrapper[5116]: I1208 17:55:19.314736 5116 scope.go:117] "RemoveContainer" containerID="923e1c19e62a615ac3e5982bd7e5000edbfd497094f88587208ed226244af50a" Dec 08 17:55:19 crc kubenswrapper[5116]: I1208 17:55:19.315483 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/e1a28882-decc-46b6-b056-b94d4a82bf68-installation-pull-secrets\") pod \"image-registry-5d9d95bf5b-z5k8s\" (UID: \"e1a28882-decc-46b6-b056-b94d4a82bf68\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-z5k8s" Dec 08 17:55:19 crc kubenswrapper[5116]: I1208 17:55:19.316468 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/e1a28882-decc-46b6-b056-b94d4a82bf68-registry-tls\") pod \"image-registry-5d9d95bf5b-z5k8s\" (UID: \"e1a28882-decc-46b6-b056-b94d4a82bf68\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-z5k8s" Dec 08 17:55:19 crc kubenswrapper[5116]: I1208 17:55:19.325995 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-qjzc7\" (UniqueName: \"kubernetes.io/projected/e1a28882-decc-46b6-b056-b94d4a82bf68-kube-api-access-qjzc7\") pod \"image-registry-5d9d95bf5b-z5k8s\" (UID: \"e1a28882-decc-46b6-b056-b94d4a82bf68\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-z5k8s" Dec 08 17:55:19 crc kubenswrapper[5116]: I1208 17:55:19.327417 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/e1a28882-decc-46b6-b056-b94d4a82bf68-bound-sa-token\") pod \"image-registry-5d9d95bf5b-z5k8s\" (UID: 
\"e1a28882-decc-46b6-b056-b94d4a82bf68\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-z5k8s" Dec 08 17:55:19 crc kubenswrapper[5116]: I1208 17:55:19.330386 5116 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-rvv8x"] Dec 08 17:55:19 crc kubenswrapper[5116]: I1208 17:55:19.336942 5116 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-rvv8x"] Dec 08 17:55:19 crc kubenswrapper[5116]: I1208 17:55:19.356339 5116 scope.go:117] "RemoveContainer" containerID="d77bf4bac4889eac5f9fe2802fa5c933ea8b67aa328dba7263de35f5cfd543b5" Dec 08 17:55:19 crc kubenswrapper[5116]: I1208 17:55:19.378600 5116 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-5d9d95bf5b-z5k8s" Dec 08 17:55:19 crc kubenswrapper[5116]: I1208 17:55:19.585824 5116 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-5d9d95bf5b-z5k8s"] Dec 08 17:55:20 crc kubenswrapper[5116]: I1208 17:55:20.296066 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-5d9d95bf5b-z5k8s" event={"ID":"e1a28882-decc-46b6-b056-b94d4a82bf68","Type":"ContainerStarted","Data":"17ee1c71215ce9888a631b1bad4fa0a8086fc87f856956e93f4ff8915d5d2d6f"} Dec 08 17:55:20 crc kubenswrapper[5116]: I1208 17:55:20.297441 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-5d9d95bf5b-z5k8s" event={"ID":"e1a28882-decc-46b6-b056-b94d4a82bf68","Type":"ContainerStarted","Data":"95594f61b18ac649e60a44983e7c3b54ee68e784c4e5c7bda4212b0d202d9a9c"} Dec 08 17:55:20 crc kubenswrapper[5116]: I1208 17:55:20.297523 5116 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-image-registry/image-registry-5d9d95bf5b-z5k8s" Dec 08 17:55:20 crc kubenswrapper[5116]: I1208 17:55:20.311984 5116 pod_startup_latency_tracker.go:104] "Observed 
pod startup duration" pod="openshift-image-registry/image-registry-5d9d95bf5b-z5k8s" podStartSLOduration=1.3119531150000001 podStartE2EDuration="1.311953115s" podCreationTimestamp="2025-12-08 17:55:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-08 17:55:20.311539894 +0000 UTC m=+790.108663128" watchObservedRunningTime="2025-12-08 17:55:20.311953115 +0000 UTC m=+790.109076369" Dec 08 17:55:20 crc kubenswrapper[5116]: I1208 17:55:20.688511 5116 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d51a780b-e856-4552-aa49-7f7b4b654d7e" path="/var/lib/kubelet/pods/d51a780b-e856-4552-aa49-7f7b4b654d7e/volumes" Dec 08 17:55:21 crc kubenswrapper[5116]: I1208 17:55:21.848376 5116 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210ng8mc"] Dec 08 17:55:21 crc kubenswrapper[5116]: I1208 17:55:21.872690 5116 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210ng8mc"] Dec 08 17:55:21 crc kubenswrapper[5116]: I1208 17:55:21.873108 5116 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210ng8mc" Dec 08 17:55:21 crc kubenswrapper[5116]: I1208 17:55:21.880958 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"default-dockercfg-b2ccr\"" Dec 08 17:55:22 crc kubenswrapper[5116]: I1208 17:55:22.049346 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/4d97bf00-ebfd-456e-b079-ed0655d8feec-util\") pod \"6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210ng8mc\" (UID: \"4d97bf00-ebfd-456e-b079-ed0655d8feec\") " pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210ng8mc" Dec 08 17:55:22 crc kubenswrapper[5116]: I1208 17:55:22.049427 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/4d97bf00-ebfd-456e-b079-ed0655d8feec-bundle\") pod \"6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210ng8mc\" (UID: \"4d97bf00-ebfd-456e-b079-ed0655d8feec\") " pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210ng8mc" Dec 08 17:55:22 crc kubenswrapper[5116]: I1208 17:55:22.049476 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wfrxc\" (UniqueName: \"kubernetes.io/projected/4d97bf00-ebfd-456e-b079-ed0655d8feec-kube-api-access-wfrxc\") pod \"6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210ng8mc\" (UID: \"4d97bf00-ebfd-456e-b079-ed0655d8feec\") " pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210ng8mc" Dec 08 17:55:22 crc kubenswrapper[5116]: I1208 17:55:22.151086 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/4d97bf00-ebfd-456e-b079-ed0655d8feec-util\") 
pod \"6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210ng8mc\" (UID: \"4d97bf00-ebfd-456e-b079-ed0655d8feec\") " pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210ng8mc" Dec 08 17:55:22 crc kubenswrapper[5116]: I1208 17:55:22.151146 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/4d97bf00-ebfd-456e-b079-ed0655d8feec-bundle\") pod \"6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210ng8mc\" (UID: \"4d97bf00-ebfd-456e-b079-ed0655d8feec\") " pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210ng8mc" Dec 08 17:55:22 crc kubenswrapper[5116]: I1208 17:55:22.151177 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-wfrxc\" (UniqueName: \"kubernetes.io/projected/4d97bf00-ebfd-456e-b079-ed0655d8feec-kube-api-access-wfrxc\") pod \"6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210ng8mc\" (UID: \"4d97bf00-ebfd-456e-b079-ed0655d8feec\") " pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210ng8mc" Dec 08 17:55:22 crc kubenswrapper[5116]: I1208 17:55:22.151727 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/4d97bf00-ebfd-456e-b079-ed0655d8feec-util\") pod \"6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210ng8mc\" (UID: \"4d97bf00-ebfd-456e-b079-ed0655d8feec\") " pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210ng8mc" Dec 08 17:55:22 crc kubenswrapper[5116]: I1208 17:55:22.151854 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/4d97bf00-ebfd-456e-b079-ed0655d8feec-bundle\") pod \"6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210ng8mc\" (UID: \"4d97bf00-ebfd-456e-b079-ed0655d8feec\") " 
pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210ng8mc" Dec 08 17:55:22 crc kubenswrapper[5116]: I1208 17:55:22.173559 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-wfrxc\" (UniqueName: \"kubernetes.io/projected/4d97bf00-ebfd-456e-b079-ed0655d8feec-kube-api-access-wfrxc\") pod \"6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210ng8mc\" (UID: \"4d97bf00-ebfd-456e-b079-ed0655d8feec\") " pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210ng8mc" Dec 08 17:55:22 crc kubenswrapper[5116]: I1208 17:55:22.203108 5116 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210ng8mc" Dec 08 17:55:22 crc kubenswrapper[5116]: I1208 17:55:22.406299 5116 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210ng8mc"] Dec 08 17:55:22 crc kubenswrapper[5116]: W1208 17:55:22.417459 5116 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4d97bf00_ebfd_456e_b079_ed0655d8feec.slice/crio-a707660e65f9e85e56d66948fc855eee095dae642341aa2894145d361fe00fa9 WatchSource:0}: Error finding container a707660e65f9e85e56d66948fc855eee095dae642341aa2894145d361fe00fa9: Status 404 returned error can't find the container with id a707660e65f9e85e56d66948fc855eee095dae642341aa2894145d361fe00fa9 Dec 08 17:55:23 crc kubenswrapper[5116]: I1208 17:55:23.313551 5116 generic.go:358] "Generic (PLEG): container finished" podID="4d97bf00-ebfd-456e-b079-ed0655d8feec" containerID="298a235f4428298a206516623f7fabf973765ea30184561e8b6552db7001d111" exitCode=0 Dec 08 17:55:23 crc kubenswrapper[5116]: I1208 17:55:23.313750 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210ng8mc" event={"ID":"4d97bf00-ebfd-456e-b079-ed0655d8feec","Type":"ContainerDied","Data":"298a235f4428298a206516623f7fabf973765ea30184561e8b6552db7001d111"} Dec 08 17:55:23 crc kubenswrapper[5116]: I1208 17:55:23.313856 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210ng8mc" event={"ID":"4d97bf00-ebfd-456e-b079-ed0655d8feec","Type":"ContainerStarted","Data":"a707660e65f9e85e56d66948fc855eee095dae642341aa2894145d361fe00fa9"} Dec 08 17:55:25 crc kubenswrapper[5116]: I1208 17:55:25.329591 5116 generic.go:358] "Generic (PLEG): container finished" podID="4d97bf00-ebfd-456e-b079-ed0655d8feec" containerID="68c572ae29751d5e200b44cb290156835403dca7f6998deaab28aef85fdf5802" exitCode=0 Dec 08 17:55:25 crc kubenswrapper[5116]: I1208 17:55:25.329659 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210ng8mc" event={"ID":"4d97bf00-ebfd-456e-b079-ed0655d8feec","Type":"ContainerDied","Data":"68c572ae29751d5e200b44cb290156835403dca7f6998deaab28aef85fdf5802"} Dec 08 17:55:26 crc kubenswrapper[5116]: I1208 17:55:26.342806 5116 generic.go:358] "Generic (PLEG): container finished" podID="4d97bf00-ebfd-456e-b079-ed0655d8feec" containerID="3bc8852e4658dfb341434aed9aaed4e93eb9aea15a9af0a2332fe1070369df1b" exitCode=0 Dec 08 17:55:26 crc kubenswrapper[5116]: I1208 17:55:26.342850 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210ng8mc" event={"ID":"4d97bf00-ebfd-456e-b079-ed0655d8feec","Type":"ContainerDied","Data":"3bc8852e4658dfb341434aed9aaed4e93eb9aea15a9af0a2332fe1070369df1b"} Dec 08 17:55:27 crc kubenswrapper[5116]: I1208 17:55:27.630063 5116 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210ng8mc" Dec 08 17:55:27 crc kubenswrapper[5116]: I1208 17:55:27.638686 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/4d97bf00-ebfd-456e-b079-ed0655d8feec-bundle\") pod \"4d97bf00-ebfd-456e-b079-ed0655d8feec\" (UID: \"4d97bf00-ebfd-456e-b079-ed0655d8feec\") " Dec 08 17:55:27 crc kubenswrapper[5116]: I1208 17:55:27.638847 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wfrxc\" (UniqueName: \"kubernetes.io/projected/4d97bf00-ebfd-456e-b079-ed0655d8feec-kube-api-access-wfrxc\") pod \"4d97bf00-ebfd-456e-b079-ed0655d8feec\" (UID: \"4d97bf00-ebfd-456e-b079-ed0655d8feec\") " Dec 08 17:55:27 crc kubenswrapper[5116]: I1208 17:55:27.638910 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/4d97bf00-ebfd-456e-b079-ed0655d8feec-util\") pod \"4d97bf00-ebfd-456e-b079-ed0655d8feec\" (UID: \"4d97bf00-ebfd-456e-b079-ed0655d8feec\") " Dec 08 17:55:27 crc kubenswrapper[5116]: I1208 17:55:27.642461 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4d97bf00-ebfd-456e-b079-ed0655d8feec-bundle" (OuterVolumeSpecName: "bundle") pod "4d97bf00-ebfd-456e-b079-ed0655d8feec" (UID: "4d97bf00-ebfd-456e-b079-ed0655d8feec"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 08 17:55:27 crc kubenswrapper[5116]: I1208 17:55:27.645257 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4d97bf00-ebfd-456e-b079-ed0655d8feec-kube-api-access-wfrxc" (OuterVolumeSpecName: "kube-api-access-wfrxc") pod "4d97bf00-ebfd-456e-b079-ed0655d8feec" (UID: "4d97bf00-ebfd-456e-b079-ed0655d8feec"). InnerVolumeSpecName "kube-api-access-wfrxc". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 08 17:55:27 crc kubenswrapper[5116]: I1208 17:55:27.661654 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4d97bf00-ebfd-456e-b079-ed0655d8feec-util" (OuterVolumeSpecName: "util") pod "4d97bf00-ebfd-456e-b079-ed0655d8feec" (UID: "4d97bf00-ebfd-456e-b079-ed0655d8feec"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 08 17:55:27 crc kubenswrapper[5116]: I1208 17:55:27.740672 5116 reconciler_common.go:299] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/4d97bf00-ebfd-456e-b079-ed0655d8feec-bundle\") on node \"crc\" DevicePath \"\"" Dec 08 17:55:27 crc kubenswrapper[5116]: I1208 17:55:27.740720 5116 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-wfrxc\" (UniqueName: \"kubernetes.io/projected/4d97bf00-ebfd-456e-b079-ed0655d8feec-kube-api-access-wfrxc\") on node \"crc\" DevicePath \"\"" Dec 08 17:55:27 crc kubenswrapper[5116]: I1208 17:55:27.740733 5116 reconciler_common.go:299] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/4d97bf00-ebfd-456e-b079-ed0655d8feec-util\") on node \"crc\" DevicePath \"\"" Dec 08 17:55:28 crc kubenswrapper[5116]: I1208 17:55:28.042989 5116 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ewzwd2"] Dec 08 17:55:28 crc kubenswrapper[5116]: I1208 17:55:28.044068 5116 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="4d97bf00-ebfd-456e-b079-ed0655d8feec" containerName="extract" Dec 08 17:55:28 crc kubenswrapper[5116]: I1208 17:55:28.044098 5116 state_mem.go:107] "Deleted CPUSet assignment" podUID="4d97bf00-ebfd-456e-b079-ed0655d8feec" containerName="extract" Dec 08 17:55:28 crc kubenswrapper[5116]: I1208 17:55:28.044122 5116 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" 
podUID="4d97bf00-ebfd-456e-b079-ed0655d8feec" containerName="pull" Dec 08 17:55:28 crc kubenswrapper[5116]: I1208 17:55:28.044130 5116 state_mem.go:107] "Deleted CPUSet assignment" podUID="4d97bf00-ebfd-456e-b079-ed0655d8feec" containerName="pull" Dec 08 17:55:28 crc kubenswrapper[5116]: I1208 17:55:28.044151 5116 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="4d97bf00-ebfd-456e-b079-ed0655d8feec" containerName="util" Dec 08 17:55:28 crc kubenswrapper[5116]: I1208 17:55:28.044159 5116 state_mem.go:107] "Deleted CPUSet assignment" podUID="4d97bf00-ebfd-456e-b079-ed0655d8feec" containerName="util" Dec 08 17:55:28 crc kubenswrapper[5116]: I1208 17:55:28.044311 5116 memory_manager.go:356] "RemoveStaleState removing state" podUID="4d97bf00-ebfd-456e-b079-ed0655d8feec" containerName="extract" Dec 08 17:55:28 crc kubenswrapper[5116]: I1208 17:55:28.052292 5116 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ewzwd2"] Dec 08 17:55:28 crc kubenswrapper[5116]: I1208 17:55:28.052521 5116 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ewzwd2" Dec 08 17:55:28 crc kubenswrapper[5116]: I1208 17:55:28.147117 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/edeb3fcc-dc53-42c3-a87b-4729c2521788-util\") pod \"8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ewzwd2\" (UID: \"edeb3fcc-dc53-42c3-a87b-4729c2521788\") " pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ewzwd2" Dec 08 17:55:28 crc kubenswrapper[5116]: I1208 17:55:28.147204 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5vtzs\" (UniqueName: \"kubernetes.io/projected/edeb3fcc-dc53-42c3-a87b-4729c2521788-kube-api-access-5vtzs\") pod \"8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ewzwd2\" (UID: \"edeb3fcc-dc53-42c3-a87b-4729c2521788\") " pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ewzwd2" Dec 08 17:55:28 crc kubenswrapper[5116]: I1208 17:55:28.147278 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/edeb3fcc-dc53-42c3-a87b-4729c2521788-bundle\") pod \"8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ewzwd2\" (UID: \"edeb3fcc-dc53-42c3-a87b-4729c2521788\") " pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ewzwd2" Dec 08 17:55:28 crc kubenswrapper[5116]: I1208 17:55:28.248657 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/edeb3fcc-dc53-42c3-a87b-4729c2521788-bundle\") pod \"8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ewzwd2\" (UID: \"edeb3fcc-dc53-42c3-a87b-4729c2521788\") " 
pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ewzwd2" Dec 08 17:55:28 crc kubenswrapper[5116]: I1208 17:55:28.248950 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/edeb3fcc-dc53-42c3-a87b-4729c2521788-util\") pod \"8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ewzwd2\" (UID: \"edeb3fcc-dc53-42c3-a87b-4729c2521788\") " pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ewzwd2" Dec 08 17:55:28 crc kubenswrapper[5116]: I1208 17:55:28.249015 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-5vtzs\" (UniqueName: \"kubernetes.io/projected/edeb3fcc-dc53-42c3-a87b-4729c2521788-kube-api-access-5vtzs\") pod \"8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ewzwd2\" (UID: \"edeb3fcc-dc53-42c3-a87b-4729c2521788\") " pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ewzwd2" Dec 08 17:55:28 crc kubenswrapper[5116]: I1208 17:55:28.249426 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/edeb3fcc-dc53-42c3-a87b-4729c2521788-bundle\") pod \"8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ewzwd2\" (UID: \"edeb3fcc-dc53-42c3-a87b-4729c2521788\") " pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ewzwd2" Dec 08 17:55:28 crc kubenswrapper[5116]: I1208 17:55:28.249440 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/edeb3fcc-dc53-42c3-a87b-4729c2521788-util\") pod \"8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ewzwd2\" (UID: \"edeb3fcc-dc53-42c3-a87b-4729c2521788\") " pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ewzwd2" Dec 08 17:55:28 crc kubenswrapper[5116]: I1208 17:55:28.269004 5116 
operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-5vtzs\" (UniqueName: \"kubernetes.io/projected/edeb3fcc-dc53-42c3-a87b-4729c2521788-kube-api-access-5vtzs\") pod \"8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ewzwd2\" (UID: \"edeb3fcc-dc53-42c3-a87b-4729c2521788\") " pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ewzwd2" Dec 08 17:55:28 crc kubenswrapper[5116]: I1208 17:55:28.357730 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210ng8mc" event={"ID":"4d97bf00-ebfd-456e-b079-ed0655d8feec","Type":"ContainerDied","Data":"a707660e65f9e85e56d66948fc855eee095dae642341aa2894145d361fe00fa9"} Dec 08 17:55:28 crc kubenswrapper[5116]: I1208 17:55:28.357815 5116 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a707660e65f9e85e56d66948fc855eee095dae642341aa2894145d361fe00fa9" Dec 08 17:55:28 crc kubenswrapper[5116]: I1208 17:55:28.357814 5116 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210ng8mc" Dec 08 17:55:28 crc kubenswrapper[5116]: I1208 17:55:28.371717 5116 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ewzwd2" Dec 08 17:55:28 crc kubenswrapper[5116]: I1208 17:55:28.793152 5116 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ewzwd2"] Dec 08 17:55:28 crc kubenswrapper[5116]: W1208 17:55:28.803110 5116 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podedeb3fcc_dc53_42c3_a87b_4729c2521788.slice/crio-8e42f6c4935efd9dc283bf3e5606749456ea375271da8a7ded4d8356bd8effe0 WatchSource:0}: Error finding container 8e42f6c4935efd9dc283bf3e5606749456ea375271da8a7ded4d8356bd8effe0: Status 404 returned error can't find the container with id 8e42f6c4935efd9dc283bf3e5606749456ea375271da8a7ded4d8356bd8effe0 Dec 08 17:55:29 crc kubenswrapper[5116]: I1208 17:55:29.365260 5116 generic.go:358] "Generic (PLEG): container finished" podID="edeb3fcc-dc53-42c3-a87b-4729c2521788" containerID="cb4cb3b05b87dd077f69c67580e45102ead4fdb91ce6e87eb63e98d23a1188c4" exitCode=0 Dec 08 17:55:29 crc kubenswrapper[5116]: I1208 17:55:29.365357 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ewzwd2" event={"ID":"edeb3fcc-dc53-42c3-a87b-4729c2521788","Type":"ContainerDied","Data":"cb4cb3b05b87dd077f69c67580e45102ead4fdb91ce6e87eb63e98d23a1188c4"} Dec 08 17:55:29 crc kubenswrapper[5116]: I1208 17:55:29.365385 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ewzwd2" event={"ID":"edeb3fcc-dc53-42c3-a87b-4729c2521788","Type":"ContainerStarted","Data":"8e42f6c4935efd9dc283bf3e5606749456ea375271da8a7ded4d8356bd8effe0"} Dec 08 17:55:30 crc kubenswrapper[5116]: I1208 17:55:30.373024 5116 generic.go:358] "Generic (PLEG): container finished" 
podID="edeb3fcc-dc53-42c3-a87b-4729c2521788" containerID="e90a212a60bbc9b3ca46acb8f02511bc9ab2d047b20bb890797b760d00e74d76" exitCode=0 Dec 08 17:55:30 crc kubenswrapper[5116]: I1208 17:55:30.373096 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ewzwd2" event={"ID":"edeb3fcc-dc53-42c3-a87b-4729c2521788","Type":"ContainerDied","Data":"e90a212a60bbc9b3ca46acb8f02511bc9ab2d047b20bb890797b760d00e74d76"} Dec 08 17:55:31 crc kubenswrapper[5116]: I1208 17:55:31.400538 5116 generic.go:358] "Generic (PLEG): container finished" podID="edeb3fcc-dc53-42c3-a87b-4729c2521788" containerID="32dfba8c86824ecbca8352c92adb55a139463b8d73e718dd5946f8db7983d4d3" exitCode=0 Dec 08 17:55:31 crc kubenswrapper[5116]: I1208 17:55:31.400692 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ewzwd2" event={"ID":"edeb3fcc-dc53-42c3-a87b-4729c2521788","Type":"ContainerDied","Data":"32dfba8c86824ecbca8352c92adb55a139463b8d73e718dd5946f8db7983d4d3"} Dec 08 17:55:32 crc kubenswrapper[5116]: I1208 17:55:32.169284 5116 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a5mmzs"] Dec 08 17:55:32 crc kubenswrapper[5116]: I1208 17:55:32.190593 5116 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a5mmzs"] Dec 08 17:55:32 crc kubenswrapper[5116]: I1208 17:55:32.190746 5116 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a5mmzs" Dec 08 17:55:32 crc kubenswrapper[5116]: I1208 17:55:32.207400 5116 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-2ktdp"] Dec 08 17:55:32 crc kubenswrapper[5116]: I1208 17:55:32.212927 5116 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-2ktdp" Dec 08 17:55:32 crc kubenswrapper[5116]: I1208 17:55:32.236606 5116 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-2ktdp"] Dec 08 17:55:32 crc kubenswrapper[5116]: I1208 17:55:32.307545 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8d8456fc-a61a-415b-b2a1-4f7255b01fa3-catalog-content\") pod \"certified-operators-2ktdp\" (UID: \"8d8456fc-a61a-415b-b2a1-4f7255b01fa3\") " pod="openshift-marketplace/certified-operators-2ktdp" Dec 08 17:55:32 crc kubenswrapper[5116]: I1208 17:55:32.307617 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8d8456fc-a61a-415b-b2a1-4f7255b01fa3-utilities\") pod \"certified-operators-2ktdp\" (UID: \"8d8456fc-a61a-415b-b2a1-4f7255b01fa3\") " pod="openshift-marketplace/certified-operators-2ktdp" Dec 08 17:55:32 crc kubenswrapper[5116]: I1208 17:55:32.307851 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hwftv\" (UniqueName: \"kubernetes.io/projected/ea585dd0-ad22-42a1-b4ed-b12a9bfe721b-kube-api-access-hwftv\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a5mmzs\" (UID: \"ea585dd0-ad22-42a1-b4ed-b12a9bfe721b\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a5mmzs" Dec 08 17:55:32 crc 
kubenswrapper[5116]: I1208 17:55:32.307916 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-48k7j\" (UniqueName: \"kubernetes.io/projected/8d8456fc-a61a-415b-b2a1-4f7255b01fa3-kube-api-access-48k7j\") pod \"certified-operators-2ktdp\" (UID: \"8d8456fc-a61a-415b-b2a1-4f7255b01fa3\") " pod="openshift-marketplace/certified-operators-2ktdp" Dec 08 17:55:32 crc kubenswrapper[5116]: I1208 17:55:32.307937 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/ea585dd0-ad22-42a1-b4ed-b12a9bfe721b-bundle\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a5mmzs\" (UID: \"ea585dd0-ad22-42a1-b4ed-b12a9bfe721b\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a5mmzs" Dec 08 17:55:32 crc kubenswrapper[5116]: I1208 17:55:32.308042 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/ea585dd0-ad22-42a1-b4ed-b12a9bfe721b-util\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a5mmzs\" (UID: \"ea585dd0-ad22-42a1-b4ed-b12a9bfe721b\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a5mmzs" Dec 08 17:55:32 crc kubenswrapper[5116]: I1208 17:55:32.409654 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-hwftv\" (UniqueName: \"kubernetes.io/projected/ea585dd0-ad22-42a1-b4ed-b12a9bfe721b-kube-api-access-hwftv\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a5mmzs\" (UID: \"ea585dd0-ad22-42a1-b4ed-b12a9bfe721b\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a5mmzs" Dec 08 17:55:32 crc kubenswrapper[5116]: I1208 17:55:32.409709 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume 
\"kube-api-access-48k7j\" (UniqueName: \"kubernetes.io/projected/8d8456fc-a61a-415b-b2a1-4f7255b01fa3-kube-api-access-48k7j\") pod \"certified-operators-2ktdp\" (UID: \"8d8456fc-a61a-415b-b2a1-4f7255b01fa3\") " pod="openshift-marketplace/certified-operators-2ktdp" Dec 08 17:55:32 crc kubenswrapper[5116]: I1208 17:55:32.409734 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/ea585dd0-ad22-42a1-b4ed-b12a9bfe721b-bundle\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a5mmzs\" (UID: \"ea585dd0-ad22-42a1-b4ed-b12a9bfe721b\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a5mmzs" Dec 08 17:55:32 crc kubenswrapper[5116]: I1208 17:55:32.409765 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/ea585dd0-ad22-42a1-b4ed-b12a9bfe721b-util\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a5mmzs\" (UID: \"ea585dd0-ad22-42a1-b4ed-b12a9bfe721b\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a5mmzs" Dec 08 17:55:32 crc kubenswrapper[5116]: I1208 17:55:32.409888 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8d8456fc-a61a-415b-b2a1-4f7255b01fa3-catalog-content\") pod \"certified-operators-2ktdp\" (UID: \"8d8456fc-a61a-415b-b2a1-4f7255b01fa3\") " pod="openshift-marketplace/certified-operators-2ktdp" Dec 08 17:55:32 crc kubenswrapper[5116]: I1208 17:55:32.410635 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8d8456fc-a61a-415b-b2a1-4f7255b01fa3-catalog-content\") pod \"certified-operators-2ktdp\" (UID: \"8d8456fc-a61a-415b-b2a1-4f7255b01fa3\") " pod="openshift-marketplace/certified-operators-2ktdp" Dec 08 17:55:32 crc kubenswrapper[5116]: 
I1208 17:55:32.410683 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8d8456fc-a61a-415b-b2a1-4f7255b01fa3-utilities\") pod \"certified-operators-2ktdp\" (UID: \"8d8456fc-a61a-415b-b2a1-4f7255b01fa3\") " pod="openshift-marketplace/certified-operators-2ktdp" Dec 08 17:55:32 crc kubenswrapper[5116]: I1208 17:55:32.410752 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/ea585dd0-ad22-42a1-b4ed-b12a9bfe721b-bundle\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a5mmzs\" (UID: \"ea585dd0-ad22-42a1-b4ed-b12a9bfe721b\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a5mmzs" Dec 08 17:55:32 crc kubenswrapper[5116]: I1208 17:55:32.410756 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/ea585dd0-ad22-42a1-b4ed-b12a9bfe721b-util\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a5mmzs\" (UID: \"ea585dd0-ad22-42a1-b4ed-b12a9bfe721b\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a5mmzs" Dec 08 17:55:32 crc kubenswrapper[5116]: I1208 17:55:32.411014 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8d8456fc-a61a-415b-b2a1-4f7255b01fa3-utilities\") pod \"certified-operators-2ktdp\" (UID: \"8d8456fc-a61a-415b-b2a1-4f7255b01fa3\") " pod="openshift-marketplace/certified-operators-2ktdp" Dec 08 17:55:32 crc kubenswrapper[5116]: I1208 17:55:32.735767 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-hwftv\" (UniqueName: \"kubernetes.io/projected/ea585dd0-ad22-42a1-b4ed-b12a9bfe721b-kube-api-access-hwftv\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a5mmzs\" (UID: \"ea585dd0-ad22-42a1-b4ed-b12a9bfe721b\") " 
pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a5mmzs" Dec 08 17:55:32 crc kubenswrapper[5116]: I1208 17:55:32.736857 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-48k7j\" (UniqueName: \"kubernetes.io/projected/8d8456fc-a61a-415b-b2a1-4f7255b01fa3-kube-api-access-48k7j\") pod \"certified-operators-2ktdp\" (UID: \"8d8456fc-a61a-415b-b2a1-4f7255b01fa3\") " pod="openshift-marketplace/certified-operators-2ktdp" Dec 08 17:55:32 crc kubenswrapper[5116]: I1208 17:55:32.817959 5116 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a5mmzs" Dec 08 17:55:32 crc kubenswrapper[5116]: I1208 17:55:32.838800 5116 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-2ktdp" Dec 08 17:55:33 crc kubenswrapper[5116]: I1208 17:55:33.334930 5116 patch_prober.go:28] interesting pod/machine-config-daemon-frh5r container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 08 17:55:33 crc kubenswrapper[5116]: I1208 17:55:33.335362 5116 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-frh5r" podUID="f2e88345-fa91-4bb3-bd9d-a89a8293bffe" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 08 17:55:33 crc kubenswrapper[5116]: I1208 17:55:33.633606 5116 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ewzwd2" Dec 08 17:55:34 crc kubenswrapper[5116]: I1208 17:55:34.019903 5116 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-2ktdp"] Dec 08 17:55:34 crc kubenswrapper[5116]: I1208 17:55:34.021526 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/edeb3fcc-dc53-42c3-a87b-4729c2521788-bundle\") pod \"edeb3fcc-dc53-42c3-a87b-4729c2521788\" (UID: \"edeb3fcc-dc53-42c3-a87b-4729c2521788\") " Dec 08 17:55:34 crc kubenswrapper[5116]: I1208 17:55:34.021840 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/edeb3fcc-dc53-42c3-a87b-4729c2521788-util\") pod \"edeb3fcc-dc53-42c3-a87b-4729c2521788\" (UID: \"edeb3fcc-dc53-42c3-a87b-4729c2521788\") " Dec 08 17:55:34 crc kubenswrapper[5116]: I1208 17:55:34.021900 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5vtzs\" (UniqueName: \"kubernetes.io/projected/edeb3fcc-dc53-42c3-a87b-4729c2521788-kube-api-access-5vtzs\") pod \"edeb3fcc-dc53-42c3-a87b-4729c2521788\" (UID: \"edeb3fcc-dc53-42c3-a87b-4729c2521788\") " Dec 08 17:55:34 crc kubenswrapper[5116]: I1208 17:55:34.027467 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/edeb3fcc-dc53-42c3-a87b-4729c2521788-bundle" (OuterVolumeSpecName: "bundle") pod "edeb3fcc-dc53-42c3-a87b-4729c2521788" (UID: "edeb3fcc-dc53-42c3-a87b-4729c2521788"). InnerVolumeSpecName "bundle". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 08 17:55:34 crc kubenswrapper[5116]: I1208 17:55:34.038955 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/edeb3fcc-dc53-42c3-a87b-4729c2521788-kube-api-access-5vtzs" (OuterVolumeSpecName: "kube-api-access-5vtzs") pod "edeb3fcc-dc53-42c3-a87b-4729c2521788" (UID: "edeb3fcc-dc53-42c3-a87b-4729c2521788"). InnerVolumeSpecName "kube-api-access-5vtzs". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 08 17:55:34 crc kubenswrapper[5116]: W1208 17:55:34.069891 5116 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8d8456fc_a61a_415b_b2a1_4f7255b01fa3.slice/crio-b53e7d02b679c2f023115ae814e0d4f0c77479a3de7757fc0600be27a4a72e44 WatchSource:0}: Error finding container b53e7d02b679c2f023115ae814e0d4f0c77479a3de7757fc0600be27a4a72e44: Status 404 returned error can't find the container with id b53e7d02b679c2f023115ae814e0d4f0c77479a3de7757fc0600be27a4a72e44 Dec 08 17:55:34 crc kubenswrapper[5116]: I1208 17:55:34.070394 5116 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a5mmzs"] Dec 08 17:55:34 crc kubenswrapper[5116]: I1208 17:55:34.126849 5116 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-5vtzs\" (UniqueName: \"kubernetes.io/projected/edeb3fcc-dc53-42c3-a87b-4729c2521788-kube-api-access-5vtzs\") on node \"crc\" DevicePath \"\"" Dec 08 17:55:34 crc kubenswrapper[5116]: I1208 17:55:34.126906 5116 reconciler_common.go:299] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/edeb3fcc-dc53-42c3-a87b-4729c2521788-bundle\") on node \"crc\" DevicePath \"\"" Dec 08 17:55:34 crc kubenswrapper[5116]: I1208 17:55:34.283833 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/edeb3fcc-dc53-42c3-a87b-4729c2521788-util" (OuterVolumeSpecName: "util") pod "edeb3fcc-dc53-42c3-a87b-4729c2521788" (UID: "edeb3fcc-dc53-42c3-a87b-4729c2521788"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 08 17:55:34 crc kubenswrapper[5116]: I1208 17:55:34.328751 5116 reconciler_common.go:299] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/edeb3fcc-dc53-42c3-a87b-4729c2521788-util\") on node \"crc\" DevicePath \"\"" Dec 08 17:55:34 crc kubenswrapper[5116]: I1208 17:55:34.543862 5116 generic.go:358] "Generic (PLEG): container finished" podID="8d8456fc-a61a-415b-b2a1-4f7255b01fa3" containerID="50dd2ec4ecee71475a91c1792e4dd0dcbbb66b820b88e19a95f2a680a1e3d8e2" exitCode=0 Dec 08 17:55:34 crc kubenswrapper[5116]: I1208 17:55:34.544000 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-2ktdp" event={"ID":"8d8456fc-a61a-415b-b2a1-4f7255b01fa3","Type":"ContainerDied","Data":"50dd2ec4ecee71475a91c1792e4dd0dcbbb66b820b88e19a95f2a680a1e3d8e2"} Dec 08 17:55:34 crc kubenswrapper[5116]: I1208 17:55:34.544029 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-2ktdp" event={"ID":"8d8456fc-a61a-415b-b2a1-4f7255b01fa3","Type":"ContainerStarted","Data":"b53e7d02b679c2f023115ae814e0d4f0c77479a3de7757fc0600be27a4a72e44"} Dec 08 17:55:34 crc kubenswrapper[5116]: I1208 17:55:34.547916 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ewzwd2" event={"ID":"edeb3fcc-dc53-42c3-a87b-4729c2521788","Type":"ContainerDied","Data":"8e42f6c4935efd9dc283bf3e5606749456ea375271da8a7ded4d8356bd8effe0"} Dec 08 17:55:34 crc kubenswrapper[5116]: I1208 17:55:34.548220 5116 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8e42f6c4935efd9dc283bf3e5606749456ea375271da8a7ded4d8356bd8effe0" 
Dec 08 17:55:34 crc kubenswrapper[5116]: I1208 17:55:34.548364 5116 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ewzwd2" Dec 08 17:55:34 crc kubenswrapper[5116]: I1208 17:55:34.556757 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a5mmzs" event={"ID":"ea585dd0-ad22-42a1-b4ed-b12a9bfe721b","Type":"ContainerStarted","Data":"ee9b917639cc790904986e894c0dea9e816c9465fe471f6bc572275628e3473d"} Dec 08 17:55:34 crc kubenswrapper[5116]: I1208 17:55:34.556814 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a5mmzs" event={"ID":"ea585dd0-ad22-42a1-b4ed-b12a9bfe721b","Type":"ContainerStarted","Data":"8d5db3d9641c705aa6bad5b70a00d3bcb41a91f1951a45c8c5bfe74ab595ef33"} Dec 08 17:55:35 crc kubenswrapper[5116]: I1208 17:55:35.563826 5116 generic.go:358] "Generic (PLEG): container finished" podID="ea585dd0-ad22-42a1-b4ed-b12a9bfe721b" containerID="ee9b917639cc790904986e894c0dea9e816c9465fe471f6bc572275628e3473d" exitCode=0 Dec 08 17:55:35 crc kubenswrapper[5116]: I1208 17:55:35.564049 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a5mmzs" event={"ID":"ea585dd0-ad22-42a1-b4ed-b12a9bfe721b","Type":"ContainerDied","Data":"ee9b917639cc790904986e894c0dea9e816c9465fe471f6bc572275628e3473d"} Dec 08 17:55:35 crc kubenswrapper[5116]: I1208 17:55:35.567173 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-2ktdp" event={"ID":"8d8456fc-a61a-415b-b2a1-4f7255b01fa3","Type":"ContainerStarted","Data":"d18c569cd0973c2301b9e39de6f34a98da4c57428db545ea31936711563dd636"} Dec 08 17:55:36 crc kubenswrapper[5116]: I1208 17:55:36.577957 5116 generic.go:358] "Generic 
(PLEG): container finished" podID="8d8456fc-a61a-415b-b2a1-4f7255b01fa3" containerID="d18c569cd0973c2301b9e39de6f34a98da4c57428db545ea31936711563dd636" exitCode=0 Dec 08 17:55:36 crc kubenswrapper[5116]: I1208 17:55:36.578201 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-2ktdp" event={"ID":"8d8456fc-a61a-415b-b2a1-4f7255b01fa3","Type":"ContainerDied","Data":"d18c569cd0973c2301b9e39de6f34a98da4c57428db545ea31936711563dd636"} Dec 08 17:55:37 crc kubenswrapper[5116]: I1208 17:55:37.592963 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-2ktdp" event={"ID":"8d8456fc-a61a-415b-b2a1-4f7255b01fa3","Type":"ContainerStarted","Data":"9a0bbc96d0cd2500e36af63de929ce5bdc591b42a17fc10db7f3aab335aa2e2c"} Dec 08 17:55:37 crc kubenswrapper[5116]: I1208 17:55:37.631219 5116 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-2ktdp" podStartSLOduration=4.94564833 podStartE2EDuration="5.631198321s" podCreationTimestamp="2025-12-08 17:55:32 +0000 UTC" firstStartedPulling="2025-12-08 17:55:34.545024061 +0000 UTC m=+804.342147295" lastFinishedPulling="2025-12-08 17:55:35.230574052 +0000 UTC m=+805.027697286" observedRunningTime="2025-12-08 17:55:37.629019142 +0000 UTC m=+807.426142386" watchObservedRunningTime="2025-12-08 17:55:37.631198321 +0000 UTC m=+807.428321555" Dec 08 17:55:41 crc kubenswrapper[5116]: I1208 17:55:41.349926 5116 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-86648f486b-vwfcf"] Dec 08 17:55:41 crc kubenswrapper[5116]: I1208 17:55:41.351111 5116 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="edeb3fcc-dc53-42c3-a87b-4729c2521788" containerName="pull" Dec 08 17:55:41 crc kubenswrapper[5116]: I1208 17:55:41.351129 5116 state_mem.go:107] "Deleted CPUSet assignment" podUID="edeb3fcc-dc53-42c3-a87b-4729c2521788" 
containerName="pull" Dec 08 17:55:41 crc kubenswrapper[5116]: I1208 17:55:41.351148 5116 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="edeb3fcc-dc53-42c3-a87b-4729c2521788" containerName="util" Dec 08 17:55:41 crc kubenswrapper[5116]: I1208 17:55:41.351156 5116 state_mem.go:107] "Deleted CPUSet assignment" podUID="edeb3fcc-dc53-42c3-a87b-4729c2521788" containerName="util" Dec 08 17:55:41 crc kubenswrapper[5116]: I1208 17:55:41.351185 5116 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="edeb3fcc-dc53-42c3-a87b-4729c2521788" containerName="extract" Dec 08 17:55:41 crc kubenswrapper[5116]: I1208 17:55:41.351192 5116 state_mem.go:107] "Deleted CPUSet assignment" podUID="edeb3fcc-dc53-42c3-a87b-4729c2521788" containerName="extract" Dec 08 17:55:41 crc kubenswrapper[5116]: I1208 17:55:41.351341 5116 memory_manager.go:356] "RemoveStaleState removing state" podUID="edeb3fcc-dc53-42c3-a87b-4729c2521788" containerName="extract" Dec 08 17:55:41 crc kubenswrapper[5116]: I1208 17:55:41.730495 5116 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-86648f486b-vwfcf"] Dec 08 17:55:41 crc kubenswrapper[5116]: I1208 17:55:41.730640 5116 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-86648f486b-vwfcf" Dec 08 17:55:41 crc kubenswrapper[5116]: I1208 17:55:41.735636 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-operators\"/\"kube-root-ca.crt\"" Dec 08 17:55:41 crc kubenswrapper[5116]: I1208 17:55:41.736159 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-operators\"/\"openshift-service-ca.crt\"" Dec 08 17:55:41 crc kubenswrapper[5116]: I1208 17:55:41.736836 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operators\"/\"obo-prometheus-operator-dockercfg-g27dk\"" Dec 08 17:55:41 crc kubenswrapper[5116]: I1208 17:55:41.754335 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-74cmv\" (UniqueName: \"kubernetes.io/projected/9580fca6-8837-4c17-a2f2-7ff29b31d7d7-kube-api-access-74cmv\") pod \"obo-prometheus-operator-86648f486b-vwfcf\" (UID: \"9580fca6-8837-4c17-a2f2-7ff29b31d7d7\") " pod="openshift-operators/obo-prometheus-operator-86648f486b-vwfcf" Dec 08 17:55:41 crc kubenswrapper[5116]: I1208 17:55:41.809416 5116 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-8594b6556b-xlx56"] Dec 08 17:55:41 crc kubenswrapper[5116]: I1208 17:55:41.847729 5116 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/elastic-operator-dfb5c5887-n6l5r"] Dec 08 17:55:41 crc kubenswrapper[5116]: I1208 17:55:41.849103 5116 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-8594b6556b-xlx56" Dec 08 17:55:41 crc kubenswrapper[5116]: I1208 17:55:41.852111 5116 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-8594b6556b-xlx56"] Dec 08 17:55:41 crc kubenswrapper[5116]: I1208 17:55:41.852146 5116 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-8594b6556b-8k8tn"] Dec 08 17:55:41 crc kubenswrapper[5116]: I1208 17:55:41.852157 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operators\"/\"obo-prometheus-operator-admission-webhook-service-cert\"" Dec 08 17:55:41 crc kubenswrapper[5116]: I1208 17:55:41.852532 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operators\"/\"obo-prometheus-operator-admission-webhook-dockercfg-dpgbw\"" Dec 08 17:55:41 crc kubenswrapper[5116]: I1208 17:55:41.852826 5116 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/elastic-operator-dfb5c5887-n6l5r" Dec 08 17:55:41 crc kubenswrapper[5116]: I1208 17:55:41.857168 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-74cmv\" (UniqueName: \"kubernetes.io/projected/9580fca6-8837-4c17-a2f2-7ff29b31d7d7-kube-api-access-74cmv\") pod \"obo-prometheus-operator-86648f486b-vwfcf\" (UID: \"9580fca6-8837-4c17-a2f2-7ff29b31d7d7\") " pod="openshift-operators/obo-prometheus-operator-86648f486b-vwfcf" Dec 08 17:55:41 crc kubenswrapper[5116]: I1208 17:55:41.858721 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"kube-root-ca.crt\"" Dec 08 17:55:41 crc kubenswrapper[5116]: I1208 17:55:41.859017 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"openshift-service-ca.crt\"" Dec 08 17:55:41 crc kubenswrapper[5116]: I1208 17:55:41.860396 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"elastic-operator-service-cert\"" Dec 08 17:55:41 crc kubenswrapper[5116]: I1208 17:55:41.862578 5116 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/elastic-operator-dfb5c5887-n6l5r"] Dec 08 17:55:41 crc kubenswrapper[5116]: I1208 17:55:41.862609 5116 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-8594b6556b-8k8tn"] Dec 08 17:55:41 crc kubenswrapper[5116]: I1208 17:55:41.862715 5116 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-8594b6556b-8k8tn" Dec 08 17:55:41 crc kubenswrapper[5116]: I1208 17:55:41.870924 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"elastic-operator-dockercfg-c5s9k\"" Dec 08 17:55:41 crc kubenswrapper[5116]: I1208 17:55:41.888900 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-74cmv\" (UniqueName: \"kubernetes.io/projected/9580fca6-8837-4c17-a2f2-7ff29b31d7d7-kube-api-access-74cmv\") pod \"obo-prometheus-operator-86648f486b-vwfcf\" (UID: \"9580fca6-8837-4c17-a2f2-7ff29b31d7d7\") " pod="openshift-operators/obo-prometheus-operator-86648f486b-vwfcf" Dec 08 17:55:41 crc kubenswrapper[5116]: I1208 17:55:41.908249 5116 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-operators/observability-operator-78c97476f4-nbxvs"] Dec 08 17:55:41 crc kubenswrapper[5116]: I1208 17:55:41.913639 5116 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/observability-operator-78c97476f4-nbxvs" Dec 08 17:55:41 crc kubenswrapper[5116]: I1208 17:55:41.918234 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operators\"/\"observability-operator-tls\"" Dec 08 17:55:41 crc kubenswrapper[5116]: I1208 17:55:41.918491 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operators\"/\"observability-operator-sa-dockercfg-9ct47\"" Dec 08 17:55:41 crc kubenswrapper[5116]: I1208 17:55:41.923438 5116 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operators/observability-operator-78c97476f4-nbxvs"] Dec 08 17:55:41 crc kubenswrapper[5116]: I1208 17:55:41.958939 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/1b1c0211-f4fe-4f3e-ba35-4537f470e6b1-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-8594b6556b-8k8tn\" (UID: \"1b1c0211-f4fe-4f3e-ba35-4537f470e6b1\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-8594b6556b-8k8tn" Dec 08 17:55:41 crc kubenswrapper[5116]: I1208 17:55:41.958990 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/1b1c0211-f4fe-4f3e-ba35-4537f470e6b1-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-8594b6556b-8k8tn\" (UID: \"1b1c0211-f4fe-4f3e-ba35-4537f470e6b1\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-8594b6556b-8k8tn" Dec 08 17:55:41 crc kubenswrapper[5116]: I1208 17:55:41.959011 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/b9b3a903-b162-44c2-9dba-a93c2dd8db40-observability-operator-tls\") pod \"observability-operator-78c97476f4-nbxvs\" (UID: 
\"b9b3a903-b162-44c2-9dba-a93c2dd8db40\") " pod="openshift-operators/observability-operator-78c97476f4-nbxvs" Dec 08 17:55:41 crc kubenswrapper[5116]: I1208 17:55:41.959053 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/6a29e1f6-c9e5-4414-8e7e-7a4948580b8e-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-8594b6556b-xlx56\" (UID: \"6a29e1f6-c9e5-4414-8e7e-7a4948580b8e\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-8594b6556b-xlx56" Dec 08 17:55:41 crc kubenswrapper[5116]: I1208 17:55:41.959070 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cktq4\" (UniqueName: \"kubernetes.io/projected/b48e060d-6717-4aac-9497-a3bcc2982f79-kube-api-access-cktq4\") pod \"elastic-operator-dfb5c5887-n6l5r\" (UID: \"b48e060d-6717-4aac-9497-a3bcc2982f79\") " pod="service-telemetry/elastic-operator-dfb5c5887-n6l5r" Dec 08 17:55:41 crc kubenswrapper[5116]: I1208 17:55:41.959091 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/b48e060d-6717-4aac-9497-a3bcc2982f79-webhook-cert\") pod \"elastic-operator-dfb5c5887-n6l5r\" (UID: \"b48e060d-6717-4aac-9497-a3bcc2982f79\") " pod="service-telemetry/elastic-operator-dfb5c5887-n6l5r" Dec 08 17:55:41 crc kubenswrapper[5116]: I1208 17:55:41.959144 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/6a29e1f6-c9e5-4414-8e7e-7a4948580b8e-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-8594b6556b-xlx56\" (UID: \"6a29e1f6-c9e5-4414-8e7e-7a4948580b8e\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-8594b6556b-xlx56" Dec 08 17:55:41 crc kubenswrapper[5116]: I1208 17:55:41.959172 5116 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k4wg4\" (UniqueName: \"kubernetes.io/projected/b9b3a903-b162-44c2-9dba-a93c2dd8db40-kube-api-access-k4wg4\") pod \"observability-operator-78c97476f4-nbxvs\" (UID: \"b9b3a903-b162-44c2-9dba-a93c2dd8db40\") " pod="openshift-operators/observability-operator-78c97476f4-nbxvs" Dec 08 17:55:41 crc kubenswrapper[5116]: I1208 17:55:41.959189 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/b48e060d-6717-4aac-9497-a3bcc2982f79-apiservice-cert\") pod \"elastic-operator-dfb5c5887-n6l5r\" (UID: \"b48e060d-6717-4aac-9497-a3bcc2982f79\") " pod="service-telemetry/elastic-operator-dfb5c5887-n6l5r" Dec 08 17:55:42 crc kubenswrapper[5116]: I1208 17:55:42.006710 5116 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-operators/perses-operator-68bdb49cbf-cfwmd"] Dec 08 17:55:42 crc kubenswrapper[5116]: I1208 17:55:42.019767 5116 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operators/perses-operator-68bdb49cbf-cfwmd"] Dec 08 17:55:42 crc kubenswrapper[5116]: I1208 17:55:42.019934 5116 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/perses-operator-68bdb49cbf-cfwmd" Dec 08 17:55:42 crc kubenswrapper[5116]: I1208 17:55:42.032280 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operators\"/\"perses-operator-dockercfg-mshgw\"" Dec 08 17:55:42 crc kubenswrapper[5116]: I1208 17:55:42.060438 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/6a29e1f6-c9e5-4414-8e7e-7a4948580b8e-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-8594b6556b-xlx56\" (UID: \"6a29e1f6-c9e5-4414-8e7e-7a4948580b8e\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-8594b6556b-xlx56" Dec 08 17:55:42 crc kubenswrapper[5116]: I1208 17:55:42.060498 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-cktq4\" (UniqueName: \"kubernetes.io/projected/b48e060d-6717-4aac-9497-a3bcc2982f79-kube-api-access-cktq4\") pod \"elastic-operator-dfb5c5887-n6l5r\" (UID: \"b48e060d-6717-4aac-9497-a3bcc2982f79\") " pod="service-telemetry/elastic-operator-dfb5c5887-n6l5r" Dec 08 17:55:42 crc kubenswrapper[5116]: I1208 17:55:42.060546 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/b48e060d-6717-4aac-9497-a3bcc2982f79-webhook-cert\") pod \"elastic-operator-dfb5c5887-n6l5r\" (UID: \"b48e060d-6717-4aac-9497-a3bcc2982f79\") " pod="service-telemetry/elastic-operator-dfb5c5887-n6l5r" Dec 08 17:55:42 crc kubenswrapper[5116]: I1208 17:55:42.060591 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m5hbq\" (UniqueName: \"kubernetes.io/projected/c51aec57-1a81-4f4e-bdcf-6e9da302affb-kube-api-access-m5hbq\") pod \"perses-operator-68bdb49cbf-cfwmd\" (UID: \"c51aec57-1a81-4f4e-bdcf-6e9da302affb\") " 
pod="openshift-operators/perses-operator-68bdb49cbf-cfwmd" Dec 08 17:55:42 crc kubenswrapper[5116]: I1208 17:55:42.060645 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/6a29e1f6-c9e5-4414-8e7e-7a4948580b8e-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-8594b6556b-xlx56\" (UID: \"6a29e1f6-c9e5-4414-8e7e-7a4948580b8e\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-8594b6556b-xlx56" Dec 08 17:55:42 crc kubenswrapper[5116]: I1208 17:55:42.060675 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-k4wg4\" (UniqueName: \"kubernetes.io/projected/b9b3a903-b162-44c2-9dba-a93c2dd8db40-kube-api-access-k4wg4\") pod \"observability-operator-78c97476f4-nbxvs\" (UID: \"b9b3a903-b162-44c2-9dba-a93c2dd8db40\") " pod="openshift-operators/observability-operator-78c97476f4-nbxvs" Dec 08 17:55:42 crc kubenswrapper[5116]: I1208 17:55:42.060698 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/b48e060d-6717-4aac-9497-a3bcc2982f79-apiservice-cert\") pod \"elastic-operator-dfb5c5887-n6l5r\" (UID: \"b48e060d-6717-4aac-9497-a3bcc2982f79\") " pod="service-telemetry/elastic-operator-dfb5c5887-n6l5r" Dec 08 17:55:42 crc kubenswrapper[5116]: I1208 17:55:42.060763 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/1b1c0211-f4fe-4f3e-ba35-4537f470e6b1-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-8594b6556b-8k8tn\" (UID: \"1b1c0211-f4fe-4f3e-ba35-4537f470e6b1\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-8594b6556b-8k8tn" Dec 08 17:55:42 crc kubenswrapper[5116]: I1208 17:55:42.060807 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: 
\"kubernetes.io/secret/1b1c0211-f4fe-4f3e-ba35-4537f470e6b1-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-8594b6556b-8k8tn\" (UID: \"1b1c0211-f4fe-4f3e-ba35-4537f470e6b1\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-8594b6556b-8k8tn" Dec 08 17:55:42 crc kubenswrapper[5116]: I1208 17:55:42.060828 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/b9b3a903-b162-44c2-9dba-a93c2dd8db40-observability-operator-tls\") pod \"observability-operator-78c97476f4-nbxvs\" (UID: \"b9b3a903-b162-44c2-9dba-a93c2dd8db40\") " pod="openshift-operators/observability-operator-78c97476f4-nbxvs" Dec 08 17:55:42 crc kubenswrapper[5116]: I1208 17:55:42.060858 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/c51aec57-1a81-4f4e-bdcf-6e9da302affb-openshift-service-ca\") pod \"perses-operator-68bdb49cbf-cfwmd\" (UID: \"c51aec57-1a81-4f4e-bdcf-6e9da302affb\") " pod="openshift-operators/perses-operator-68bdb49cbf-cfwmd" Dec 08 17:55:42 crc kubenswrapper[5116]: I1208 17:55:42.081885 5116 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-86648f486b-vwfcf" Dec 08 17:55:42 crc kubenswrapper[5116]: I1208 17:55:42.082316 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/6a29e1f6-c9e5-4414-8e7e-7a4948580b8e-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-8594b6556b-xlx56\" (UID: \"6a29e1f6-c9e5-4414-8e7e-7a4948580b8e\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-8594b6556b-xlx56" Dec 08 17:55:42 crc kubenswrapper[5116]: I1208 17:55:42.082997 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/b48e060d-6717-4aac-9497-a3bcc2982f79-webhook-cert\") pod \"elastic-operator-dfb5c5887-n6l5r\" (UID: \"b48e060d-6717-4aac-9497-a3bcc2982f79\") " pod="service-telemetry/elastic-operator-dfb5c5887-n6l5r" Dec 08 17:55:42 crc kubenswrapper[5116]: I1208 17:55:42.083185 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/b9b3a903-b162-44c2-9dba-a93c2dd8db40-observability-operator-tls\") pod \"observability-operator-78c97476f4-nbxvs\" (UID: \"b9b3a903-b162-44c2-9dba-a93c2dd8db40\") " pod="openshift-operators/observability-operator-78c97476f4-nbxvs" Dec 08 17:55:42 crc kubenswrapper[5116]: I1208 17:55:42.083346 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/1b1c0211-f4fe-4f3e-ba35-4537f470e6b1-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-8594b6556b-8k8tn\" (UID: \"1b1c0211-f4fe-4f3e-ba35-4537f470e6b1\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-8594b6556b-8k8tn" Dec 08 17:55:42 crc kubenswrapper[5116]: I1208 17:55:42.083409 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: 
\"kubernetes.io/secret/b48e060d-6717-4aac-9497-a3bcc2982f79-apiservice-cert\") pod \"elastic-operator-dfb5c5887-n6l5r\" (UID: \"b48e060d-6717-4aac-9497-a3bcc2982f79\") " pod="service-telemetry/elastic-operator-dfb5c5887-n6l5r" Dec 08 17:55:42 crc kubenswrapper[5116]: I1208 17:55:42.107036 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/1b1c0211-f4fe-4f3e-ba35-4537f470e6b1-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-8594b6556b-8k8tn\" (UID: \"1b1c0211-f4fe-4f3e-ba35-4537f470e6b1\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-8594b6556b-8k8tn" Dec 08 17:55:42 crc kubenswrapper[5116]: I1208 17:55:42.107082 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/6a29e1f6-c9e5-4414-8e7e-7a4948580b8e-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-8594b6556b-xlx56\" (UID: \"6a29e1f6-c9e5-4414-8e7e-7a4948580b8e\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-8594b6556b-xlx56" Dec 08 17:55:42 crc kubenswrapper[5116]: I1208 17:55:42.116224 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-cktq4\" (UniqueName: \"kubernetes.io/projected/b48e060d-6717-4aac-9497-a3bcc2982f79-kube-api-access-cktq4\") pod \"elastic-operator-dfb5c5887-n6l5r\" (UID: \"b48e060d-6717-4aac-9497-a3bcc2982f79\") " pod="service-telemetry/elastic-operator-dfb5c5887-n6l5r" Dec 08 17:55:42 crc kubenswrapper[5116]: I1208 17:55:42.138169 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-k4wg4\" (UniqueName: \"kubernetes.io/projected/b9b3a903-b162-44c2-9dba-a93c2dd8db40-kube-api-access-k4wg4\") pod \"observability-operator-78c97476f4-nbxvs\" (UID: \"b9b3a903-b162-44c2-9dba-a93c2dd8db40\") " pod="openshift-operators/observability-operator-78c97476f4-nbxvs" Dec 08 17:55:42 crc kubenswrapper[5116]: 
I1208 17:55:42.161930 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/c51aec57-1a81-4f4e-bdcf-6e9da302affb-openshift-service-ca\") pod \"perses-operator-68bdb49cbf-cfwmd\" (UID: \"c51aec57-1a81-4f4e-bdcf-6e9da302affb\") " pod="openshift-operators/perses-operator-68bdb49cbf-cfwmd" Dec 08 17:55:42 crc kubenswrapper[5116]: I1208 17:55:42.162006 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-m5hbq\" (UniqueName: \"kubernetes.io/projected/c51aec57-1a81-4f4e-bdcf-6e9da302affb-kube-api-access-m5hbq\") pod \"perses-operator-68bdb49cbf-cfwmd\" (UID: \"c51aec57-1a81-4f4e-bdcf-6e9da302affb\") " pod="openshift-operators/perses-operator-68bdb49cbf-cfwmd" Dec 08 17:55:42 crc kubenswrapper[5116]: I1208 17:55:42.163439 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/c51aec57-1a81-4f4e-bdcf-6e9da302affb-openshift-service-ca\") pod \"perses-operator-68bdb49cbf-cfwmd\" (UID: \"c51aec57-1a81-4f4e-bdcf-6e9da302affb\") " pod="openshift-operators/perses-operator-68bdb49cbf-cfwmd" Dec 08 17:55:42 crc kubenswrapper[5116]: I1208 17:55:42.173422 5116 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-8594b6556b-xlx56" Dec 08 17:55:42 crc kubenswrapper[5116]: I1208 17:55:42.210102 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-m5hbq\" (UniqueName: \"kubernetes.io/projected/c51aec57-1a81-4f4e-bdcf-6e9da302affb-kube-api-access-m5hbq\") pod \"perses-operator-68bdb49cbf-cfwmd\" (UID: \"c51aec57-1a81-4f4e-bdcf-6e9da302affb\") " pod="openshift-operators/perses-operator-68bdb49cbf-cfwmd" Dec 08 17:55:42 crc kubenswrapper[5116]: I1208 17:55:42.217330 5116 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/elastic-operator-dfb5c5887-n6l5r" Dec 08 17:55:42 crc kubenswrapper[5116]: I1208 17:55:42.230972 5116 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-8594b6556b-8k8tn" Dec 08 17:55:42 crc kubenswrapper[5116]: I1208 17:55:42.245923 5116 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-operator-78c97476f4-nbxvs" Dec 08 17:55:42 crc kubenswrapper[5116]: I1208 17:55:42.336377 5116 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-5d9d95bf5b-z5k8s" Dec 08 17:55:42 crc kubenswrapper[5116]: I1208 17:55:42.337655 5116 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/perses-operator-68bdb49cbf-cfwmd" Dec 08 17:55:42 crc kubenswrapper[5116]: I1208 17:55:42.423204 5116 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-66587d64c8-kt94l"] Dec 08 17:55:42 crc kubenswrapper[5116]: I1208 17:55:42.840182 5116 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/certified-operators-2ktdp" Dec 08 17:55:42 crc kubenswrapper[5116]: I1208 17:55:42.840574 5116 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-2ktdp" Dec 08 17:55:43 crc kubenswrapper[5116]: I1208 17:55:43.012239 5116 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-2ktdp" Dec 08 17:55:43 crc kubenswrapper[5116]: I1208 17:55:43.162439 5116 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-86648f486b-vwfcf"] Dec 08 17:55:43 crc kubenswrapper[5116]: I1208 17:55:43.284644 5116 kubelet.go:2544] "SyncLoop UPDATE" source="api" 
pods=["openshift-operators/obo-prometheus-operator-admission-webhook-8594b6556b-8k8tn"] Dec 08 17:55:43 crc kubenswrapper[5116]: I1208 17:55:43.497758 5116 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-8594b6556b-xlx56"] Dec 08 17:55:43 crc kubenswrapper[5116]: I1208 17:55:43.523699 5116 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/elastic-operator-dfb5c5887-n6l5r"] Dec 08 17:55:43 crc kubenswrapper[5116]: W1208 17:55:43.530108 5116 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb48e060d_6717_4aac_9497_a3bcc2982f79.slice/crio-349b68c967e2604978c5e1624da0b9d15d09c6abb0cb1eb2ea07fb56d392ca1a WatchSource:0}: Error finding container 349b68c967e2604978c5e1624da0b9d15d09c6abb0cb1eb2ea07fb56d392ca1a: Status 404 returned error can't find the container with id 349b68c967e2604978c5e1624da0b9d15d09c6abb0cb1eb2ea07fb56d392ca1a Dec 08 17:55:43 crc kubenswrapper[5116]: I1208 17:55:43.687459 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/elastic-operator-dfb5c5887-n6l5r" event={"ID":"b48e060d-6717-4aac-9497-a3bcc2982f79","Type":"ContainerStarted","Data":"349b68c967e2604978c5e1624da0b9d15d09c6abb0cb1eb2ea07fb56d392ca1a"} Dec 08 17:55:43 crc kubenswrapper[5116]: I1208 17:55:43.689416 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-8594b6556b-8k8tn" event={"ID":"1b1c0211-f4fe-4f3e-ba35-4537f470e6b1","Type":"ContainerStarted","Data":"7657dc2d1ad6aa4d91f7509229947a8b00aa73bbebbe047e161712bb7c1e23b6"} Dec 08 17:55:43 crc kubenswrapper[5116]: I1208 17:55:43.691935 5116 generic.go:358] "Generic (PLEG): container finished" podID="ea585dd0-ad22-42a1-b4ed-b12a9bfe721b" containerID="aaf41ea3e2b7a311a731387294130c82fa1cfaafd4093cff34e080123f984a79" exitCode=0 Dec 08 17:55:43 crc kubenswrapper[5116]: I1208 
17:55:43.692006 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a5mmzs" event={"ID":"ea585dd0-ad22-42a1-b4ed-b12a9bfe721b","Type":"ContainerDied","Data":"aaf41ea3e2b7a311a731387294130c82fa1cfaafd4093cff34e080123f984a79"} Dec 08 17:55:43 crc kubenswrapper[5116]: I1208 17:55:43.694796 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-86648f486b-vwfcf" event={"ID":"9580fca6-8837-4c17-a2f2-7ff29b31d7d7","Type":"ContainerStarted","Data":"b98080440d22d611152ddced351fc66198bc77154b79f3951470539af215098d"} Dec 08 17:55:43 crc kubenswrapper[5116]: I1208 17:55:43.697258 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-8594b6556b-xlx56" event={"ID":"6a29e1f6-c9e5-4414-8e7e-7a4948580b8e","Type":"ContainerStarted","Data":"512df4528166f4dc29ac61690dacc25e9fd73a4b79fa64c84a7a918e818dc497"} Dec 08 17:55:43 crc kubenswrapper[5116]: I1208 17:55:43.931038 5116 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operators/observability-operator-78c97476f4-nbxvs"] Dec 08 17:55:43 crc kubenswrapper[5116]: W1208 17:55:43.943490 5116 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb9b3a903_b162_44c2_9dba_a93c2dd8db40.slice/crio-3cc2903bd2be8447880c16e49d192abf5540fabb6d875a8dfd4ee3a420419ed4 WatchSource:0}: Error finding container 3cc2903bd2be8447880c16e49d192abf5540fabb6d875a8dfd4ee3a420419ed4: Status 404 returned error can't find the container with id 3cc2903bd2be8447880c16e49d192abf5540fabb6d875a8dfd4ee3a420419ed4 Dec 08 17:55:43 crc kubenswrapper[5116]: I1208 17:55:43.945295 5116 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operators/perses-operator-68bdb49cbf-cfwmd"] Dec 08 17:55:43 crc kubenswrapper[5116]: W1208 17:55:43.947562 5116 manager.go:1169] Failed 
to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc51aec57_1a81_4f4e_bdcf_6e9da302affb.slice/crio-de2fca8113d3295b8da0052176381592ba4b0c206efbf168b2b16f88567ec55e WatchSource:0}: Error finding container de2fca8113d3295b8da0052176381592ba4b0c206efbf168b2b16f88567ec55e: Status 404 returned error can't find the container with id de2fca8113d3295b8da0052176381592ba4b0c206efbf168b2b16f88567ec55e Dec 08 17:55:43 crc kubenswrapper[5116]: I1208 17:55:43.951825 5116 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-2ktdp" Dec 08 17:55:44 crc kubenswrapper[5116]: I1208 17:55:44.732679 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operators/perses-operator-68bdb49cbf-cfwmd" event={"ID":"c51aec57-1a81-4f4e-bdcf-6e9da302affb","Type":"ContainerStarted","Data":"de2fca8113d3295b8da0052176381592ba4b0c206efbf168b2b16f88567ec55e"} Dec 08 17:55:44 crc kubenswrapper[5116]: I1208 17:55:44.744281 5116 generic.go:358] "Generic (PLEG): container finished" podID="ea585dd0-ad22-42a1-b4ed-b12a9bfe721b" containerID="2c616c4c53127ce78494cc645520cb32b9c7b2143c52c02a8d79de4b878af76b" exitCode=0 Dec 08 17:55:44 crc kubenswrapper[5116]: I1208 17:55:44.744362 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a5mmzs" event={"ID":"ea585dd0-ad22-42a1-b4ed-b12a9bfe721b","Type":"ContainerDied","Data":"2c616c4c53127ce78494cc645520cb32b9c7b2143c52c02a8d79de4b878af76b"} Dec 08 17:55:44 crc kubenswrapper[5116]: I1208 17:55:44.748968 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operators/observability-operator-78c97476f4-nbxvs" event={"ID":"b9b3a903-b162-44c2-9dba-a93c2dd8db40","Type":"ContainerStarted","Data":"3cc2903bd2be8447880c16e49d192abf5540fabb6d875a8dfd4ee3a420419ed4"} Dec 08 17:55:47 crc kubenswrapper[5116]: I1208 17:55:47.392616 5116 
kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-2ktdp"] Dec 08 17:55:47 crc kubenswrapper[5116]: I1208 17:55:47.393056 5116 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-2ktdp" podUID="8d8456fc-a61a-415b-b2a1-4f7255b01fa3" containerName="registry-server" containerID="cri-o://9a0bbc96d0cd2500e36af63de929ce5bdc591b42a17fc10db7f3aab335aa2e2c" gracePeriod=2 Dec 08 17:55:47 crc kubenswrapper[5116]: I1208 17:55:47.921902 5116 generic.go:358] "Generic (PLEG): container finished" podID="8d8456fc-a61a-415b-b2a1-4f7255b01fa3" containerID="9a0bbc96d0cd2500e36af63de929ce5bdc591b42a17fc10db7f3aab335aa2e2c" exitCode=0 Dec 08 17:55:47 crc kubenswrapper[5116]: I1208 17:55:47.922624 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-2ktdp" event={"ID":"8d8456fc-a61a-415b-b2a1-4f7255b01fa3","Type":"ContainerDied","Data":"9a0bbc96d0cd2500e36af63de929ce5bdc591b42a17fc10db7f3aab335aa2e2c"} Dec 08 17:55:49 crc kubenswrapper[5116]: I1208 17:55:49.650137 5116 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a5mmzs" Dec 08 17:55:49 crc kubenswrapper[5116]: I1208 17:55:49.658852 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/ea585dd0-ad22-42a1-b4ed-b12a9bfe721b-util\") pod \"ea585dd0-ad22-42a1-b4ed-b12a9bfe721b\" (UID: \"ea585dd0-ad22-42a1-b4ed-b12a9bfe721b\") " Dec 08 17:55:49 crc kubenswrapper[5116]: I1208 17:55:49.659107 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/ea585dd0-ad22-42a1-b4ed-b12a9bfe721b-bundle\") pod \"ea585dd0-ad22-42a1-b4ed-b12a9bfe721b\" (UID: \"ea585dd0-ad22-42a1-b4ed-b12a9bfe721b\") " Dec 08 17:55:49 crc kubenswrapper[5116]: I1208 17:55:49.659164 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hwftv\" (UniqueName: \"kubernetes.io/projected/ea585dd0-ad22-42a1-b4ed-b12a9bfe721b-kube-api-access-hwftv\") pod \"ea585dd0-ad22-42a1-b4ed-b12a9bfe721b\" (UID: \"ea585dd0-ad22-42a1-b4ed-b12a9bfe721b\") " Dec 08 17:55:49 crc kubenswrapper[5116]: I1208 17:55:49.662179 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ea585dd0-ad22-42a1-b4ed-b12a9bfe721b-bundle" (OuterVolumeSpecName: "bundle") pod "ea585dd0-ad22-42a1-b4ed-b12a9bfe721b" (UID: "ea585dd0-ad22-42a1-b4ed-b12a9bfe721b"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 08 17:55:49 crc kubenswrapper[5116]: I1208 17:55:49.678135 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ea585dd0-ad22-42a1-b4ed-b12a9bfe721b-util" (OuterVolumeSpecName: "util") pod "ea585dd0-ad22-42a1-b4ed-b12a9bfe721b" (UID: "ea585dd0-ad22-42a1-b4ed-b12a9bfe721b"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 08 17:55:49 crc kubenswrapper[5116]: I1208 17:55:49.679481 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ea585dd0-ad22-42a1-b4ed-b12a9bfe721b-kube-api-access-hwftv" (OuterVolumeSpecName: "kube-api-access-hwftv") pod "ea585dd0-ad22-42a1-b4ed-b12a9bfe721b" (UID: "ea585dd0-ad22-42a1-b4ed-b12a9bfe721b"). InnerVolumeSpecName "kube-api-access-hwftv". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 08 17:55:49 crc kubenswrapper[5116]: I1208 17:55:49.761144 5116 reconciler_common.go:299] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/ea585dd0-ad22-42a1-b4ed-b12a9bfe721b-bundle\") on node \"crc\" DevicePath \"\"" Dec 08 17:55:49 crc kubenswrapper[5116]: I1208 17:55:49.761180 5116 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-hwftv\" (UniqueName: \"kubernetes.io/projected/ea585dd0-ad22-42a1-b4ed-b12a9bfe721b-kube-api-access-hwftv\") on node \"crc\" DevicePath \"\"" Dec 08 17:55:49 crc kubenswrapper[5116]: I1208 17:55:49.761194 5116 reconciler_common.go:299] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/ea585dd0-ad22-42a1-b4ed-b12a9bfe721b-util\") on node \"crc\" DevicePath \"\"" Dec 08 17:55:49 crc kubenswrapper[5116]: I1208 17:55:49.982318 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a5mmzs" event={"ID":"ea585dd0-ad22-42a1-b4ed-b12a9bfe721b","Type":"ContainerDied","Data":"8d5db3d9641c705aa6bad5b70a00d3bcb41a91f1951a45c8c5bfe74ab595ef33"} Dec 08 17:55:49 crc kubenswrapper[5116]: I1208 17:55:49.982390 5116 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8d5db3d9641c705aa6bad5b70a00d3bcb41a91f1951a45c8c5bfe74ab595ef33" Dec 08 17:55:49 crc kubenswrapper[5116]: I1208 17:55:49.982510 5116 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a5mmzs" Dec 08 17:55:50 crc kubenswrapper[5116]: I1208 17:55:50.027203 5116 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-2ktdp" Dec 08 17:55:50 crc kubenswrapper[5116]: I1208 17:55:50.068916 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8d8456fc-a61a-415b-b2a1-4f7255b01fa3-catalog-content\") pod \"8d8456fc-a61a-415b-b2a1-4f7255b01fa3\" (UID: \"8d8456fc-a61a-415b-b2a1-4f7255b01fa3\") " Dec 08 17:55:50 crc kubenswrapper[5116]: I1208 17:55:50.069003 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8d8456fc-a61a-415b-b2a1-4f7255b01fa3-utilities\") pod \"8d8456fc-a61a-415b-b2a1-4f7255b01fa3\" (UID: \"8d8456fc-a61a-415b-b2a1-4f7255b01fa3\") " Dec 08 17:55:50 crc kubenswrapper[5116]: I1208 17:55:50.069130 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-48k7j\" (UniqueName: \"kubernetes.io/projected/8d8456fc-a61a-415b-b2a1-4f7255b01fa3-kube-api-access-48k7j\") pod \"8d8456fc-a61a-415b-b2a1-4f7255b01fa3\" (UID: \"8d8456fc-a61a-415b-b2a1-4f7255b01fa3\") " Dec 08 17:55:50 crc kubenswrapper[5116]: I1208 17:55:50.070626 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8d8456fc-a61a-415b-b2a1-4f7255b01fa3-utilities" (OuterVolumeSpecName: "utilities") pod "8d8456fc-a61a-415b-b2a1-4f7255b01fa3" (UID: "8d8456fc-a61a-415b-b2a1-4f7255b01fa3"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 08 17:55:50 crc kubenswrapper[5116]: I1208 17:55:50.082215 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8d8456fc-a61a-415b-b2a1-4f7255b01fa3-kube-api-access-48k7j" (OuterVolumeSpecName: "kube-api-access-48k7j") pod "8d8456fc-a61a-415b-b2a1-4f7255b01fa3" (UID: "8d8456fc-a61a-415b-b2a1-4f7255b01fa3"). InnerVolumeSpecName "kube-api-access-48k7j". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 08 17:55:50 crc kubenswrapper[5116]: I1208 17:55:50.110836 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8d8456fc-a61a-415b-b2a1-4f7255b01fa3-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "8d8456fc-a61a-415b-b2a1-4f7255b01fa3" (UID: "8d8456fc-a61a-415b-b2a1-4f7255b01fa3"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 08 17:55:50 crc kubenswrapper[5116]: I1208 17:55:50.170915 5116 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8d8456fc-a61a-415b-b2a1-4f7255b01fa3-utilities\") on node \"crc\" DevicePath \"\"" Dec 08 17:55:50 crc kubenswrapper[5116]: I1208 17:55:50.170952 5116 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-48k7j\" (UniqueName: \"kubernetes.io/projected/8d8456fc-a61a-415b-b2a1-4f7255b01fa3-kube-api-access-48k7j\") on node \"crc\" DevicePath \"\"" Dec 08 17:55:50 crc kubenswrapper[5116]: I1208 17:55:50.171910 5116 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8d8456fc-a61a-415b-b2a1-4f7255b01fa3-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 08 17:55:51 crc kubenswrapper[5116]: I1208 17:55:51.080150 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-2ktdp" 
event={"ID":"8d8456fc-a61a-415b-b2a1-4f7255b01fa3","Type":"ContainerDied","Data":"b53e7d02b679c2f023115ae814e0d4f0c77479a3de7757fc0600be27a4a72e44"} Dec 08 17:55:51 crc kubenswrapper[5116]: I1208 17:55:51.080274 5116 scope.go:117] "RemoveContainer" containerID="9a0bbc96d0cd2500e36af63de929ce5bdc591b42a17fc10db7f3aab335aa2e2c" Dec 08 17:55:51 crc kubenswrapper[5116]: I1208 17:55:51.080559 5116 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-2ktdp" Dec 08 17:55:51 crc kubenswrapper[5116]: I1208 17:55:51.118402 5116 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-2ktdp"] Dec 08 17:55:51 crc kubenswrapper[5116]: I1208 17:55:51.125632 5116 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-2ktdp"] Dec 08 17:55:52 crc kubenswrapper[5116]: I1208 17:55:52.687887 5116 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8d8456fc-a61a-415b-b2a1-4f7255b01fa3" path="/var/lib/kubelet/pods/8d8456fc-a61a-415b-b2a1-4f7255b01fa3/volumes" Dec 08 17:55:53 crc kubenswrapper[5116]: I1208 17:55:53.518152 5116 scope.go:117] "RemoveContainer" containerID="d18c569cd0973c2301b9e39de6f34a98da4c57428db545ea31936711563dd636" Dec 08 17:55:53 crc kubenswrapper[5116]: I1208 17:55:53.611788 5116 scope.go:117] "RemoveContainer" containerID="50dd2ec4ecee71475a91c1792e4dd0dcbbb66b820b88e19a95f2a680a1e3d8e2" Dec 08 17:55:54 crc kubenswrapper[5116]: I1208 17:55:54.134304 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/elastic-operator-dfb5c5887-n6l5r" event={"ID":"b48e060d-6717-4aac-9497-a3bcc2982f79","Type":"ContainerStarted","Data":"e258dc50adbf1426aab9c473c731aca003e82a887eac32cbad6b8bcea7f39ffc"} Dec 08 17:56:00 crc kubenswrapper[5116]: I1208 17:56:00.018259 5116 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="service-telemetry/elastic-operator-dfb5c5887-n6l5r" podStartSLOduration=8.939246394 podStartE2EDuration="19.018215278s" podCreationTimestamp="2025-12-08 17:55:41 +0000 UTC" firstStartedPulling="2025-12-08 17:55:43.532926345 +0000 UTC m=+813.330049579" lastFinishedPulling="2025-12-08 17:55:53.611895229 +0000 UTC m=+823.409018463" observedRunningTime="2025-12-08 17:55:54.245547002 +0000 UTC m=+824.042670246" watchObservedRunningTime="2025-12-08 17:56:00.018215278 +0000 UTC m=+829.815338522" Dec 08 17:56:00 crc kubenswrapper[5116]: I1208 17:56:00.019306 5116 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["cert-manager-operator/cert-manager-operator-controller-manager-64c74584c4-hdw8c"] Dec 08 17:56:00 crc kubenswrapper[5116]: I1208 17:56:00.020076 5116 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="8d8456fc-a61a-415b-b2a1-4f7255b01fa3" containerName="registry-server" Dec 08 17:56:00 crc kubenswrapper[5116]: I1208 17:56:00.020110 5116 state_mem.go:107] "Deleted CPUSet assignment" podUID="8d8456fc-a61a-415b-b2a1-4f7255b01fa3" containerName="registry-server" Dec 08 17:56:00 crc kubenswrapper[5116]: I1208 17:56:00.020124 5116 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="8d8456fc-a61a-415b-b2a1-4f7255b01fa3" containerName="extract-utilities" Dec 08 17:56:00 crc kubenswrapper[5116]: I1208 17:56:00.020134 5116 state_mem.go:107] "Deleted CPUSet assignment" podUID="8d8456fc-a61a-415b-b2a1-4f7255b01fa3" containerName="extract-utilities" Dec 08 17:56:00 crc kubenswrapper[5116]: I1208 17:56:00.020146 5116 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="ea585dd0-ad22-42a1-b4ed-b12a9bfe721b" containerName="util" Dec 08 17:56:00 crc kubenswrapper[5116]: I1208 17:56:00.020153 5116 state_mem.go:107] "Deleted CPUSet assignment" podUID="ea585dd0-ad22-42a1-b4ed-b12a9bfe721b" containerName="util" Dec 08 17:56:00 crc kubenswrapper[5116]: I1208 17:56:00.020165 5116 
cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="ea585dd0-ad22-42a1-b4ed-b12a9bfe721b" containerName="extract" Dec 08 17:56:00 crc kubenswrapper[5116]: I1208 17:56:00.020174 5116 state_mem.go:107] "Deleted CPUSet assignment" podUID="ea585dd0-ad22-42a1-b4ed-b12a9bfe721b" containerName="extract" Dec 08 17:56:00 crc kubenswrapper[5116]: I1208 17:56:00.020192 5116 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="8d8456fc-a61a-415b-b2a1-4f7255b01fa3" containerName="extract-content" Dec 08 17:56:00 crc kubenswrapper[5116]: I1208 17:56:00.020198 5116 state_mem.go:107] "Deleted CPUSet assignment" podUID="8d8456fc-a61a-415b-b2a1-4f7255b01fa3" containerName="extract-content" Dec 08 17:56:00 crc kubenswrapper[5116]: I1208 17:56:00.020223 5116 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="ea585dd0-ad22-42a1-b4ed-b12a9bfe721b" containerName="pull" Dec 08 17:56:00 crc kubenswrapper[5116]: I1208 17:56:00.020230 5116 state_mem.go:107] "Deleted CPUSet assignment" podUID="ea585dd0-ad22-42a1-b4ed-b12a9bfe721b" containerName="pull" Dec 08 17:56:00 crc kubenswrapper[5116]: I1208 17:56:00.020410 5116 memory_manager.go:356] "RemoveStaleState removing state" podUID="ea585dd0-ad22-42a1-b4ed-b12a9bfe721b" containerName="extract" Dec 08 17:56:00 crc kubenswrapper[5116]: I1208 17:56:00.020428 5116 memory_manager.go:356] "RemoveStaleState removing state" podUID="8d8456fc-a61a-415b-b2a1-4f7255b01fa3" containerName="registry-server" Dec 08 17:56:00 crc kubenswrapper[5116]: I1208 17:56:00.396606 5116 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["cert-manager-operator/cert-manager-operator-controller-manager-64c74584c4-hdw8c"] Dec 08 17:56:00 crc kubenswrapper[5116]: I1208 17:56:00.396890 5116 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager-operator/cert-manager-operator-controller-manager-64c74584c4-hdw8c" Dec 08 17:56:00 crc kubenswrapper[5116]: I1208 17:56:00.403171 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"cert-manager-operator\"/\"openshift-service-ca.crt\"" Dec 08 17:56:00 crc kubenswrapper[5116]: I1208 17:56:00.403667 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"cert-manager-operator\"/\"kube-root-ca.crt\"" Dec 08 17:56:00 crc kubenswrapper[5116]: I1208 17:56:00.403916 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"cert-manager-operator\"/\"cert-manager-operator-controller-manager-dockercfg-dgtld\"" Dec 08 17:56:00 crc kubenswrapper[5116]: I1208 17:56:00.464304 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mwwq4\" (UniqueName: \"kubernetes.io/projected/b1eeeb05-d274-4ebd-bdf6-2159bcf65ec3-kube-api-access-mwwq4\") pod \"cert-manager-operator-controller-manager-64c74584c4-hdw8c\" (UID: \"b1eeeb05-d274-4ebd-bdf6-2159bcf65ec3\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-64c74584c4-hdw8c" Dec 08 17:56:00 crc kubenswrapper[5116]: I1208 17:56:00.464388 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/b1eeeb05-d274-4ebd-bdf6-2159bcf65ec3-tmp\") pod \"cert-manager-operator-controller-manager-64c74584c4-hdw8c\" (UID: \"b1eeeb05-d274-4ebd-bdf6-2159bcf65ec3\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-64c74584c4-hdw8c" Dec 08 17:56:00 crc kubenswrapper[5116]: I1208 17:56:00.503371 5116 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/elasticsearch-es-default-0"] Dec 08 17:56:00 crc kubenswrapper[5116]: I1208 17:56:00.515359 5116 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/elasticsearch-es-default-0" Dec 08 17:56:00 crc kubenswrapper[5116]: I1208 17:56:00.566301 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-mwwq4\" (UniqueName: \"kubernetes.io/projected/b1eeeb05-d274-4ebd-bdf6-2159bcf65ec3-kube-api-access-mwwq4\") pod \"cert-manager-operator-controller-manager-64c74584c4-hdw8c\" (UID: \"b1eeeb05-d274-4ebd-bdf6-2159bcf65ec3\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-64c74584c4-hdw8c" Dec 08 17:56:00 crc kubenswrapper[5116]: I1208 17:56:00.566379 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/b1eeeb05-d274-4ebd-bdf6-2159bcf65ec3-tmp\") pod \"cert-manager-operator-controller-manager-64c74584c4-hdw8c\" (UID: \"b1eeeb05-d274-4ebd-bdf6-2159bcf65ec3\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-64c74584c4-hdw8c" Dec 08 17:56:00 crc kubenswrapper[5116]: I1208 17:56:00.567068 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/b1eeeb05-d274-4ebd-bdf6-2159bcf65ec3-tmp\") pod \"cert-manager-operator-controller-manager-64c74584c4-hdw8c\" (UID: \"b1eeeb05-d274-4ebd-bdf6-2159bcf65ec3\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-64c74584c4-hdw8c" Dec 08 17:56:00 crc kubenswrapper[5116]: I1208 17:56:00.667799 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-unicast-hosts\" (UniqueName: \"kubernetes.io/configmap/aedcd631-92d4-43b5-bea3-b25e0687dcc5-elastic-internal-unicast-hosts\") pod \"elasticsearch-es-default-0\" (UID: \"aedcd631-92d4-43b5-bea3-b25e0687dcc5\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 08 17:56:00 crc kubenswrapper[5116]: I1208 17:56:00.667848 5116 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-elasticsearch-config\" (UniqueName: \"kubernetes.io/secret/aedcd631-92d4-43b5-bea3-b25e0687dcc5-elastic-internal-elasticsearch-config\") pod \"elasticsearch-es-default-0\" (UID: \"aedcd631-92d4-43b5-bea3-b25e0687dcc5\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 08 17:56:00 crc kubenswrapper[5116]: I1208 17:56:00.667869 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elasticsearch-data\" (UniqueName: \"kubernetes.io/empty-dir/aedcd631-92d4-43b5-bea3-b25e0687dcc5-elasticsearch-data\") pod \"elasticsearch-es-default-0\" (UID: \"aedcd631-92d4-43b5-bea3-b25e0687dcc5\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 08 17:56:00 crc kubenswrapper[5116]: I1208 17:56:00.667906 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-transport-certificates\" (UniqueName: \"kubernetes.io/secret/aedcd631-92d4-43b5-bea3-b25e0687dcc5-elastic-internal-transport-certificates\") pod \"elasticsearch-es-default-0\" (UID: \"aedcd631-92d4-43b5-bea3-b25e0687dcc5\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 08 17:56:00 crc kubenswrapper[5116]: I1208 17:56:00.667937 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-xpack-file-realm\" (UniqueName: \"kubernetes.io/secret/aedcd631-92d4-43b5-bea3-b25e0687dcc5-elastic-internal-xpack-file-realm\") pod \"elasticsearch-es-default-0\" (UID: \"aedcd631-92d4-43b5-bea3-b25e0687dcc5\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 08 17:56:00 crc kubenswrapper[5116]: I1208 17:56:00.668013 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elasticsearch-logs\" (UniqueName: \"kubernetes.io/empty-dir/aedcd631-92d4-43b5-bea3-b25e0687dcc5-elasticsearch-logs\") pod 
\"elasticsearch-es-default-0\" (UID: \"aedcd631-92d4-43b5-bea3-b25e0687dcc5\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 08 17:56:00 crc kubenswrapper[5116]: I1208 17:56:00.668043 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/aedcd631-92d4-43b5-bea3-b25e0687dcc5-tmp-volume\") pod \"elasticsearch-es-default-0\" (UID: \"aedcd631-92d4-43b5-bea3-b25e0687dcc5\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 08 17:56:00 crc kubenswrapper[5116]: I1208 17:56:00.668064 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"downward-api\" (UniqueName: \"kubernetes.io/downward-api/aedcd631-92d4-43b5-bea3-b25e0687dcc5-downward-api\") pod \"elasticsearch-es-default-0\" (UID: \"aedcd631-92d4-43b5-bea3-b25e0687dcc5\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 08 17:56:00 crc kubenswrapper[5116]: I1208 17:56:00.668081 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-probe-user\" (UniqueName: \"kubernetes.io/secret/aedcd631-92d4-43b5-bea3-b25e0687dcc5-elastic-internal-probe-user\") pod \"elasticsearch-es-default-0\" (UID: \"aedcd631-92d4-43b5-bea3-b25e0687dcc5\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 08 17:56:00 crc kubenswrapper[5116]: I1208 17:56:00.668106 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-http-certificates\" (UniqueName: \"kubernetes.io/secret/aedcd631-92d4-43b5-bea3-b25e0687dcc5-elastic-internal-http-certificates\") pod \"elasticsearch-es-default-0\" (UID: \"aedcd631-92d4-43b5-bea3-b25e0687dcc5\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 08 17:56:00 crc kubenswrapper[5116]: I1208 17:56:00.668170 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"elastic-internal-elasticsearch-bin-local\" (UniqueName: \"kubernetes.io/empty-dir/aedcd631-92d4-43b5-bea3-b25e0687dcc5-elastic-internal-elasticsearch-bin-local\") pod \"elasticsearch-es-default-0\" (UID: \"aedcd631-92d4-43b5-bea3-b25e0687dcc5\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 08 17:56:00 crc kubenswrapper[5116]: I1208 17:56:00.668187 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-elasticsearch-config-local\" (UniqueName: \"kubernetes.io/empty-dir/aedcd631-92d4-43b5-bea3-b25e0687dcc5-elastic-internal-elasticsearch-config-local\") pod \"elasticsearch-es-default-0\" (UID: \"aedcd631-92d4-43b5-bea3-b25e0687dcc5\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 08 17:56:00 crc kubenswrapper[5116]: I1208 17:56:00.668233 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-scripts\" (UniqueName: \"kubernetes.io/configmap/aedcd631-92d4-43b5-bea3-b25e0687dcc5-elastic-internal-scripts\") pod \"elasticsearch-es-default-0\" (UID: \"aedcd631-92d4-43b5-bea3-b25e0687dcc5\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 08 17:56:00 crc kubenswrapper[5116]: I1208 17:56:00.668269 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-remote-certificate-authorities\" (UniqueName: \"kubernetes.io/secret/aedcd631-92d4-43b5-bea3-b25e0687dcc5-elastic-internal-remote-certificate-authorities\") pod \"elasticsearch-es-default-0\" (UID: \"aedcd631-92d4-43b5-bea3-b25e0687dcc5\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 08 17:56:00 crc kubenswrapper[5116]: I1208 17:56:00.668319 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-elasticsearch-plugins-local\" (UniqueName: 
\"kubernetes.io/empty-dir/aedcd631-92d4-43b5-bea3-b25e0687dcc5-elastic-internal-elasticsearch-plugins-local\") pod \"elasticsearch-es-default-0\" (UID: \"aedcd631-92d4-43b5-bea3-b25e0687dcc5\") " pod="service-telemetry/elasticsearch-es-default-0"
Dec 08 17:56:00 crc kubenswrapper[5116]: I1208 17:56:00.769955 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"elastic-internal-elasticsearch-config-local\" (UniqueName: \"kubernetes.io/empty-dir/aedcd631-92d4-43b5-bea3-b25e0687dcc5-elastic-internal-elasticsearch-config-local\") pod \"elasticsearch-es-default-0\" (UID: \"aedcd631-92d4-43b5-bea3-b25e0687dcc5\") " pod="service-telemetry/elasticsearch-es-default-0"
Dec 08 17:56:00 crc kubenswrapper[5116]: I1208 17:56:00.770059 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"elastic-internal-scripts\" (UniqueName: \"kubernetes.io/configmap/aedcd631-92d4-43b5-bea3-b25e0687dcc5-elastic-internal-scripts\") pod \"elasticsearch-es-default-0\" (UID: \"aedcd631-92d4-43b5-bea3-b25e0687dcc5\") " pod="service-telemetry/elasticsearch-es-default-0"
Dec 08 17:56:00 crc kubenswrapper[5116]: I1208 17:56:00.770105 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"elastic-internal-remote-certificate-authorities\" (UniqueName: \"kubernetes.io/secret/aedcd631-92d4-43b5-bea3-b25e0687dcc5-elastic-internal-remote-certificate-authorities\") pod \"elasticsearch-es-default-0\" (UID: \"aedcd631-92d4-43b5-bea3-b25e0687dcc5\") " pod="service-telemetry/elasticsearch-es-default-0"
Dec 08 17:56:00 crc kubenswrapper[5116]: I1208 17:56:00.770142 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"elastic-internal-elasticsearch-plugins-local\" (UniqueName: \"kubernetes.io/empty-dir/aedcd631-92d4-43b5-bea3-b25e0687dcc5-elastic-internal-elasticsearch-plugins-local\") pod \"elasticsearch-es-default-0\" (UID: \"aedcd631-92d4-43b5-bea3-b25e0687dcc5\") " pod="service-telemetry/elasticsearch-es-default-0"
Dec 08 17:56:00 crc kubenswrapper[5116]: I1208 17:56:00.770180 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"elastic-internal-unicast-hosts\" (UniqueName: \"kubernetes.io/configmap/aedcd631-92d4-43b5-bea3-b25e0687dcc5-elastic-internal-unicast-hosts\") pod \"elasticsearch-es-default-0\" (UID: \"aedcd631-92d4-43b5-bea3-b25e0687dcc5\") " pod="service-telemetry/elasticsearch-es-default-0"
Dec 08 17:56:00 crc kubenswrapper[5116]: I1208 17:56:00.770208 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"elastic-internal-elasticsearch-config\" (UniqueName: \"kubernetes.io/secret/aedcd631-92d4-43b5-bea3-b25e0687dcc5-elastic-internal-elasticsearch-config\") pod \"elasticsearch-es-default-0\" (UID: \"aedcd631-92d4-43b5-bea3-b25e0687dcc5\") " pod="service-telemetry/elasticsearch-es-default-0"
Dec 08 17:56:00 crc kubenswrapper[5116]: I1208 17:56:00.770233 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"elasticsearch-data\" (UniqueName: \"kubernetes.io/empty-dir/aedcd631-92d4-43b5-bea3-b25e0687dcc5-elasticsearch-data\") pod \"elasticsearch-es-default-0\" (UID: \"aedcd631-92d4-43b5-bea3-b25e0687dcc5\") " pod="service-telemetry/elasticsearch-es-default-0"
Dec 08 17:56:00 crc kubenswrapper[5116]: I1208 17:56:00.770304 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"elastic-internal-transport-certificates\" (UniqueName: \"kubernetes.io/secret/aedcd631-92d4-43b5-bea3-b25e0687dcc5-elastic-internal-transport-certificates\") pod \"elasticsearch-es-default-0\" (UID: \"aedcd631-92d4-43b5-bea3-b25e0687dcc5\") " pod="service-telemetry/elasticsearch-es-default-0"
Dec 08 17:56:00 crc kubenswrapper[5116]: I1208 17:56:00.770352 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"elastic-internal-xpack-file-realm\" (UniqueName: \"kubernetes.io/secret/aedcd631-92d4-43b5-bea3-b25e0687dcc5-elastic-internal-xpack-file-realm\") pod \"elasticsearch-es-default-0\" (UID: \"aedcd631-92d4-43b5-bea3-b25e0687dcc5\") " pod="service-telemetry/elasticsearch-es-default-0"
Dec 08 17:56:00 crc kubenswrapper[5116]: I1208 17:56:00.770380 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"elasticsearch-logs\" (UniqueName: \"kubernetes.io/empty-dir/aedcd631-92d4-43b5-bea3-b25e0687dcc5-elasticsearch-logs\") pod \"elasticsearch-es-default-0\" (UID: \"aedcd631-92d4-43b5-bea3-b25e0687dcc5\") " pod="service-telemetry/elasticsearch-es-default-0"
Dec 08 17:56:00 crc kubenswrapper[5116]: I1208 17:56:00.770433 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/aedcd631-92d4-43b5-bea3-b25e0687dcc5-tmp-volume\") pod \"elasticsearch-es-default-0\" (UID: \"aedcd631-92d4-43b5-bea3-b25e0687dcc5\") " pod="service-telemetry/elasticsearch-es-default-0"
Dec 08 17:56:00 crc kubenswrapper[5116]: I1208 17:56:00.770459 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"downward-api\" (UniqueName: \"kubernetes.io/downward-api/aedcd631-92d4-43b5-bea3-b25e0687dcc5-downward-api\") pod \"elasticsearch-es-default-0\" (UID: \"aedcd631-92d4-43b5-bea3-b25e0687dcc5\") " pod="service-telemetry/elasticsearch-es-default-0"
Dec 08 17:56:00 crc kubenswrapper[5116]: I1208 17:56:00.770485 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"elastic-internal-probe-user\" (UniqueName: \"kubernetes.io/secret/aedcd631-92d4-43b5-bea3-b25e0687dcc5-elastic-internal-probe-user\") pod \"elasticsearch-es-default-0\" (UID: \"aedcd631-92d4-43b5-bea3-b25e0687dcc5\") " pod="service-telemetry/elasticsearch-es-default-0"
Dec 08 17:56:00 crc kubenswrapper[5116]: I1208 17:56:00.770521 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"elastic-internal-http-certificates\" (UniqueName: \"kubernetes.io/secret/aedcd631-92d4-43b5-bea3-b25e0687dcc5-elastic-internal-http-certificates\") pod \"elasticsearch-es-default-0\" (UID: \"aedcd631-92d4-43b5-bea3-b25e0687dcc5\") " pod="service-telemetry/elasticsearch-es-default-0"
Dec 08 17:56:00 crc kubenswrapper[5116]: I1208 17:56:00.770601 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"elastic-internal-elasticsearch-bin-local\" (UniqueName: \"kubernetes.io/empty-dir/aedcd631-92d4-43b5-bea3-b25e0687dcc5-elastic-internal-elasticsearch-bin-local\") pod \"elasticsearch-es-default-0\" (UID: \"aedcd631-92d4-43b5-bea3-b25e0687dcc5\") " pod="service-telemetry/elasticsearch-es-default-0"
Dec 08 17:56:00 crc kubenswrapper[5116]: I1208 17:56:00.771293 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"elastic-internal-elasticsearch-bin-local\" (UniqueName: \"kubernetes.io/empty-dir/aedcd631-92d4-43b5-bea3-b25e0687dcc5-elastic-internal-elasticsearch-bin-local\") pod \"elasticsearch-es-default-0\" (UID: \"aedcd631-92d4-43b5-bea3-b25e0687dcc5\") " pod="service-telemetry/elasticsearch-es-default-0"
Dec 08 17:56:00 crc kubenswrapper[5116]: I1208 17:56:00.771545 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"elastic-internal-elasticsearch-config-local\" (UniqueName: \"kubernetes.io/empty-dir/aedcd631-92d4-43b5-bea3-b25e0687dcc5-elastic-internal-elasticsearch-config-local\") pod \"elasticsearch-es-default-0\" (UID: \"aedcd631-92d4-43b5-bea3-b25e0687dcc5\") " pod="service-telemetry/elasticsearch-es-default-0"
Dec 08 17:56:00 crc kubenswrapper[5116]: I1208 17:56:00.771920 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"elastic-internal-elasticsearch-plugins-local\" (UniqueName: \"kubernetes.io/empty-dir/aedcd631-92d4-43b5-bea3-b25e0687dcc5-elastic-internal-elasticsearch-plugins-local\") pod \"elasticsearch-es-default-0\" (UID: \"aedcd631-92d4-43b5-bea3-b25e0687dcc5\") " pod="service-telemetry/elasticsearch-es-default-0"
Dec 08 17:56:00 crc kubenswrapper[5116]: I1208 17:56:00.772329 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"elasticsearch-data\" (UniqueName: \"kubernetes.io/empty-dir/aedcd631-92d4-43b5-bea3-b25e0687dcc5-elasticsearch-data\") pod \"elasticsearch-es-default-0\" (UID: \"aedcd631-92d4-43b5-bea3-b25e0687dcc5\") " pod="service-telemetry/elasticsearch-es-default-0"
Dec 08 17:56:00 crc kubenswrapper[5116]: I1208 17:56:00.772735 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"elasticsearch-logs\" (UniqueName: \"kubernetes.io/empty-dir/aedcd631-92d4-43b5-bea3-b25e0687dcc5-elasticsearch-logs\") pod \"elasticsearch-es-default-0\" (UID: \"aedcd631-92d4-43b5-bea3-b25e0687dcc5\") " pod="service-telemetry/elasticsearch-es-default-0"
Dec 08 17:56:00 crc kubenswrapper[5116]: I1208 17:56:00.772997 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/aedcd631-92d4-43b5-bea3-b25e0687dcc5-tmp-volume\") pod \"elasticsearch-es-default-0\" (UID: \"aedcd631-92d4-43b5-bea3-b25e0687dcc5\") " pod="service-telemetry/elasticsearch-es-default-0"
Dec 08 17:56:00 crc kubenswrapper[5116]: I1208 17:56:00.800486 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"downward-api\" (UniqueName: \"kubernetes.io/downward-api/aedcd631-92d4-43b5-bea3-b25e0687dcc5-downward-api\") pod \"elasticsearch-es-default-0\" (UID: \"aedcd631-92d4-43b5-bea3-b25e0687dcc5\") " pod="service-telemetry/elasticsearch-es-default-0"
Dec 08 17:56:00 crc kubenswrapper[5116]: I1208 17:56:00.800537 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"elasticsearch-es-default-es-transport-certs\""
Dec 08 17:56:00 crc kubenswrapper[5116]: I1208 17:56:00.803289 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"elasticsearch-es-remote-ca\""
Dec 08 17:56:00 crc kubenswrapper[5116]: I1208 17:56:00.803410 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"elasticsearch-es-internal-users\""
Dec 08 17:56:00 crc kubenswrapper[5116]: I1208 17:56:00.804952 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"default-dockercfg-l6dd8\""
Dec 08 17:56:00 crc kubenswrapper[5116]: I1208 17:56:00.805167 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"elasticsearch-es-unicast-hosts\""
Dec 08 17:56:00 crc kubenswrapper[5116]: I1208 17:56:00.805365 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"elasticsearch-es-xpack-file-realm\""
Dec 08 17:56:00 crc kubenswrapper[5116]: I1208 17:56:00.805547 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"elasticsearch-es-scripts\""
Dec 08 17:56:00 crc kubenswrapper[5116]: I1208 17:56:00.809603 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"elastic-internal-unicast-hosts\" (UniqueName: \"kubernetes.io/configmap/aedcd631-92d4-43b5-bea3-b25e0687dcc5-elastic-internal-unicast-hosts\") pod \"elasticsearch-es-default-0\" (UID: \"aedcd631-92d4-43b5-bea3-b25e0687dcc5\") " pod="service-telemetry/elasticsearch-es-default-0"
Dec 08 17:56:00 crc kubenswrapper[5116]: I1208 17:56:00.811961 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"elastic-internal-remote-certificate-authorities\" (UniqueName: \"kubernetes.io/secret/aedcd631-92d4-43b5-bea3-b25e0687dcc5-elastic-internal-remote-certificate-authorities\") pod \"elasticsearch-es-default-0\" (UID: \"aedcd631-92d4-43b5-bea3-b25e0687dcc5\") " pod="service-telemetry/elasticsearch-es-default-0"
Dec 08 17:56:00 crc kubenswrapper[5116]: I1208 17:56:00.813674 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"elastic-internal-scripts\" (UniqueName: \"kubernetes.io/configmap/aedcd631-92d4-43b5-bea3-b25e0687dcc5-elastic-internal-scripts\") pod \"elasticsearch-es-default-0\" (UID: \"aedcd631-92d4-43b5-bea3-b25e0687dcc5\") " pod="service-telemetry/elasticsearch-es-default-0"
Dec 08 17:56:00 crc kubenswrapper[5116]: I1208 17:56:00.814546 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"elastic-internal-transport-certificates\" (UniqueName: \"kubernetes.io/secret/aedcd631-92d4-43b5-bea3-b25e0687dcc5-elastic-internal-transport-certificates\") pod \"elasticsearch-es-default-0\" (UID: \"aedcd631-92d4-43b5-bea3-b25e0687dcc5\") " pod="service-telemetry/elasticsearch-es-default-0"
Dec 08 17:56:00 crc kubenswrapper[5116]: I1208 17:56:00.824796 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"elastic-internal-xpack-file-realm\" (UniqueName: \"kubernetes.io/secret/aedcd631-92d4-43b5-bea3-b25e0687dcc5-elastic-internal-xpack-file-realm\") pod \"elasticsearch-es-default-0\" (UID: \"aedcd631-92d4-43b5-bea3-b25e0687dcc5\") " pod="service-telemetry/elasticsearch-es-default-0"
Dec 08 17:56:00 crc kubenswrapper[5116]: I1208 17:56:00.824800 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"elasticsearch-es-default-es-config\""
Dec 08 17:56:00 crc kubenswrapper[5116]: I1208 17:56:00.825776 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"elasticsearch-es-http-certs-internal\""
Dec 08 17:56:00 crc kubenswrapper[5116]: I1208 17:56:00.829191 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"elastic-internal-probe-user\" (UniqueName: \"kubernetes.io/secret/aedcd631-92d4-43b5-bea3-b25e0687dcc5-elastic-internal-probe-user\") pod \"elasticsearch-es-default-0\" (UID: \"aedcd631-92d4-43b5-bea3-b25e0687dcc5\") " pod="service-telemetry/elasticsearch-es-default-0"
Dec 08 17:56:00 crc kubenswrapper[5116]: I1208 17:56:00.837751 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"elastic-internal-elasticsearch-config\" (UniqueName: \"kubernetes.io/secret/aedcd631-92d4-43b5-bea3-b25e0687dcc5-elastic-internal-elasticsearch-config\") pod \"elasticsearch-es-default-0\" (UID: \"aedcd631-92d4-43b5-bea3-b25e0687dcc5\") " pod="service-telemetry/elasticsearch-es-default-0"
Dec 08 17:56:00 crc kubenswrapper[5116]: I1208 17:56:00.842429 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"elastic-internal-http-certificates\" (UniqueName: \"kubernetes.io/secret/aedcd631-92d4-43b5-bea3-b25e0687dcc5-elastic-internal-http-certificates\") pod \"elasticsearch-es-default-0\" (UID: \"aedcd631-92d4-43b5-bea3-b25e0687dcc5\") " pod="service-telemetry/elasticsearch-es-default-0"
Dec 08 17:56:00 crc kubenswrapper[5116]: I1208 17:56:00.847125 5116 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/elasticsearch-es-default-0"]
Dec 08 17:56:00 crc kubenswrapper[5116]: I1208 17:56:00.851744 5116 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/elasticsearch-es-default-0"
Dec 08 17:56:00 crc kubenswrapper[5116]: I1208 17:56:00.908592 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-mwwq4\" (UniqueName: \"kubernetes.io/projected/b1eeeb05-d274-4ebd-bdf6-2159bcf65ec3-kube-api-access-mwwq4\") pod \"cert-manager-operator-controller-manager-64c74584c4-hdw8c\" (UID: \"b1eeeb05-d274-4ebd-bdf6-2159bcf65ec3\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-64c74584c4-hdw8c"
Dec 08 17:56:01 crc kubenswrapper[5116]: I1208 17:56:01.019895 5116 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager-operator/cert-manager-operator-controller-manager-64c74584c4-hdw8c"
Dec 08 17:56:03 crc kubenswrapper[5116]: I1208 17:56:03.335553 5116 patch_prober.go:28] interesting pod/machine-config-daemon-frh5r container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Dec 08 17:56:03 crc kubenswrapper[5116]: I1208 17:56:03.335985 5116 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-frh5r" podUID="f2e88345-fa91-4bb3-bd9d-a89a8293bffe" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Dec 08 17:56:03 crc kubenswrapper[5116]: I1208 17:56:03.336059 5116 kubelet.go:2658] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-frh5r"
Dec 08 17:56:03 crc kubenswrapper[5116]: I1208 17:56:03.336836 5116 kuberuntime_manager.go:1107] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"a8c9ea73a3f3a6aeb913be43880595d7b2a74416932fa51f8351d035f08e4a16"} pod="openshift-machine-config-operator/machine-config-daemon-frh5r" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Dec 08 17:56:03 crc kubenswrapper[5116]: I1208 17:56:03.336909 5116 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-frh5r" podUID="f2e88345-fa91-4bb3-bd9d-a89a8293bffe" containerName="machine-config-daemon" containerID="cri-o://a8c9ea73a3f3a6aeb913be43880595d7b2a74416932fa51f8351d035f08e4a16" gracePeriod=600
Dec 08 17:56:05 crc kubenswrapper[5116]: I1208 17:56:05.561021 5116 generic.go:358] "Generic (PLEG): container finished" podID="f2e88345-fa91-4bb3-bd9d-a89a8293bffe" containerID="a8c9ea73a3f3a6aeb913be43880595d7b2a74416932fa51f8351d035f08e4a16" exitCode=0
Dec 08 17:56:05 crc kubenswrapper[5116]: I1208 17:56:05.561099 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-frh5r" event={"ID":"f2e88345-fa91-4bb3-bd9d-a89a8293bffe","Type":"ContainerDied","Data":"a8c9ea73a3f3a6aeb913be43880595d7b2a74416932fa51f8351d035f08e4a16"}
Dec 08 17:56:05 crc kubenswrapper[5116]: I1208 17:56:05.561548 5116 scope.go:117] "RemoveContainer" containerID="59b8c5ef8a713cbaa73c58d18697f0e38bfd14c8ab9516c06d59c3d9022ca4ac"
Dec 08 17:56:07 crc kubenswrapper[5116]: I1208 17:56:07.522901 5116 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-image-registry/image-registry-66587d64c8-kt94l" podUID="83ea28a4-865d-4cee-aaa2-7adcccfba4a2" containerName="registry" containerID="cri-o://c2b3625d21c4386406d248c280fd04fea16bd3f16d82a3d1e8526bad45d5bb71" gracePeriod=30
Dec 08 17:56:08 crc kubenswrapper[5116]: I1208 17:56:08.584007 5116 generic.go:358] "Generic (PLEG): container finished" podID="83ea28a4-865d-4cee-aaa2-7adcccfba4a2" containerID="c2b3625d21c4386406d248c280fd04fea16bd3f16d82a3d1e8526bad45d5bb71" exitCode=0
Dec 08 17:56:08 crc kubenswrapper[5116]: I1208 17:56:08.584064 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66587d64c8-kt94l" event={"ID":"83ea28a4-865d-4cee-aaa2-7adcccfba4a2","Type":"ContainerDied","Data":"c2b3625d21c4386406d248c280fd04fea16bd3f16d82a3d1e8526bad45d5bb71"}
Dec 08 17:56:11 crc kubenswrapper[5116]: I1208 17:56:11.675549 5116 patch_prober.go:28] interesting pod/image-registry-66587d64c8-kt94l container/registry namespace/openshift-image-registry: Readiness probe status=failure output="Get \"https://10.217.0.18:5000/healthz\": dial tcp 10.217.0.18:5000: connect: connection refused" start-of-body=
Dec 08 17:56:11 crc kubenswrapper[5116]: I1208 17:56:11.675926 5116 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-image-registry/image-registry-66587d64c8-kt94l" podUID="83ea28a4-865d-4cee-aaa2-7adcccfba4a2" containerName="registry" probeResult="failure" output="Get \"https://10.217.0.18:5000/healthz\": dial tcp 10.217.0.18:5000: connect: connection refused"
Dec 08 17:56:14 crc kubenswrapper[5116]: I1208 17:56:14.046038 5116 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-66587d64c8-kt94l"
Dec 08 17:56:14 crc kubenswrapper[5116]: I1208 17:56:14.135612 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/83ea28a4-865d-4cee-aaa2-7adcccfba4a2-bound-sa-token\") pod \"83ea28a4-865d-4cee-aaa2-7adcccfba4a2\" (UID: \"83ea28a4-865d-4cee-aaa2-7adcccfba4a2\") "
Dec 08 17:56:14 crc kubenswrapper[5116]: I1208 17:56:14.136073 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/83ea28a4-865d-4cee-aaa2-7adcccfba4a2-installation-pull-secrets\") pod \"83ea28a4-865d-4cee-aaa2-7adcccfba4a2\" (UID: \"83ea28a4-865d-4cee-aaa2-7adcccfba4a2\") "
Dec 08 17:56:14 crc kubenswrapper[5116]: I1208 17:56:14.136096 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/83ea28a4-865d-4cee-aaa2-7adcccfba4a2-registry-tls\") pod \"83ea28a4-865d-4cee-aaa2-7adcccfba4a2\" (UID: \"83ea28a4-865d-4cee-aaa2-7adcccfba4a2\") "
Dec 08 17:56:14 crc kubenswrapper[5116]: I1208 17:56:14.137726 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"registry-storage\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"83ea28a4-865d-4cee-aaa2-7adcccfba4a2\" (UID: \"83ea28a4-865d-4cee-aaa2-7adcccfba4a2\") "
Dec 08 17:56:14 crc kubenswrapper[5116]: I1208 17:56:14.137841 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/83ea28a4-865d-4cee-aaa2-7adcccfba4a2-ca-trust-extracted\") pod \"83ea28a4-865d-4cee-aaa2-7adcccfba4a2\" (UID: \"83ea28a4-865d-4cee-aaa2-7adcccfba4a2\") "
Dec 08 17:56:14 crc kubenswrapper[5116]: I1208 17:56:14.137973 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/83ea28a4-865d-4cee-aaa2-7adcccfba4a2-registry-certificates\") pod \"83ea28a4-865d-4cee-aaa2-7adcccfba4a2\" (UID: \"83ea28a4-865d-4cee-aaa2-7adcccfba4a2\") "
Dec 08 17:56:14 crc kubenswrapper[5116]: I1208 17:56:14.138095 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/83ea28a4-865d-4cee-aaa2-7adcccfba4a2-trusted-ca\") pod \"83ea28a4-865d-4cee-aaa2-7adcccfba4a2\" (UID: \"83ea28a4-865d-4cee-aaa2-7adcccfba4a2\") "
Dec 08 17:56:14 crc kubenswrapper[5116]: I1208 17:56:14.138996 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/83ea28a4-865d-4cee-aaa2-7adcccfba4a2-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "83ea28a4-865d-4cee-aaa2-7adcccfba4a2" (UID: "83ea28a4-865d-4cee-aaa2-7adcccfba4a2"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Dec 08 17:56:14 crc kubenswrapper[5116]: I1208 17:56:14.139074 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/83ea28a4-865d-4cee-aaa2-7adcccfba4a2-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "83ea28a4-865d-4cee-aaa2-7adcccfba4a2" (UID: "83ea28a4-865d-4cee-aaa2-7adcccfba4a2"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Dec 08 17:56:14 crc kubenswrapper[5116]: I1208 17:56:14.139145 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wfsdr\" (UniqueName: \"kubernetes.io/projected/83ea28a4-865d-4cee-aaa2-7adcccfba4a2-kube-api-access-wfsdr\") pod \"83ea28a4-865d-4cee-aaa2-7adcccfba4a2\" (UID: \"83ea28a4-865d-4cee-aaa2-7adcccfba4a2\") "
Dec 08 17:56:14 crc kubenswrapper[5116]: I1208 17:56:14.139739 5116 reconciler_common.go:299] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/83ea28a4-865d-4cee-aaa2-7adcccfba4a2-registry-certificates\") on node \"crc\" DevicePath \"\""
Dec 08 17:56:14 crc kubenswrapper[5116]: I1208 17:56:14.139760 5116 reconciler_common.go:299] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/83ea28a4-865d-4cee-aaa2-7adcccfba4a2-trusted-ca\") on node \"crc\" DevicePath \"\""
Dec 08 17:56:14 crc kubenswrapper[5116]: I1208 17:56:14.157500 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/83ea28a4-865d-4cee-aaa2-7adcccfba4a2-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "83ea28a4-865d-4cee-aaa2-7adcccfba4a2" (UID: "83ea28a4-865d-4cee-aaa2-7adcccfba4a2"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Dec 08 17:56:14 crc kubenswrapper[5116]: I1208 17:56:14.161527 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/83ea28a4-865d-4cee-aaa2-7adcccfba4a2-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "83ea28a4-865d-4cee-aaa2-7adcccfba4a2" (UID: "83ea28a4-865d-4cee-aaa2-7adcccfba4a2"). InnerVolumeSpecName "ca-trust-extracted". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Dec 08 17:56:14 crc kubenswrapper[5116]: I1208 17:56:14.162718 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/83ea28a4-865d-4cee-aaa2-7adcccfba4a2-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "83ea28a4-865d-4cee-aaa2-7adcccfba4a2" (UID: "83ea28a4-865d-4cee-aaa2-7adcccfba4a2"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Dec 08 17:56:14 crc kubenswrapper[5116]: I1208 17:56:14.171708 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/83ea28a4-865d-4cee-aaa2-7adcccfba4a2-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "83ea28a4-865d-4cee-aaa2-7adcccfba4a2" (UID: "83ea28a4-865d-4cee-aaa2-7adcccfba4a2"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Dec 08 17:56:14 crc kubenswrapper[5116]: I1208 17:56:14.171862 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/83ea28a4-865d-4cee-aaa2-7adcccfba4a2-kube-api-access-wfsdr" (OuterVolumeSpecName: "kube-api-access-wfsdr") pod "83ea28a4-865d-4cee-aaa2-7adcccfba4a2" (UID: "83ea28a4-865d-4cee-aaa2-7adcccfba4a2"). InnerVolumeSpecName "kube-api-access-wfsdr". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Dec 08 17:56:14 crc kubenswrapper[5116]: I1208 17:56:14.174417 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (OuterVolumeSpecName: "registry-storage") pod "83ea28a4-865d-4cee-aaa2-7adcccfba4a2" (UID: "83ea28a4-865d-4cee-aaa2-7adcccfba4a2"). InnerVolumeSpecName "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2". PluginName "kubernetes.io/csi", VolumeGIDValue ""
Dec 08 17:56:14 crc kubenswrapper[5116]: I1208 17:56:14.242079 5116 reconciler_common.go:299] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/83ea28a4-865d-4cee-aaa2-7adcccfba4a2-installation-pull-secrets\") on node \"crc\" DevicePath \"\""
Dec 08 17:56:14 crc kubenswrapper[5116]: I1208 17:56:14.242321 5116 reconciler_common.go:299] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/83ea28a4-865d-4cee-aaa2-7adcccfba4a2-registry-tls\") on node \"crc\" DevicePath \"\""
Dec 08 17:56:14 crc kubenswrapper[5116]: I1208 17:56:14.242331 5116 reconciler_common.go:299] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/83ea28a4-865d-4cee-aaa2-7adcccfba4a2-ca-trust-extracted\") on node \"crc\" DevicePath \"\""
Dec 08 17:56:14 crc kubenswrapper[5116]: I1208 17:56:14.242344 5116 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-wfsdr\" (UniqueName: \"kubernetes.io/projected/83ea28a4-865d-4cee-aaa2-7adcccfba4a2-kube-api-access-wfsdr\") on node \"crc\" DevicePath \"\""
Dec 08 17:56:14 crc kubenswrapper[5116]: I1208 17:56:14.242353 5116 reconciler_common.go:299] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/83ea28a4-865d-4cee-aaa2-7adcccfba4a2-bound-sa-token\") on node \"crc\" DevicePath \"\""
Dec 08 17:56:14 crc kubenswrapper[5116]: I1208 17:56:14.427018 5116 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["cert-manager-operator/cert-manager-operator-controller-manager-64c74584c4-hdw8c"]
Dec 08 17:56:14 crc kubenswrapper[5116]: I1208 17:56:14.641161 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66587d64c8-kt94l" event={"ID":"83ea28a4-865d-4cee-aaa2-7adcccfba4a2","Type":"ContainerDied","Data":"73d3b452b364e7f22c40a473ee8cad121c9a0ed3d536051a00c04368cd6e6b80"}
Dec 08 17:56:14 crc kubenswrapper[5116]: I1208 17:56:14.641278 5116 scope.go:117] "RemoveContainer" containerID="c2b3625d21c4386406d248c280fd04fea16bd3f16d82a3d1e8526bad45d5bb71"
Dec 08 17:56:14 crc kubenswrapper[5116]: I1208 17:56:14.641636 5116 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-66587d64c8-kt94l"
Dec 08 17:56:14 crc kubenswrapper[5116]: I1208 17:56:14.659642 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-frh5r" event={"ID":"f2e88345-fa91-4bb3-bd9d-a89a8293bffe","Type":"ContainerStarted","Data":"268f636f479af3afc0e2d68841471d42bc01301ec891f325abd7818c7af95e59"}
Dec 08 17:56:14 crc kubenswrapper[5116]: I1208 17:56:14.662523 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="cert-manager-operator/cert-manager-operator-controller-manager-64c74584c4-hdw8c" event={"ID":"b1eeeb05-d274-4ebd-bdf6-2159bcf65ec3","Type":"ContainerStarted","Data":"2bd4544336225a705e9c8dcfa345b3a0db422dd544b7d2f0fb8fc11c1d0f6758"}
Dec 08 17:56:14 crc kubenswrapper[5116]: I1208 17:56:14.728195 5116 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-66587d64c8-kt94l"]
Dec 08 17:56:14 crc kubenswrapper[5116]: I1208 17:56:14.738422 5116 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-image-registry/image-registry-66587d64c8-kt94l"]
Dec 08 17:56:14 crc kubenswrapper[5116]: I1208 17:56:14.813810 5116 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/elasticsearch-es-default-0"]
Dec 08 17:56:15 crc kubenswrapper[5116]: I1208 17:56:15.679650 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-8594b6556b-8k8tn" event={"ID":"1b1c0211-f4fe-4f3e-ba35-4537f470e6b1","Type":"ContainerStarted","Data":"de5f91d9dbdcfd0695e2af3424ee33b465ce7a6717ed4b42af43c921204eae0c"}
Dec 08 17:56:15 crc kubenswrapper[5116]: I1208 17:56:15.688126 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operators/perses-operator-68bdb49cbf-cfwmd" event={"ID":"c51aec57-1a81-4f4e-bdcf-6e9da302affb","Type":"ContainerStarted","Data":"368203604ece2727f074be31163232fb792e125faf99f3b996f959dcd43531b8"}
Dec 08 17:56:15 crc kubenswrapper[5116]: I1208 17:56:15.688850 5116 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-operators/perses-operator-68bdb49cbf-cfwmd"
Dec 08 17:56:15 crc kubenswrapper[5116]: I1208 17:56:15.690949 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-86648f486b-vwfcf" event={"ID":"9580fca6-8837-4c17-a2f2-7ff29b31d7d7","Type":"ContainerStarted","Data":"ee3aabe0e943e375f749a3c2c548f910be28c5b714a421e336b5beb24c0badb3"}
Dec 08 17:56:15 crc kubenswrapper[5116]: I1208 17:56:15.696226 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operators/observability-operator-78c97476f4-nbxvs" event={"ID":"b9b3a903-b162-44c2-9dba-a93c2dd8db40","Type":"ContainerStarted","Data":"b0d67b2f58f6d252b829d2457e7601fa5845f48acb746b64ffccda3a68230da8"}
Dec 08 17:56:15 crc kubenswrapper[5116]: I1208 17:56:15.697027 5116 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-operators/observability-operator-78c97476f4-nbxvs"
Dec 08 17:56:15 crc kubenswrapper[5116]: I1208 17:56:15.706059 5116 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-admission-webhook-8594b6556b-8k8tn" podStartSLOduration=3.99520614 podStartE2EDuration="34.706034989s" podCreationTimestamp="2025-12-08 17:55:41 +0000 UTC" firstStartedPulling="2025-12-08 17:55:43.302872457 +0000 UTC m=+813.099995691" lastFinishedPulling="2025-12-08 17:56:14.013701296 +0000 UTC m=+843.810824540" observedRunningTime="2025-12-08 17:56:15.706031278 +0000 UTC m=+845.503154512" watchObservedRunningTime="2025-12-08 17:56:15.706034989 +0000 UTC m=+845.503158223"
Dec 08 17:56:15 crc kubenswrapper[5116]: I1208 17:56:15.708039 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-8594b6556b-xlx56" event={"ID":"6a29e1f6-c9e5-4414-8e7e-7a4948580b8e","Type":"ContainerStarted","Data":"e71eb766f73e7f09136132b582ca8cf8fbe57c3bbe2098d04ff0a7bd0d449a9b"}
Dec 08 17:56:15 crc kubenswrapper[5116]: I1208 17:56:15.722140 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/elasticsearch-es-default-0" event={"ID":"aedcd631-92d4-43b5-bea3-b25e0687dcc5","Type":"ContainerStarted","Data":"1633b9fd58ddd7dbb1fbbe3e652bdba5e0580739ea382ecdc14966de9fe4a366"}
Dec 08 17:56:15 crc kubenswrapper[5116]: I1208 17:56:15.741822 5116 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/perses-operator-68bdb49cbf-cfwmd" podStartSLOduration=4.597964074 podStartE2EDuration="34.741797168s" podCreationTimestamp="2025-12-08 17:55:41 +0000 UTC" firstStartedPulling="2025-12-08 17:55:43.956191043 +0000 UTC m=+813.753314277" lastFinishedPulling="2025-12-08 17:56:14.100024147 +0000 UTC m=+843.897147371" observedRunningTime="2025-12-08 17:56:15.737859253 +0000 UTC m=+845.534982497" watchObservedRunningTime="2025-12-08 17:56:15.741797168 +0000 UTC m=+845.538920402"
Dec 08 17:56:15 crc kubenswrapper[5116]: I1208 17:56:15.767673 5116 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/observability-operator-78c97476f4-nbxvs" podStartSLOduration=4.673539822 podStartE2EDuration="34.767656315s" podCreationTimestamp="2025-12-08 17:55:41 +0000 UTC" firstStartedPulling="2025-12-08 17:55:43.946544088 +0000 UTC m=+813.743667332" lastFinishedPulling="2025-12-08 17:56:14.040660591 +0000 UTC m=+843.837783825" observedRunningTime="2025-12-08 17:56:15.766559015 +0000 UTC m=+845.563682259" watchObservedRunningTime="2025-12-08 17:56:15.767656315 +0000 UTC m=+845.564779539"
Dec 08 17:56:15 crc kubenswrapper[5116]: I1208 17:56:15.820907 5116 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-admission-webhook-8594b6556b-xlx56" podStartSLOduration=4.136135361 podStartE2EDuration="34.820869307s" podCreationTimestamp="2025-12-08 17:55:41 +0000 UTC" firstStartedPulling="2025-12-08 17:55:43.503954766 +0000 UTC m=+813.301078000" lastFinishedPulling="2025-12-08 17:56:14.188688712 +0000 UTC m=+843.985811946" observedRunningTime="2025-12-08 17:56:15.813926803 +0000 UTC m=+845.611050047" watchObservedRunningTime="2025-12-08 17:56:15.820869307 +0000 UTC m=+845.617992541"
Dec 08 17:56:15 crc kubenswrapper[5116]: I1208 17:56:15.823813 5116 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-86648f486b-vwfcf" podStartSLOduration=3.9896311620000002 podStartE2EDuration="34.823768514s" podCreationTimestamp="2025-12-08 17:55:41 +0000 UTC" firstStartedPulling="2025-12-08 17:55:43.179530883 +0000 UTC m=+812.976654117" lastFinishedPulling="2025-12-08 17:56:14.013668235 +0000 UTC m=+843.810791469" observedRunningTime="2025-12-08 17:56:15.791551369 +0000 UTC m=+845.588674603" watchObservedRunningTime="2025-12-08 17:56:15.823768514 +0000 UTC m=+845.620891758"
Dec 08 17:56:16 crc kubenswrapper[5116]: I1208 17:56:16.177361 5116 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operators/observability-operator-78c97476f4-nbxvs"
Dec 08 17:56:16 crc kubenswrapper[5116]: I1208 17:56:16.688934 5116 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="83ea28a4-865d-4cee-aaa2-7adcccfba4a2" path="/var/lib/kubelet/pods/83ea28a4-865d-4cee-aaa2-7adcccfba4a2/volumes"
Dec 08 17:56:22 crc kubenswrapper[5116]: I1208 17:56:22.927832 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="cert-manager-operator/cert-manager-operator-controller-manager-64c74584c4-hdw8c" event={"ID":"b1eeeb05-d274-4ebd-bdf6-2159bcf65ec3","Type":"ContainerStarted","Data":"5804db842149c508f439223141ca8e70ba0e8829cc1ed2175e7962a30fb7d290"}
Dec 08 17:56:22 crc kubenswrapper[5116]: I1208 17:56:22.953848 5116 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager-operator/cert-manager-operator-controller-manager-64c74584c4-hdw8c" podStartSLOduration=16.19609247 podStartE2EDuration="23.953826141s" podCreationTimestamp="2025-12-08 17:55:59 +0000 UTC" firstStartedPulling="2025-12-08 17:56:14.501397984 +0000 UTC m=+844.298521218" lastFinishedPulling="2025-12-08 17:56:22.259131655 +0000 UTC m=+852.056254889" observedRunningTime="2025-12-08 17:56:22.953736179 +0000 UTC m=+852.750859423" watchObservedRunningTime="2025-12-08 17:56:22.953826141 +0000 UTC m=+852.750949375"
Dec 08 17:56:24 crc kubenswrapper[5116]: I1208 17:56:24.760102 5116 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-webhook-7894b5b9b4-jvn8w"]
Dec 08 17:56:24 crc kubenswrapper[5116]: I1208 17:56:24.761637 5116 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="83ea28a4-865d-4cee-aaa2-7adcccfba4a2" containerName="registry"
Dec 08 17:56:24 crc kubenswrapper[5116]: I1208 17:56:24.761666 5116 state_mem.go:107] "Deleted CPUSet assignment" podUID="83ea28a4-865d-4cee-aaa2-7adcccfba4a2" containerName="registry"
Dec 08 17:56:24 crc kubenswrapper[5116]: I1208 17:56:24.761802 5116 memory_manager.go:356] "RemoveStaleState removing state" podUID="83ea28a4-865d-4cee-aaa2-7adcccfba4a2" containerName="registry"
Dec 08 17:56:24 crc kubenswrapper[5116]: I1208 17:56:24.765439 5116 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-webhook-7894b5b9b4-jvn8w"
Dec 08 17:56:24 crc kubenswrapper[5116]: I1208 17:56:24.773991 5116 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-7894b5b9b4-jvn8w"]
Dec 08 17:56:24 crc kubenswrapper[5116]: I1208 17:56:24.776087 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"cert-manager\"/\"openshift-service-ca.crt\""
Dec 08 17:56:24 crc kubenswrapper[5116]: I1208 17:56:24.776698 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"cert-manager\"/\"cert-manager-webhook-dockercfg-r5npz\""
Dec 08 17:56:24 crc kubenswrapper[5116]: I1208 17:56:24.776709 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"cert-manager\"/\"kube-root-ca.crt\""
Dec 08 17:56:24 crc kubenswrapper[5116]: I1208 17:56:24.860146 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/af8f318a-c8e4-46d7-8cbc-2606038ba7d6-bound-sa-token\") pod \"cert-manager-webhook-7894b5b9b4-jvn8w\" (UID: \"af8f318a-c8e4-46d7-8cbc-2606038ba7d6\") " pod="cert-manager/cert-manager-webhook-7894b5b9b4-jvn8w"
Dec 08 17:56:24 crc kubenswrapper[5116]: I1208 17:56:24.860265 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9g2xh\" (UniqueName: \"kubernetes.io/projected/af8f318a-c8e4-46d7-8cbc-2606038ba7d6-kube-api-access-9g2xh\") pod \"cert-manager-webhook-7894b5b9b4-jvn8w\" (UID: \"af8f318a-c8e4-46d7-8cbc-2606038ba7d6\") " pod="cert-manager/cert-manager-webhook-7894b5b9b4-jvn8w"
Dec 08 17:56:24 crc kubenswrapper[5116]: I1208 17:56:24.962307 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/af8f318a-c8e4-46d7-8cbc-2606038ba7d6-bound-sa-token\") pod
\"cert-manager-webhook-7894b5b9b4-jvn8w\" (UID: \"af8f318a-c8e4-46d7-8cbc-2606038ba7d6\") " pod="cert-manager/cert-manager-webhook-7894b5b9b4-jvn8w" Dec 08 17:56:24 crc kubenswrapper[5116]: I1208 17:56:24.962371 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-9g2xh\" (UniqueName: \"kubernetes.io/projected/af8f318a-c8e4-46d7-8cbc-2606038ba7d6-kube-api-access-9g2xh\") pod \"cert-manager-webhook-7894b5b9b4-jvn8w\" (UID: \"af8f318a-c8e4-46d7-8cbc-2606038ba7d6\") " pod="cert-manager/cert-manager-webhook-7894b5b9b4-jvn8w" Dec 08 17:56:25 crc kubenswrapper[5116]: I1208 17:56:25.016514 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/af8f318a-c8e4-46d7-8cbc-2606038ba7d6-bound-sa-token\") pod \"cert-manager-webhook-7894b5b9b4-jvn8w\" (UID: \"af8f318a-c8e4-46d7-8cbc-2606038ba7d6\") " pod="cert-manager/cert-manager-webhook-7894b5b9b4-jvn8w" Dec 08 17:56:25 crc kubenswrapper[5116]: I1208 17:56:25.017717 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-9g2xh\" (UniqueName: \"kubernetes.io/projected/af8f318a-c8e4-46d7-8cbc-2606038ba7d6-kube-api-access-9g2xh\") pod \"cert-manager-webhook-7894b5b9b4-jvn8w\" (UID: \"af8f318a-c8e4-46d7-8cbc-2606038ba7d6\") " pod="cert-manager/cert-manager-webhook-7894b5b9b4-jvn8w" Dec 08 17:56:25 crc kubenswrapper[5116]: I1208 17:56:25.258931 5116 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-webhook-7894b5b9b4-jvn8w" Dec 08 17:56:27 crc kubenswrapper[5116]: I1208 17:56:27.325062 5116 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-cainjector-7dbf76d5c8-2rp9r"] Dec 08 17:56:27 crc kubenswrapper[5116]: I1208 17:56:27.520169 5116 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-7dbf76d5c8-2rp9r"] Dec 08 17:56:27 crc kubenswrapper[5116]: I1208 17:56:27.520357 5116 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-7dbf76d5c8-2rp9r" Dec 08 17:56:27 crc kubenswrapper[5116]: I1208 17:56:27.522885 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"cert-manager\"/\"cert-manager-cainjector-dockercfg-qkzkm\"" Dec 08 17:56:27 crc kubenswrapper[5116]: I1208 17:56:27.602897 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c9fsh\" (UniqueName: \"kubernetes.io/projected/e4bffed4-db06-443d-b001-fbc23ed74358-kube-api-access-c9fsh\") pod \"cert-manager-cainjector-7dbf76d5c8-2rp9r\" (UID: \"e4bffed4-db06-443d-b001-fbc23ed74358\") " pod="cert-manager/cert-manager-cainjector-7dbf76d5c8-2rp9r" Dec 08 17:56:27 crc kubenswrapper[5116]: I1208 17:56:27.603072 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/e4bffed4-db06-443d-b001-fbc23ed74358-bound-sa-token\") pod \"cert-manager-cainjector-7dbf76d5c8-2rp9r\" (UID: \"e4bffed4-db06-443d-b001-fbc23ed74358\") " pod="cert-manager/cert-manager-cainjector-7dbf76d5c8-2rp9r" Dec 08 17:56:27 crc kubenswrapper[5116]: I1208 17:56:27.704892 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/e4bffed4-db06-443d-b001-fbc23ed74358-bound-sa-token\") pod 
\"cert-manager-cainjector-7dbf76d5c8-2rp9r\" (UID: \"e4bffed4-db06-443d-b001-fbc23ed74358\") " pod="cert-manager/cert-manager-cainjector-7dbf76d5c8-2rp9r" Dec 08 17:56:27 crc kubenswrapper[5116]: I1208 17:56:27.705027 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-c9fsh\" (UniqueName: \"kubernetes.io/projected/e4bffed4-db06-443d-b001-fbc23ed74358-kube-api-access-c9fsh\") pod \"cert-manager-cainjector-7dbf76d5c8-2rp9r\" (UID: \"e4bffed4-db06-443d-b001-fbc23ed74358\") " pod="cert-manager/cert-manager-cainjector-7dbf76d5c8-2rp9r" Dec 08 17:56:27 crc kubenswrapper[5116]: I1208 17:56:27.726712 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-c9fsh\" (UniqueName: \"kubernetes.io/projected/e4bffed4-db06-443d-b001-fbc23ed74358-kube-api-access-c9fsh\") pod \"cert-manager-cainjector-7dbf76d5c8-2rp9r\" (UID: \"e4bffed4-db06-443d-b001-fbc23ed74358\") " pod="cert-manager/cert-manager-cainjector-7dbf76d5c8-2rp9r" Dec 08 17:56:27 crc kubenswrapper[5116]: I1208 17:56:27.728069 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/e4bffed4-db06-443d-b001-fbc23ed74358-bound-sa-token\") pod \"cert-manager-cainjector-7dbf76d5c8-2rp9r\" (UID: \"e4bffed4-db06-443d-b001-fbc23ed74358\") " pod="cert-manager/cert-manager-cainjector-7dbf76d5c8-2rp9r" Dec 08 17:56:27 crc kubenswrapper[5116]: I1208 17:56:27.757727 5116 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operators/perses-operator-68bdb49cbf-cfwmd" Dec 08 17:56:27 crc kubenswrapper[5116]: I1208 17:56:27.842784 5116 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-cainjector-7dbf76d5c8-2rp9r" Dec 08 17:56:36 crc kubenswrapper[5116]: I1208 17:56:36.260466 5116 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-7dbf76d5c8-2rp9r"] Dec 08 17:56:36 crc kubenswrapper[5116]: I1208 17:56:36.570265 5116 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-7894b5b9b4-jvn8w"] Dec 08 17:56:36 crc kubenswrapper[5116]: W1208 17:56:36.588480 5116 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podaf8f318a_c8e4_46d7_8cbc_2606038ba7d6.slice/crio-f94b303ed25f75f8dd69f36d2d8e52b06c8114238048d6843d123dcde1354798 WatchSource:0}: Error finding container f94b303ed25f75f8dd69f36d2d8e52b06c8114238048d6843d123dcde1354798: Status 404 returned error can't find the container with id f94b303ed25f75f8dd69f36d2d8e52b06c8114238048d6843d123dcde1354798 Dec 08 17:56:37 crc kubenswrapper[5116]: I1208 17:56:37.253301 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/elasticsearch-es-default-0" event={"ID":"aedcd631-92d4-43b5-bea3-b25e0687dcc5","Type":"ContainerStarted","Data":"cd1703b9a792293b8f16a21412624dfc096ba98da6b1e2decda11fe21eb3e69f"} Dec 08 17:56:37 crc kubenswrapper[5116]: I1208 17:56:37.256671 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-7dbf76d5c8-2rp9r" event={"ID":"e4bffed4-db06-443d-b001-fbc23ed74358","Type":"ContainerStarted","Data":"7b4dde711d46a379b4a358d325a96b0621394cecbdec488a7431efa427ce6d5e"} Dec 08 17:56:37 crc kubenswrapper[5116]: I1208 17:56:37.258489 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-7894b5b9b4-jvn8w" event={"ID":"af8f318a-c8e4-46d7-8cbc-2606038ba7d6","Type":"ContainerStarted","Data":"f94b303ed25f75f8dd69f36d2d8e52b06c8114238048d6843d123dcde1354798"} Dec 08 17:56:37 crc kubenswrapper[5116]: 
I1208 17:56:37.445926 5116 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/elasticsearch-es-default-0"] Dec 08 17:56:37 crc kubenswrapper[5116]: I1208 17:56:37.477426 5116 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/elasticsearch-es-default-0"] Dec 08 17:56:39 crc kubenswrapper[5116]: I1208 17:56:39.273263 5116 generic.go:358] "Generic (PLEG): container finished" podID="aedcd631-92d4-43b5-bea3-b25e0687dcc5" containerID="cd1703b9a792293b8f16a21412624dfc096ba98da6b1e2decda11fe21eb3e69f" exitCode=0 Dec 08 17:56:39 crc kubenswrapper[5116]: I1208 17:56:39.273443 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/elasticsearch-es-default-0" event={"ID":"aedcd631-92d4-43b5-bea3-b25e0687dcc5","Type":"ContainerDied","Data":"cd1703b9a792293b8f16a21412624dfc096ba98da6b1e2decda11fe21eb3e69f"} Dec 08 17:56:43 crc kubenswrapper[5116]: I1208 17:56:43.321459 5116 generic.go:358] "Generic (PLEG): container finished" podID="aedcd631-92d4-43b5-bea3-b25e0687dcc5" containerID="dda6e8ff6568e0c774b92b5cf680ce712e080f35aa1be16287b6388a23c52665" exitCode=0 Dec 08 17:56:43 crc kubenswrapper[5116]: I1208 17:56:43.321513 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/elasticsearch-es-default-0" event={"ID":"aedcd631-92d4-43b5-bea3-b25e0687dcc5","Type":"ContainerDied","Data":"dda6e8ff6568e0c774b92b5cf680ce712e080f35aa1be16287b6388a23c52665"} Dec 08 17:56:44 crc kubenswrapper[5116]: I1208 17:56:44.585062 5116 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-858d87f86b-qcxzb"] Dec 08 17:56:45 crc kubenswrapper[5116]: I1208 17:56:45.158956 5116 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-858d87f86b-qcxzb" Dec 08 17:56:45 crc kubenswrapper[5116]: I1208 17:56:45.161784 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"cert-manager\"/\"cert-manager-dockercfg-5lzqp\"" Dec 08 17:56:45 crc kubenswrapper[5116]: I1208 17:56:45.169895 5116 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-858d87f86b-qcxzb"] Dec 08 17:56:45 crc kubenswrapper[5116]: I1208 17:56:45.182532 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jpshv\" (UniqueName: \"kubernetes.io/projected/68448d0b-9482-41bd-8a0a-d2acb7df8648-kube-api-access-jpshv\") pod \"cert-manager-858d87f86b-qcxzb\" (UID: \"68448d0b-9482-41bd-8a0a-d2acb7df8648\") " pod="cert-manager/cert-manager-858d87f86b-qcxzb" Dec 08 17:56:45 crc kubenswrapper[5116]: I1208 17:56:45.182655 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/68448d0b-9482-41bd-8a0a-d2acb7df8648-bound-sa-token\") pod \"cert-manager-858d87f86b-qcxzb\" (UID: \"68448d0b-9482-41bd-8a0a-d2acb7df8648\") " pod="cert-manager/cert-manager-858d87f86b-qcxzb" Dec 08 17:56:45 crc kubenswrapper[5116]: I1208 17:56:45.284700 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-jpshv\" (UniqueName: \"kubernetes.io/projected/68448d0b-9482-41bd-8a0a-d2acb7df8648-kube-api-access-jpshv\") pod \"cert-manager-858d87f86b-qcxzb\" (UID: \"68448d0b-9482-41bd-8a0a-d2acb7df8648\") " pod="cert-manager/cert-manager-858d87f86b-qcxzb" Dec 08 17:56:45 crc kubenswrapper[5116]: I1208 17:56:45.284893 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/68448d0b-9482-41bd-8a0a-d2acb7df8648-bound-sa-token\") pod \"cert-manager-858d87f86b-qcxzb\" (UID: 
\"68448d0b-9482-41bd-8a0a-d2acb7df8648\") " pod="cert-manager/cert-manager-858d87f86b-qcxzb" Dec 08 17:56:45 crc kubenswrapper[5116]: I1208 17:56:45.311061 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-jpshv\" (UniqueName: \"kubernetes.io/projected/68448d0b-9482-41bd-8a0a-d2acb7df8648-kube-api-access-jpshv\") pod \"cert-manager-858d87f86b-qcxzb\" (UID: \"68448d0b-9482-41bd-8a0a-d2acb7df8648\") " pod="cert-manager/cert-manager-858d87f86b-qcxzb" Dec 08 17:56:45 crc kubenswrapper[5116]: I1208 17:56:45.317742 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/68448d0b-9482-41bd-8a0a-d2acb7df8648-bound-sa-token\") pod \"cert-manager-858d87f86b-qcxzb\" (UID: \"68448d0b-9482-41bd-8a0a-d2acb7df8648\") " pod="cert-manager/cert-manager-858d87f86b-qcxzb" Dec 08 17:56:45 crc kubenswrapper[5116]: I1208 17:56:45.487851 5116 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-858d87f86b-qcxzb" Dec 08 17:56:47 crc kubenswrapper[5116]: I1208 17:56:47.299601 5116 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-858d87f86b-qcxzb"] Dec 08 17:56:47 crc kubenswrapper[5116]: I1208 17:56:47.401651 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-7894b5b9b4-jvn8w" event={"ID":"af8f318a-c8e4-46d7-8cbc-2606038ba7d6","Type":"ContainerStarted","Data":"cf2ce42c3b39644da0cd92bfbacb4e98f92b62d00a1ad234f3274ef724afbea7"} Dec 08 17:56:47 crc kubenswrapper[5116]: I1208 17:56:47.402291 5116 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="cert-manager/cert-manager-webhook-7894b5b9b4-jvn8w" Dec 08 17:56:47 crc kubenswrapper[5116]: I1208 17:56:47.404400 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-858d87f86b-qcxzb" 
event={"ID":"68448d0b-9482-41bd-8a0a-d2acb7df8648","Type":"ContainerStarted","Data":"2c1e2cfb813d91d0a623b168c3e9637e0877b183c60b204a162269d7e5e6faaf"} Dec 08 17:56:47 crc kubenswrapper[5116]: I1208 17:56:47.409599 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/elasticsearch-es-default-0" event={"ID":"aedcd631-92d4-43b5-bea3-b25e0687dcc5","Type":"ContainerStarted","Data":"31a093e6d0db02035d2c5f21689b8b00fe209fd4ee32ab238bfddb6453c5d49a"} Dec 08 17:56:47 crc kubenswrapper[5116]: I1208 17:56:47.410594 5116 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="service-telemetry/elasticsearch-es-default-0" Dec 08 17:56:47 crc kubenswrapper[5116]: I1208 17:56:47.412586 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-7dbf76d5c8-2rp9r" event={"ID":"e4bffed4-db06-443d-b001-fbc23ed74358","Type":"ContainerStarted","Data":"c9a5df7c8ab18673987a4838cb555e5846b73f13ee5cf57692f6b5460e90723f"} Dec 08 17:56:47 crc kubenswrapper[5116]: I1208 17:56:47.425215 5116 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-webhook-7894b5b9b4-jvn8w" podStartSLOduration=13.262849012 podStartE2EDuration="23.425193245s" podCreationTimestamp="2025-12-08 17:56:24 +0000 UTC" firstStartedPulling="2025-12-08 17:56:36.591540972 +0000 UTC m=+866.388664206" lastFinishedPulling="2025-12-08 17:56:46.753885215 +0000 UTC m=+876.551008439" observedRunningTime="2025-12-08 17:56:47.419367441 +0000 UTC m=+877.216490695" watchObservedRunningTime="2025-12-08 17:56:47.425193245 +0000 UTC m=+877.222316479" Dec 08 17:56:47 crc kubenswrapper[5116]: I1208 17:56:47.499947 5116 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-cainjector-7dbf76d5c8-2rp9r" podStartSLOduration=10.026175708 podStartE2EDuration="20.499928956s" podCreationTimestamp="2025-12-08 17:56:27 +0000 UTC" firstStartedPulling="2025-12-08 17:56:36.268884581 
+0000 UTC m=+866.066007815" lastFinishedPulling="2025-12-08 17:56:46.742637829 +0000 UTC m=+876.539761063" observedRunningTime="2025-12-08 17:56:47.442793689 +0000 UTC m=+877.239916943" watchObservedRunningTime="2025-12-08 17:56:47.499928956 +0000 UTC m=+877.297052190" Dec 08 17:56:47 crc kubenswrapper[5116]: I1208 17:56:47.501689 5116 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/elasticsearch-es-default-0" podStartSLOduration=26.059073491 podStartE2EDuration="47.501681362s" podCreationTimestamp="2025-12-08 17:56:00 +0000 UTC" firstStartedPulling="2025-12-08 17:56:14.824561515 +0000 UTC m=+844.621684749" lastFinishedPulling="2025-12-08 17:56:36.267169386 +0000 UTC m=+866.064292620" observedRunningTime="2025-12-08 17:56:47.497298917 +0000 UTC m=+877.294422161" watchObservedRunningTime="2025-12-08 17:56:47.501681362 +0000 UTC m=+877.298804596" Dec 08 17:56:48 crc kubenswrapper[5116]: I1208 17:56:48.420882 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-858d87f86b-qcxzb" event={"ID":"68448d0b-9482-41bd-8a0a-d2acb7df8648","Type":"ContainerStarted","Data":"8e9588a028fe5441f8b25862f4c396b93a47b7b3477b79bcc4d660805c63fb3d"} Dec 08 17:56:48 crc kubenswrapper[5116]: I1208 17:56:48.442337 5116 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-858d87f86b-qcxzb" podStartSLOduration=4.442308976 podStartE2EDuration="4.442308976s" podCreationTimestamp="2025-12-08 17:56:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-08 17:56:48.436147123 +0000 UTC m=+878.233270357" watchObservedRunningTime="2025-12-08 17:56:48.442308976 +0000 UTC m=+878.239432220" Dec 08 17:56:53 crc kubenswrapper[5116]: I1208 17:56:53.425910 5116 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="cert-manager/cert-manager-webhook-7894b5b9b4-jvn8w" Dec 08 
17:56:55 crc kubenswrapper[5116]: I1208 17:56:55.826299 5116 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/service-telemetry-framework-index-1-build"] Dec 08 17:56:56 crc kubenswrapper[5116]: I1208 17:56:56.190988 5116 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/service-telemetry-framework-index-1-build"] Dec 08 17:56:56 crc kubenswrapper[5116]: I1208 17:56:56.191081 5116 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/service-telemetry-framework-index-1-build" Dec 08 17:56:56 crc kubenswrapper[5116]: I1208 17:56:56.193115 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"service-telemetry-framework-index-1-sys-config\"" Dec 08 17:56:56 crc kubenswrapper[5116]: I1208 17:56:56.193459 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"service-telemetry-framework-index-1-ca\"" Dec 08 17:56:56 crc kubenswrapper[5116]: I1208 17:56:56.193631 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"service-telemetry-framework-index-dockercfg\"" Dec 08 17:56:56 crc kubenswrapper[5116]: I1208 17:56:56.194710 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"builder-dockercfg-49c42\"" Dec 08 17:56:56 crc kubenswrapper[5116]: I1208 17:56:56.197839 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"service-telemetry-framework-index-1-global-ca\"" Dec 08 17:56:56 crc kubenswrapper[5116]: I1208 17:56:56.220123 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/1497f490-385e-4bd0-94eb-415d4abdb920-buildcachedir\") pod \"service-telemetry-framework-index-1-build\" (UID: \"1497f490-385e-4bd0-94eb-415d4abdb920\") " 
pod="service-telemetry/service-telemetry-framework-index-1-build" Dec 08 17:56:56 crc kubenswrapper[5116]: I1208 17:56:56.220176 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-telemetry-framework-index-dockercfg-user-build-volume\" (UniqueName: \"kubernetes.io/secret/1497f490-385e-4bd0-94eb-415d4abdb920-service-telemetry-framework-index-dockercfg-user-build-volume\") pod \"service-telemetry-framework-index-1-build\" (UID: \"1497f490-385e-4bd0-94eb-415d4abdb920\") " pod="service-telemetry/service-telemetry-framework-index-1-build" Dec 08 17:56:56 crc kubenswrapper[5116]: I1208 17:56:56.220213 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/1497f490-385e-4bd0-94eb-415d4abdb920-build-system-configs\") pod \"service-telemetry-framework-index-1-build\" (UID: \"1497f490-385e-4bd0-94eb-415d4abdb920\") " pod="service-telemetry/service-telemetry-framework-index-1-build" Dec 08 17:56:56 crc kubenswrapper[5116]: I1208 17:56:56.220383 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-49c42-push\" (UniqueName: \"kubernetes.io/secret/1497f490-385e-4bd0-94eb-415d4abdb920-builder-dockercfg-49c42-push\") pod \"service-telemetry-framework-index-1-build\" (UID: \"1497f490-385e-4bd0-94eb-415d4abdb920\") " pod="service-telemetry/service-telemetry-framework-index-1-build" Dec 08 17:56:56 crc kubenswrapper[5116]: I1208 17:56:56.220507 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/1497f490-385e-4bd0-94eb-415d4abdb920-build-ca-bundles\") pod \"service-telemetry-framework-index-1-build\" (UID: \"1497f490-385e-4bd0-94eb-415d4abdb920\") " pod="service-telemetry/service-telemetry-framework-index-1-build" Dec 08 17:56:56 crc 
kubenswrapper[5116]: I1208 17:56:56.220554 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/1497f490-385e-4bd0-94eb-415d4abdb920-build-blob-cache\") pod \"service-telemetry-framework-index-1-build\" (UID: \"1497f490-385e-4bd0-94eb-415d4abdb920\") " pod="service-telemetry/service-telemetry-framework-index-1-build" Dec 08 17:56:56 crc kubenswrapper[5116]: I1208 17:56:56.220733 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/1497f490-385e-4bd0-94eb-415d4abdb920-build-proxy-ca-bundles\") pod \"service-telemetry-framework-index-1-build\" (UID: \"1497f490-385e-4bd0-94eb-415d4abdb920\") " pod="service-telemetry/service-telemetry-framework-index-1-build" Dec 08 17:56:56 crc kubenswrapper[5116]: I1208 17:56:56.220785 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/1497f490-385e-4bd0-94eb-415d4abdb920-node-pullsecrets\") pod \"service-telemetry-framework-index-1-build\" (UID: \"1497f490-385e-4bd0-94eb-415d4abdb920\") " pod="service-telemetry/service-telemetry-framework-index-1-build" Dec 08 17:56:56 crc kubenswrapper[5116]: I1208 17:56:56.220825 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/1497f490-385e-4bd0-94eb-415d4abdb920-container-storage-run\") pod \"service-telemetry-framework-index-1-build\" (UID: \"1497f490-385e-4bd0-94eb-415d4abdb920\") " pod="service-telemetry/service-telemetry-framework-index-1-build" Dec 08 17:56:56 crc kubenswrapper[5116]: I1208 17:56:56.220847 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-49c42-pull\" (UniqueName: 
\"kubernetes.io/secret/1497f490-385e-4bd0-94eb-415d4abdb920-builder-dockercfg-49c42-pull\") pod \"service-telemetry-framework-index-1-build\" (UID: \"1497f490-385e-4bd0-94eb-415d4abdb920\") " pod="service-telemetry/service-telemetry-framework-index-1-build" Dec 08 17:56:56 crc kubenswrapper[5116]: I1208 17:56:56.220880 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/1497f490-385e-4bd0-94eb-415d4abdb920-buildworkdir\") pod \"service-telemetry-framework-index-1-build\" (UID: \"1497f490-385e-4bd0-94eb-415d4abdb920\") " pod="service-telemetry/service-telemetry-framework-index-1-build" Dec 08 17:56:56 crc kubenswrapper[5116]: I1208 17:56:56.220976 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/1497f490-385e-4bd0-94eb-415d4abdb920-container-storage-root\") pod \"service-telemetry-framework-index-1-build\" (UID: \"1497f490-385e-4bd0-94eb-415d4abdb920\") " pod="service-telemetry/service-telemetry-framework-index-1-build" Dec 08 17:56:56 crc kubenswrapper[5116]: I1208 17:56:56.221201 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jccnd\" (UniqueName: \"kubernetes.io/projected/1497f490-385e-4bd0-94eb-415d4abdb920-kube-api-access-jccnd\") pod \"service-telemetry-framework-index-1-build\" (UID: \"1497f490-385e-4bd0-94eb-415d4abdb920\") " pod="service-telemetry/service-telemetry-framework-index-1-build" Dec 08 17:56:56 crc kubenswrapper[5116]: I1208 17:56:56.322375 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-jccnd\" (UniqueName: \"kubernetes.io/projected/1497f490-385e-4bd0-94eb-415d4abdb920-kube-api-access-jccnd\") pod \"service-telemetry-framework-index-1-build\" (UID: \"1497f490-385e-4bd0-94eb-415d4abdb920\") " 
pod="service-telemetry/service-telemetry-framework-index-1-build"
Dec 08 17:56:56 crc kubenswrapper[5116]: I1208 17:56:56.322491 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/1497f490-385e-4bd0-94eb-415d4abdb920-buildcachedir\") pod \"service-telemetry-framework-index-1-build\" (UID: \"1497f490-385e-4bd0-94eb-415d4abdb920\") " pod="service-telemetry/service-telemetry-framework-index-1-build"
Dec 08 17:56:56 crc kubenswrapper[5116]: I1208 17:56:56.322542 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"service-telemetry-framework-index-dockercfg-user-build-volume\" (UniqueName: \"kubernetes.io/secret/1497f490-385e-4bd0-94eb-415d4abdb920-service-telemetry-framework-index-dockercfg-user-build-volume\") pod \"service-telemetry-framework-index-1-build\" (UID: \"1497f490-385e-4bd0-94eb-415d4abdb920\") " pod="service-telemetry/service-telemetry-framework-index-1-build"
Dec 08 17:56:56 crc kubenswrapper[5116]: I1208 17:56:56.322579 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/1497f490-385e-4bd0-94eb-415d4abdb920-build-system-configs\") pod \"service-telemetry-framework-index-1-build\" (UID: \"1497f490-385e-4bd0-94eb-415d4abdb920\") " pod="service-telemetry/service-telemetry-framework-index-1-build"
Dec 08 17:56:56 crc kubenswrapper[5116]: I1208 17:56:56.322617 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"builder-dockercfg-49c42-push\" (UniqueName: \"kubernetes.io/secret/1497f490-385e-4bd0-94eb-415d4abdb920-builder-dockercfg-49c42-push\") pod \"service-telemetry-framework-index-1-build\" (UID: \"1497f490-385e-4bd0-94eb-415d4abdb920\") " pod="service-telemetry/service-telemetry-framework-index-1-build"
Dec 08 17:56:56 crc kubenswrapper[5116]: I1208 17:56:56.322658 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/1497f490-385e-4bd0-94eb-415d4abdb920-build-ca-bundles\") pod \"service-telemetry-framework-index-1-build\" (UID: \"1497f490-385e-4bd0-94eb-415d4abdb920\") " pod="service-telemetry/service-telemetry-framework-index-1-build"
Dec 08 17:56:56 crc kubenswrapper[5116]: I1208 17:56:56.322690 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/1497f490-385e-4bd0-94eb-415d4abdb920-build-blob-cache\") pod \"service-telemetry-framework-index-1-build\" (UID: \"1497f490-385e-4bd0-94eb-415d4abdb920\") " pod="service-telemetry/service-telemetry-framework-index-1-build"
Dec 08 17:56:56 crc kubenswrapper[5116]: I1208 17:56:56.322784 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/1497f490-385e-4bd0-94eb-415d4abdb920-build-proxy-ca-bundles\") pod \"service-telemetry-framework-index-1-build\" (UID: \"1497f490-385e-4bd0-94eb-415d4abdb920\") " pod="service-telemetry/service-telemetry-framework-index-1-build"
Dec 08 17:56:56 crc kubenswrapper[5116]: I1208 17:56:56.322822 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/1497f490-385e-4bd0-94eb-415d4abdb920-node-pullsecrets\") pod \"service-telemetry-framework-index-1-build\" (UID: \"1497f490-385e-4bd0-94eb-415d4abdb920\") " pod="service-telemetry/service-telemetry-framework-index-1-build"
Dec 08 17:56:56 crc kubenswrapper[5116]: I1208 17:56:56.322873 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/1497f490-385e-4bd0-94eb-415d4abdb920-container-storage-run\") pod \"service-telemetry-framework-index-1-build\" (UID: \"1497f490-385e-4bd0-94eb-415d4abdb920\") " pod="service-telemetry/service-telemetry-framework-index-1-build"
Dec 08 17:56:56 crc kubenswrapper[5116]: I1208 17:56:56.322911 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"builder-dockercfg-49c42-pull\" (UniqueName: \"kubernetes.io/secret/1497f490-385e-4bd0-94eb-415d4abdb920-builder-dockercfg-49c42-pull\") pod \"service-telemetry-framework-index-1-build\" (UID: \"1497f490-385e-4bd0-94eb-415d4abdb920\") " pod="service-telemetry/service-telemetry-framework-index-1-build"
Dec 08 17:56:56 crc kubenswrapper[5116]: I1208 17:56:56.322972 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/1497f490-385e-4bd0-94eb-415d4abdb920-buildworkdir\") pod \"service-telemetry-framework-index-1-build\" (UID: \"1497f490-385e-4bd0-94eb-415d4abdb920\") " pod="service-telemetry/service-telemetry-framework-index-1-build"
Dec 08 17:56:56 crc kubenswrapper[5116]: I1208 17:56:56.323039 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/1497f490-385e-4bd0-94eb-415d4abdb920-container-storage-root\") pod \"service-telemetry-framework-index-1-build\" (UID: \"1497f490-385e-4bd0-94eb-415d4abdb920\") " pod="service-telemetry/service-telemetry-framework-index-1-build"
Dec 08 17:56:56 crc kubenswrapper[5116]: I1208 17:56:56.324029 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/1497f490-385e-4bd0-94eb-415d4abdb920-container-storage-root\") pod \"service-telemetry-framework-index-1-build\" (UID: \"1497f490-385e-4bd0-94eb-415d4abdb920\") " pod="service-telemetry/service-telemetry-framework-index-1-build"
Dec 08 17:56:56 crc kubenswrapper[5116]: I1208 17:56:56.324603 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/1497f490-385e-4bd0-94eb-415d4abdb920-buildcachedir\") pod \"service-telemetry-framework-index-1-build\" (UID: \"1497f490-385e-4bd0-94eb-415d4abdb920\") " pod="service-telemetry/service-telemetry-framework-index-1-build"
Dec 08 17:56:56 crc kubenswrapper[5116]: I1208 17:56:56.327743 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/1497f490-385e-4bd0-94eb-415d4abdb920-build-proxy-ca-bundles\") pod \"service-telemetry-framework-index-1-build\" (UID: \"1497f490-385e-4bd0-94eb-415d4abdb920\") " pod="service-telemetry/service-telemetry-framework-index-1-build"
Dec 08 17:56:56 crc kubenswrapper[5116]: I1208 17:56:56.328273 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/1497f490-385e-4bd0-94eb-415d4abdb920-build-system-configs\") pod \"service-telemetry-framework-index-1-build\" (UID: \"1497f490-385e-4bd0-94eb-415d4abdb920\") " pod="service-telemetry/service-telemetry-framework-index-1-build"
Dec 08 17:56:56 crc kubenswrapper[5116]: I1208 17:56:56.332680 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"service-telemetry-framework-index-dockercfg-user-build-volume\" (UniqueName: \"kubernetes.io/secret/1497f490-385e-4bd0-94eb-415d4abdb920-service-telemetry-framework-index-dockercfg-user-build-volume\") pod \"service-telemetry-framework-index-1-build\" (UID: \"1497f490-385e-4bd0-94eb-415d4abdb920\") " pod="service-telemetry/service-telemetry-framework-index-1-build"
Dec 08 17:56:56 crc kubenswrapper[5116]: I1208 17:56:56.333007 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/1497f490-385e-4bd0-94eb-415d4abdb920-node-pullsecrets\") pod \"service-telemetry-framework-index-1-build\" (UID: \"1497f490-385e-4bd0-94eb-415d4abdb920\") " pod="service-telemetry/service-telemetry-framework-index-1-build"
Dec 08 17:56:56 crc kubenswrapper[5116]: I1208 17:56:56.333340 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/1497f490-385e-4bd0-94eb-415d4abdb920-container-storage-run\") pod \"service-telemetry-framework-index-1-build\" (UID: \"1497f490-385e-4bd0-94eb-415d4abdb920\") " pod="service-telemetry/service-telemetry-framework-index-1-build"
Dec 08 17:56:56 crc kubenswrapper[5116]: I1208 17:56:56.333526 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-49c42-push\" (UniqueName: \"kubernetes.io/secret/1497f490-385e-4bd0-94eb-415d4abdb920-builder-dockercfg-49c42-push\") pod \"service-telemetry-framework-index-1-build\" (UID: \"1497f490-385e-4bd0-94eb-415d4abdb920\") " pod="service-telemetry/service-telemetry-framework-index-1-build"
Dec 08 17:56:56 crc kubenswrapper[5116]: I1208 17:56:56.334695 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/1497f490-385e-4bd0-94eb-415d4abdb920-build-ca-bundles\") pod \"service-telemetry-framework-index-1-build\" (UID: \"1497f490-385e-4bd0-94eb-415d4abdb920\") " pod="service-telemetry/service-telemetry-framework-index-1-build"
Dec 08 17:56:56 crc kubenswrapper[5116]: I1208 17:56:56.334956 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/1497f490-385e-4bd0-94eb-415d4abdb920-build-blob-cache\") pod \"service-telemetry-framework-index-1-build\" (UID: \"1497f490-385e-4bd0-94eb-415d4abdb920\") " pod="service-telemetry/service-telemetry-framework-index-1-build"
Dec 08 17:56:56 crc kubenswrapper[5116]: I1208 17:56:56.335192 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/1497f490-385e-4bd0-94eb-415d4abdb920-buildworkdir\") pod \"service-telemetry-framework-index-1-build\" (UID: \"1497f490-385e-4bd0-94eb-415d4abdb920\") " pod="service-telemetry/service-telemetry-framework-index-1-build"
Dec 08 17:56:56 crc kubenswrapper[5116]: I1208 17:56:56.336884 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-49c42-pull\" (UniqueName: \"kubernetes.io/secret/1497f490-385e-4bd0-94eb-415d4abdb920-builder-dockercfg-49c42-pull\") pod \"service-telemetry-framework-index-1-build\" (UID: \"1497f490-385e-4bd0-94eb-415d4abdb920\") " pod="service-telemetry/service-telemetry-framework-index-1-build"
Dec 08 17:56:56 crc kubenswrapper[5116]: I1208 17:56:56.400424 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-jccnd\" (UniqueName: \"kubernetes.io/projected/1497f490-385e-4bd0-94eb-415d4abdb920-kube-api-access-jccnd\") pod \"service-telemetry-framework-index-1-build\" (UID: \"1497f490-385e-4bd0-94eb-415d4abdb920\") " pod="service-telemetry/service-telemetry-framework-index-1-build"
Dec 08 17:56:56 crc kubenswrapper[5116]: I1208 17:56:56.847670 5116 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/service-telemetry-framework-index-1-build"
Dec 08 17:56:57 crc kubenswrapper[5116]: I1208 17:56:57.713586 5116 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/service-telemetry-framework-index-1-build"]
Dec 08 17:56:57 crc kubenswrapper[5116]: I1208 17:56:57.984960 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-framework-index-1-build" event={"ID":"1497f490-385e-4bd0-94eb-415d4abdb920","Type":"ContainerStarted","Data":"8a36963b8f8ce168926e05c9026d20c476d2e8f21069f50675e34ee47db339af"}
Dec 08 17:56:58 crc kubenswrapper[5116]: I1208 17:56:58.536833 5116 prober.go:120] "Probe failed" probeType="Readiness" pod="service-telemetry/elasticsearch-es-default-0" podUID="aedcd631-92d4-43b5-bea3-b25e0687dcc5" containerName="elasticsearch" probeResult="failure" output=<
Dec 08 17:56:58 crc kubenswrapper[5116]: {"timestamp": "2025-12-08T17:56:58+00:00", "message": "readiness probe failed", "curl_rc": "7"}
Dec 08 17:56:58 crc kubenswrapper[5116]: >
Dec 08 17:57:03 crc kubenswrapper[5116]: I1208 17:57:03.506236 5116 prober.go:120] "Probe failed" probeType="Readiness" pod="service-telemetry/elasticsearch-es-default-0" podUID="aedcd631-92d4-43b5-bea3-b25e0687dcc5" containerName="elasticsearch" probeResult="failure" output=<
Dec 08 17:57:03 crc kubenswrapper[5116]: {"timestamp": "2025-12-08T17:57:03+00:00", "message": "readiness probe failed", "curl_rc": "7"}
Dec 08 17:57:03 crc kubenswrapper[5116]: >
Dec 08 17:57:08 crc kubenswrapper[5116]: I1208 17:57:08.069992 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-framework-index-1-build" event={"ID":"1497f490-385e-4bd0-94eb-415d4abdb920","Type":"ContainerStarted","Data":"2f89fbb24be1732e51878028400637f8ec564ae1624b51fbd9aa1f8a843d73eb"}
Dec 08 17:57:08 crc kubenswrapper[5116]: I1208 17:57:08.124783 5116 ???:1] "http: TLS handshake error from 192.168.126.11:39400: no serving certificate available for the kubelet"
Dec 08 17:57:08 crc kubenswrapper[5116]: I1208 17:57:08.525719 5116 prober.go:120] "Probe failed" probeType="Readiness" pod="service-telemetry/elasticsearch-es-default-0" podUID="aedcd631-92d4-43b5-bea3-b25e0687dcc5" containerName="elasticsearch" probeResult="failure" output=<
Dec 08 17:57:08 crc kubenswrapper[5116]: {"timestamp": "2025-12-08T17:57:08+00:00", "message": "readiness probe failed", "curl_rc": "7"}
Dec 08 17:57:08 crc kubenswrapper[5116]: >
Dec 08 17:57:09 crc kubenswrapper[5116]: I1208 17:57:09.156133 5116 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["service-telemetry/service-telemetry-framework-index-1-build"]
Dec 08 17:57:10 crc kubenswrapper[5116]: I1208 17:57:10.083830 5116 kuberuntime_container.go:858] "Killing container with a grace period" pod="service-telemetry/service-telemetry-framework-index-1-build" podUID="1497f490-385e-4bd0-94eb-415d4abdb920" containerName="git-clone" containerID="cri-o://2f89fbb24be1732e51878028400637f8ec564ae1624b51fbd9aa1f8a843d73eb" gracePeriod=30
Dec 08 17:57:10 crc kubenswrapper[5116]: I1208 17:57:10.982673 5116 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_service-telemetry-framework-index-1-build_1497f490-385e-4bd0-94eb-415d4abdb920/git-clone/0.log"
Dec 08 17:57:11 crc kubenswrapper[5116]: I1208 17:57:11.006089 5116 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_service-telemetry-framework-index-1-build_1497f490-385e-4bd0-94eb-415d4abdb920/git-clone/0.log"
Dec 08 17:57:11 crc kubenswrapper[5116]: I1208 17:57:11.008482 5116 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-controller-manager_controller-manager-766495d899-4wfjn_1e2dc1a4-c295-4f2b-b167-fdb40f1e6b0d/controller-manager/1.log"
Dec 08 17:57:11 crc kubenswrapper[5116]: I1208 17:57:11.030549 5116 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-controller-manager_controller-manager-766495d899-4wfjn_1e2dc1a4-c295-4f2b-b167-fdb40f1e6b0d/controller-manager/1.log"
Dec 08 17:57:11 crc kubenswrapper[5116]: I1208 17:57:11.038006 5116 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-8wqqf_84b46b92-c78c-44c8-a27b-4a20c47acd75/kube-multus/0.log"
Dec 08 17:57:11 crc kubenswrapper[5116]: I1208 17:57:11.054721 5116 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_9f0bc7fcb0822a2c13eb2d22cd8c0641/kube-controller-manager/0.log"
Dec 08 17:57:11 crc kubenswrapper[5116]: I1208 17:57:11.060955 5116 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-8wqqf_84b46b92-c78c-44c8-a27b-4a20c47acd75/kube-multus/0.log"
Dec 08 17:57:11 crc kubenswrapper[5116]: I1208 17:57:11.070431 5116 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_9f0bc7fcb0822a2c13eb2d22cd8c0641/kube-controller-manager/0.log"
Dec 08 17:57:11 crc kubenswrapper[5116]: I1208 17:57:11.091531 5116 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_service-telemetry-framework-index-1-build_1497f490-385e-4bd0-94eb-415d4abdb920/git-clone/0.log"
Dec 08 17:57:11 crc kubenswrapper[5116]: I1208 17:57:11.091586 5116 generic.go:358] "Generic (PLEG): container finished" podID="1497f490-385e-4bd0-94eb-415d4abdb920" containerID="2f89fbb24be1732e51878028400637f8ec564ae1624b51fbd9aa1f8a843d73eb" exitCode=1
Dec 08 17:57:11 crc kubenswrapper[5116]: I1208 17:57:11.091687 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-framework-index-1-build" event={"ID":"1497f490-385e-4bd0-94eb-415d4abdb920","Type":"ContainerDied","Data":"2f89fbb24be1732e51878028400637f8ec564ae1624b51fbd9aa1f8a843d73eb"}
Dec 08 17:57:11 crc kubenswrapper[5116]: I1208 17:57:11.661709 5116 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_service-telemetry-framework-index-1-build_1497f490-385e-4bd0-94eb-415d4abdb920/git-clone/0.log"
Dec 08 17:57:11 crc kubenswrapper[5116]: I1208 17:57:11.661786 5116 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/service-telemetry-framework-index-1-build"
Dec 08 17:57:11 crc kubenswrapper[5116]: I1208 17:57:11.668002 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/1497f490-385e-4bd0-94eb-415d4abdb920-build-ca-bundles\") pod \"1497f490-385e-4bd0-94eb-415d4abdb920\" (UID: \"1497f490-385e-4bd0-94eb-415d4abdb920\") "
Dec 08 17:57:11 crc kubenswrapper[5116]: I1208 17:57:11.668059 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/1497f490-385e-4bd0-94eb-415d4abdb920-build-system-configs\") pod \"1497f490-385e-4bd0-94eb-415d4abdb920\" (UID: \"1497f490-385e-4bd0-94eb-415d4abdb920\") "
Dec 08 17:57:11 crc kubenswrapper[5116]: I1208 17:57:11.668148 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/1497f490-385e-4bd0-94eb-415d4abdb920-build-proxy-ca-bundles\") pod \"1497f490-385e-4bd0-94eb-415d4abdb920\" (UID: \"1497f490-385e-4bd0-94eb-415d4abdb920\") "
Dec 08 17:57:11 crc kubenswrapper[5116]: I1208 17:57:11.668178 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jccnd\" (UniqueName: \"kubernetes.io/projected/1497f490-385e-4bd0-94eb-415d4abdb920-kube-api-access-jccnd\") pod \"1497f490-385e-4bd0-94eb-415d4abdb920\" (UID: \"1497f490-385e-4bd0-94eb-415d4abdb920\") "
Dec 08 17:57:11 crc kubenswrapper[5116]: I1208 17:57:11.668262 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/1497f490-385e-4bd0-94eb-415d4abdb920-buildworkdir\") pod \"1497f490-385e-4bd0-94eb-415d4abdb920\" (UID: \"1497f490-385e-4bd0-94eb-415d4abdb920\") "
Dec 08 17:57:11 crc kubenswrapper[5116]: I1208 17:57:11.668309 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-49c42-pull\" (UniqueName: \"kubernetes.io/secret/1497f490-385e-4bd0-94eb-415d4abdb920-builder-dockercfg-49c42-pull\") pod \"1497f490-385e-4bd0-94eb-415d4abdb920\" (UID: \"1497f490-385e-4bd0-94eb-415d4abdb920\") "
Dec 08 17:57:11 crc kubenswrapper[5116]: I1208 17:57:11.668339 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-49c42-push\" (UniqueName: \"kubernetes.io/secret/1497f490-385e-4bd0-94eb-415d4abdb920-builder-dockercfg-49c42-push\") pod \"1497f490-385e-4bd0-94eb-415d4abdb920\" (UID: \"1497f490-385e-4bd0-94eb-415d4abdb920\") "
Dec 08 17:57:11 crc kubenswrapper[5116]: I1208 17:57:11.668406 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/1497f490-385e-4bd0-94eb-415d4abdb920-build-blob-cache\") pod \"1497f490-385e-4bd0-94eb-415d4abdb920\" (UID: \"1497f490-385e-4bd0-94eb-415d4abdb920\") "
Dec 08 17:57:11 crc kubenswrapper[5116]: I1208 17:57:11.668440 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/1497f490-385e-4bd0-94eb-415d4abdb920-container-storage-run\") pod \"1497f490-385e-4bd0-94eb-415d4abdb920\" (UID: \"1497f490-385e-4bd0-94eb-415d4abdb920\") "
Dec 08 17:57:11 crc kubenswrapper[5116]: I1208 17:57:11.668464 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/1497f490-385e-4bd0-94eb-415d4abdb920-buildcachedir\") pod \"1497f490-385e-4bd0-94eb-415d4abdb920\" (UID: \"1497f490-385e-4bd0-94eb-415d4abdb920\") "
Dec 08 17:57:11 crc kubenswrapper[5116]: I1208 17:57:11.668483 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/1497f490-385e-4bd0-94eb-415d4abdb920-node-pullsecrets\") pod \"1497f490-385e-4bd0-94eb-415d4abdb920\" (UID: \"1497f490-385e-4bd0-94eb-415d4abdb920\") "
Dec 08 17:57:11 crc kubenswrapper[5116]: I1208 17:57:11.668517 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/1497f490-385e-4bd0-94eb-415d4abdb920-container-storage-root\") pod \"1497f490-385e-4bd0-94eb-415d4abdb920\" (UID: \"1497f490-385e-4bd0-94eb-415d4abdb920\") "
Dec 08 17:57:11 crc kubenswrapper[5116]: I1208 17:57:11.668582 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"service-telemetry-framework-index-dockercfg-user-build-volume\" (UniqueName: \"kubernetes.io/secret/1497f490-385e-4bd0-94eb-415d4abdb920-service-telemetry-framework-index-dockercfg-user-build-volume\") pod \"1497f490-385e-4bd0-94eb-415d4abdb920\" (UID: \"1497f490-385e-4bd0-94eb-415d4abdb920\") "
Dec 08 17:57:11 crc kubenswrapper[5116]: I1208 17:57:11.668658 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1497f490-385e-4bd0-94eb-415d4abdb920-build-proxy-ca-bundles" (OuterVolumeSpecName: "build-proxy-ca-bundles") pod "1497f490-385e-4bd0-94eb-415d4abdb920" (UID: "1497f490-385e-4bd0-94eb-415d4abdb920"). InnerVolumeSpecName "build-proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Dec 08 17:57:11 crc kubenswrapper[5116]: I1208 17:57:11.668705 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1497f490-385e-4bd0-94eb-415d4abdb920-build-ca-bundles" (OuterVolumeSpecName: "build-ca-bundles") pod "1497f490-385e-4bd0-94eb-415d4abdb920" (UID: "1497f490-385e-4bd0-94eb-415d4abdb920"). InnerVolumeSpecName "build-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Dec 08 17:57:11 crc kubenswrapper[5116]: I1208 17:57:11.668822 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1497f490-385e-4bd0-94eb-415d4abdb920-buildcachedir" (OuterVolumeSpecName: "buildcachedir") pod "1497f490-385e-4bd0-94eb-415d4abdb920" (UID: "1497f490-385e-4bd0-94eb-415d4abdb920"). InnerVolumeSpecName "buildcachedir". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Dec 08 17:57:11 crc kubenswrapper[5116]: I1208 17:57:11.668852 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1497f490-385e-4bd0-94eb-415d4abdb920-build-system-configs" (OuterVolumeSpecName: "build-system-configs") pod "1497f490-385e-4bd0-94eb-415d4abdb920" (UID: "1497f490-385e-4bd0-94eb-415d4abdb920"). InnerVolumeSpecName "build-system-configs". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Dec 08 17:57:11 crc kubenswrapper[5116]: I1208 17:57:11.668901 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1497f490-385e-4bd0-94eb-415d4abdb920-node-pullsecrets" (OuterVolumeSpecName: "node-pullsecrets") pod "1497f490-385e-4bd0-94eb-415d4abdb920" (UID: "1497f490-385e-4bd0-94eb-415d4abdb920"). InnerVolumeSpecName "node-pullsecrets". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Dec 08 17:57:11 crc kubenswrapper[5116]: I1208 17:57:11.669185 5116 reconciler_common.go:299] "Volume detached for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/1497f490-385e-4bd0-94eb-415d4abdb920-buildcachedir\") on node \"crc\" DevicePath \"\""
Dec 08 17:57:11 crc kubenswrapper[5116]: I1208 17:57:11.669211 5116 reconciler_common.go:299] "Volume detached for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/1497f490-385e-4bd0-94eb-415d4abdb920-node-pullsecrets\") on node \"crc\" DevicePath \"\""
Dec 08 17:57:11 crc kubenswrapper[5116]: I1208 17:57:11.669225 5116 reconciler_common.go:299] "Volume detached for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/1497f490-385e-4bd0-94eb-415d4abdb920-build-ca-bundles\") on node \"crc\" DevicePath \"\""
Dec 08 17:57:11 crc kubenswrapper[5116]: I1208 17:57:11.669231 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1497f490-385e-4bd0-94eb-415d4abdb920-container-storage-run" (OuterVolumeSpecName: "container-storage-run") pod "1497f490-385e-4bd0-94eb-415d4abdb920" (UID: "1497f490-385e-4bd0-94eb-415d4abdb920"). InnerVolumeSpecName "container-storage-run". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Dec 08 17:57:11 crc kubenswrapper[5116]: I1208 17:57:11.669254 5116 reconciler_common.go:299] "Volume detached for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/1497f490-385e-4bd0-94eb-415d4abdb920-build-system-configs\") on node \"crc\" DevicePath \"\""
Dec 08 17:57:11 crc kubenswrapper[5116]: I1208 17:57:11.669311 5116 reconciler_common.go:299] "Volume detached for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/1497f490-385e-4bd0-94eb-415d4abdb920-build-proxy-ca-bundles\") on node \"crc\" DevicePath \"\""
Dec 08 17:57:11 crc kubenswrapper[5116]: I1208 17:57:11.669313 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1497f490-385e-4bd0-94eb-415d4abdb920-container-storage-root" (OuterVolumeSpecName: "container-storage-root") pod "1497f490-385e-4bd0-94eb-415d4abdb920" (UID: "1497f490-385e-4bd0-94eb-415d4abdb920"). InnerVolumeSpecName "container-storage-root". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Dec 08 17:57:11 crc kubenswrapper[5116]: I1208 17:57:11.669444 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1497f490-385e-4bd0-94eb-415d4abdb920-buildworkdir" (OuterVolumeSpecName: "buildworkdir") pod "1497f490-385e-4bd0-94eb-415d4abdb920" (UID: "1497f490-385e-4bd0-94eb-415d4abdb920"). InnerVolumeSpecName "buildworkdir". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Dec 08 17:57:11 crc kubenswrapper[5116]: I1208 17:57:11.670162 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1497f490-385e-4bd0-94eb-415d4abdb920-build-blob-cache" (OuterVolumeSpecName: "build-blob-cache") pod "1497f490-385e-4bd0-94eb-415d4abdb920" (UID: "1497f490-385e-4bd0-94eb-415d4abdb920"). InnerVolumeSpecName "build-blob-cache". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Dec 08 17:57:11 crc kubenswrapper[5116]: I1208 17:57:11.675189 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1497f490-385e-4bd0-94eb-415d4abdb920-service-telemetry-framework-index-dockercfg-user-build-volume" (OuterVolumeSpecName: "service-telemetry-framework-index-dockercfg-user-build-volume") pod "1497f490-385e-4bd0-94eb-415d4abdb920" (UID: "1497f490-385e-4bd0-94eb-415d4abdb920"). InnerVolumeSpecName "service-telemetry-framework-index-dockercfg-user-build-volume". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Dec 08 17:57:11 crc kubenswrapper[5116]: I1208 17:57:11.675291 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1497f490-385e-4bd0-94eb-415d4abdb920-builder-dockercfg-49c42-pull" (OuterVolumeSpecName: "builder-dockercfg-49c42-pull") pod "1497f490-385e-4bd0-94eb-415d4abdb920" (UID: "1497f490-385e-4bd0-94eb-415d4abdb920"). InnerVolumeSpecName "builder-dockercfg-49c42-pull". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Dec 08 17:57:11 crc kubenswrapper[5116]: I1208 17:57:11.675581 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1497f490-385e-4bd0-94eb-415d4abdb920-kube-api-access-jccnd" (OuterVolumeSpecName: "kube-api-access-jccnd") pod "1497f490-385e-4bd0-94eb-415d4abdb920" (UID: "1497f490-385e-4bd0-94eb-415d4abdb920"). InnerVolumeSpecName "kube-api-access-jccnd". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Dec 08 17:57:11 crc kubenswrapper[5116]: I1208 17:57:11.675620 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1497f490-385e-4bd0-94eb-415d4abdb920-builder-dockercfg-49c42-push" (OuterVolumeSpecName: "builder-dockercfg-49c42-push") pod "1497f490-385e-4bd0-94eb-415d4abdb920" (UID: "1497f490-385e-4bd0-94eb-415d4abdb920"). InnerVolumeSpecName "builder-dockercfg-49c42-push". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Dec 08 17:57:11 crc kubenswrapper[5116]: I1208 17:57:11.770929 5116 reconciler_common.go:299] "Volume detached for volume \"service-telemetry-framework-index-dockercfg-user-build-volume\" (UniqueName: \"kubernetes.io/secret/1497f490-385e-4bd0-94eb-415d4abdb920-service-telemetry-framework-index-dockercfg-user-build-volume\") on node \"crc\" DevicePath \"\""
Dec 08 17:57:11 crc kubenswrapper[5116]: I1208 17:57:11.770970 5116 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-jccnd\" (UniqueName: \"kubernetes.io/projected/1497f490-385e-4bd0-94eb-415d4abdb920-kube-api-access-jccnd\") on node \"crc\" DevicePath \"\""
Dec 08 17:57:11 crc kubenswrapper[5116]: I1208 17:57:11.770980 5116 reconciler_common.go:299] "Volume detached for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/1497f490-385e-4bd0-94eb-415d4abdb920-buildworkdir\") on node \"crc\" DevicePath \"\""
Dec 08 17:57:11 crc kubenswrapper[5116]: I1208 17:57:11.770999 5116 reconciler_common.go:299] "Volume detached for volume \"builder-dockercfg-49c42-pull\" (UniqueName: \"kubernetes.io/secret/1497f490-385e-4bd0-94eb-415d4abdb920-builder-dockercfg-49c42-pull\") on node \"crc\" DevicePath \"\""
Dec 08 17:57:11 crc kubenswrapper[5116]: I1208 17:57:11.771008 5116 reconciler_common.go:299] "Volume detached for volume \"builder-dockercfg-49c42-push\" (UniqueName: \"kubernetes.io/secret/1497f490-385e-4bd0-94eb-415d4abdb920-builder-dockercfg-49c42-push\") on node \"crc\" DevicePath \"\""
Dec 08 17:57:11 crc kubenswrapper[5116]: I1208 17:57:11.771017 5116 reconciler_common.go:299] "Volume detached for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/1497f490-385e-4bd0-94eb-415d4abdb920-build-blob-cache\") on node \"crc\" DevicePath \"\""
Dec 08 17:57:11 crc kubenswrapper[5116]: I1208 17:57:11.771025 5116 reconciler_common.go:299] "Volume detached for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/1497f490-385e-4bd0-94eb-415d4abdb920-container-storage-run\") on node \"crc\" DevicePath \"\""
Dec 08 17:57:11 crc kubenswrapper[5116]: I1208 17:57:11.771033 5116 reconciler_common.go:299] "Volume detached for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/1497f490-385e-4bd0-94eb-415d4abdb920-container-storage-root\") on node \"crc\" DevicePath \"\""
Dec 08 17:57:12 crc kubenswrapper[5116]: I1208 17:57:12.101776 5116 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_service-telemetry-framework-index-1-build_1497f490-385e-4bd0-94eb-415d4abdb920/git-clone/0.log"
Dec 08 17:57:12 crc kubenswrapper[5116]: I1208 17:57:12.101942 5116 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/service-telemetry-framework-index-1-build"
Dec 08 17:57:12 crc kubenswrapper[5116]: I1208 17:57:12.101978 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-framework-index-1-build" event={"ID":"1497f490-385e-4bd0-94eb-415d4abdb920","Type":"ContainerDied","Data":"8a36963b8f8ce168926e05c9026d20c476d2e8f21069f50675e34ee47db339af"}
Dec 08 17:57:12 crc kubenswrapper[5116]: I1208 17:57:12.102040 5116 scope.go:117] "RemoveContainer" containerID="2f89fbb24be1732e51878028400637f8ec564ae1624b51fbd9aa1f8a843d73eb"
Dec 08 17:57:12 crc kubenswrapper[5116]: I1208 17:57:12.163229 5116 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["service-telemetry/service-telemetry-framework-index-1-build"]
Dec 08 17:57:12 crc kubenswrapper[5116]: I1208 17:57:12.169383 5116 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["service-telemetry/service-telemetry-framework-index-1-build"]
Dec 08 17:57:12 crc kubenswrapper[5116]: I1208 17:57:12.689130 5116 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1497f490-385e-4bd0-94eb-415d4abdb920" path="/var/lib/kubelet/pods/1497f490-385e-4bd0-94eb-415d4abdb920/volumes"
Dec 08 17:57:13 crc kubenswrapper[5116]: I1208 17:57:13.697559 5116 prober.go:120] "Probe failed" probeType="Readiness" pod="service-telemetry/elasticsearch-es-default-0" podUID="aedcd631-92d4-43b5-bea3-b25e0687dcc5" containerName="elasticsearch" probeResult="failure" output=<
Dec 08 17:57:13 crc kubenswrapper[5116]: {"timestamp": "2025-12-08T17:57:13+00:00", "message": "readiness probe failed", "curl_rc": "7"}
Dec 08 17:57:13 crc kubenswrapper[5116]: >
Dec 08 17:57:15 crc kubenswrapper[5116]: I1208 17:57:15.732513 5116 patch_prober.go:28] interesting pod/openshift-config-operator-5777786469-86sn8 container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.217.0.9:8443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body=
Dec 08 17:57:15 crc kubenswrapper[5116]: I1208 17:57:15.732903 5116 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-5777786469-86sn8" podUID="71d475ea-b97a-489a-8c80-1a30614dccb5" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.9:8443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Dec 08 17:57:19 crc kubenswrapper[5116]: I1208 17:57:19.229192 5116 prober.go:120] "Probe failed" probeType="Readiness" pod="service-telemetry/elasticsearch-es-default-0" podUID="aedcd631-92d4-43b5-bea3-b25e0687dcc5" containerName="elasticsearch" probeResult="failure" output=<
Dec 08 17:57:19 crc kubenswrapper[5116]: {"timestamp": "2025-12-08T17:57:19+00:00", "message": "readiness probe failed", "curl_rc": "7"}
Dec 08 17:57:19 crc kubenswrapper[5116]: >
Dec 08 17:57:20 crc kubenswrapper[5116]: I1208 17:57:20.712113 5116 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/service-telemetry-framework-index-2-build"]
Dec 08 17:57:20 crc kubenswrapper[5116]: I1208 17:57:20.713081 5116 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="1497f490-385e-4bd0-94eb-415d4abdb920" containerName="git-clone"
Dec 08 17:57:20 crc kubenswrapper[5116]: I1208 17:57:20.713096 5116 state_mem.go:107] "Deleted CPUSet assignment" podUID="1497f490-385e-4bd0-94eb-415d4abdb920" containerName="git-clone"
Dec 08 17:57:20 crc kubenswrapper[5116]: I1208 17:57:20.713209 5116 memory_manager.go:356] "RemoveStaleState removing state" podUID="1497f490-385e-4bd0-94eb-415d4abdb920" containerName="git-clone"
Dec 08 17:57:20 crc kubenswrapper[5116]: I1208 17:57:20.717401 5116 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/service-telemetry-framework-index-2-build"
Dec 08 17:57:20 crc kubenswrapper[5116]: I1208 17:57:20.720752 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"service-telemetry-framework-index-dockercfg\""
Dec 08 17:57:20 crc kubenswrapper[5116]: I1208 17:57:20.721208 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"service-telemetry-framework-index-2-ca\""
Dec 08 17:57:20 crc kubenswrapper[5116]: I1208 17:57:20.721451 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"service-telemetry-framework-index-2-sys-config\""
Dec 08 17:57:20 crc kubenswrapper[5116]: I1208 17:57:20.723033 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"builder-dockercfg-49c42\""
Dec 08 17:57:20 crc kubenswrapper[5116]: I1208 17:57:20.723797 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"service-telemetry-framework-index-2-global-ca\""
Dec 08 17:57:20 crc kubenswrapper[5116]: I1208 17:57:20.890232 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/8a97ed26-ba4b-45f4-a08c-32c1d714f61b-build-ca-bundles\") pod \"service-telemetry-framework-index-2-build\" (UID: \"8a97ed26-ba4b-45f4-a08c-32c1d714f61b\") " pod="service-telemetry/service-telemetry-framework-index-2-build"
Dec 08 17:57:20 crc kubenswrapper[5116]: I1208 17:57:20.892024 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-49c42-push\" (UniqueName: \"kubernetes.io/secret/8a97ed26-ba4b-45f4-a08c-32c1d714f61b-builder-dockercfg-49c42-push\") pod \"service-telemetry-framework-index-2-build\" (UID: \"8a97ed26-ba4b-45f4-a08c-32c1d714f61b\") " pod="service-telemetry/service-telemetry-framework-index-2-build"
Dec 08 17:57:20 crc kubenswrapper[5116]: I1208 17:57:20.892076 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/8a97ed26-ba4b-45f4-a08c-32c1d714f61b-build-proxy-ca-bundles\") pod \"service-telemetry-framework-index-2-build\" (UID: \"8a97ed26-ba4b-45f4-a08c-32c1d714f61b\") " pod="service-telemetry/service-telemetry-framework-index-2-build"
Dec 08 17:57:20 crc kubenswrapper[5116]: I1208 17:57:20.892100 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/8a97ed26-ba4b-45f4-a08c-32c1d714f61b-buildworkdir\") pod \"service-telemetry-framework-index-2-build\" (UID: \"8a97ed26-ba4b-45f4-a08c-32c1d714f61b\") " pod="service-telemetry/service-telemetry-framework-index-2-build"
Dec 08 17:57:20 crc kubenswrapper[5116]: I1208 17:57:20.892233 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/8a97ed26-ba4b-45f4-a08c-32c1d714f61b-container-storage-run\") pod \"service-telemetry-framework-index-2-build\" (UID: \"8a97ed26-ba4b-45f4-a08c-32c1d714f61b\")
" pod="service-telemetry/service-telemetry-framework-index-2-build" Dec 08 17:57:20 crc kubenswrapper[5116]: I1208 17:57:20.892337 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-telemetry-framework-index-dockercfg-user-build-volume\" (UniqueName: \"kubernetes.io/secret/8a97ed26-ba4b-45f4-a08c-32c1d714f61b-service-telemetry-framework-index-dockercfg-user-build-volume\") pod \"service-telemetry-framework-index-2-build\" (UID: \"8a97ed26-ba4b-45f4-a08c-32c1d714f61b\") " pod="service-telemetry/service-telemetry-framework-index-2-build" Dec 08 17:57:20 crc kubenswrapper[5116]: I1208 17:57:20.892369 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/8a97ed26-ba4b-45f4-a08c-32c1d714f61b-build-system-configs\") pod \"service-telemetry-framework-index-2-build\" (UID: \"8a97ed26-ba4b-45f4-a08c-32c1d714f61b\") " pod="service-telemetry/service-telemetry-framework-index-2-build" Dec 08 17:57:20 crc kubenswrapper[5116]: I1208 17:57:20.892390 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-49c42-pull\" (UniqueName: \"kubernetes.io/secret/8a97ed26-ba4b-45f4-a08c-32c1d714f61b-builder-dockercfg-49c42-pull\") pod \"service-telemetry-framework-index-2-build\" (UID: \"8a97ed26-ba4b-45f4-a08c-32c1d714f61b\") " pod="service-telemetry/service-telemetry-framework-index-2-build" Dec 08 17:57:20 crc kubenswrapper[5116]: I1208 17:57:20.892448 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/8a97ed26-ba4b-45f4-a08c-32c1d714f61b-container-storage-root\") pod \"service-telemetry-framework-index-2-build\" (UID: \"8a97ed26-ba4b-45f4-a08c-32c1d714f61b\") " pod="service-telemetry/service-telemetry-framework-index-2-build" Dec 08 17:57:20 crc 
kubenswrapper[5116]: I1208 17:57:20.892482 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/8a97ed26-ba4b-45f4-a08c-32c1d714f61b-build-blob-cache\") pod \"service-telemetry-framework-index-2-build\" (UID: \"8a97ed26-ba4b-45f4-a08c-32c1d714f61b\") " pod="service-telemetry/service-telemetry-framework-index-2-build" Dec 08 17:57:20 crc kubenswrapper[5116]: I1208 17:57:20.892506 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/8a97ed26-ba4b-45f4-a08c-32c1d714f61b-buildcachedir\") pod \"service-telemetry-framework-index-2-build\" (UID: \"8a97ed26-ba4b-45f4-a08c-32c1d714f61b\") " pod="service-telemetry/service-telemetry-framework-index-2-build" Dec 08 17:57:20 crc kubenswrapper[5116]: I1208 17:57:20.892534 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fdn4c\" (UniqueName: \"kubernetes.io/projected/8a97ed26-ba4b-45f4-a08c-32c1d714f61b-kube-api-access-fdn4c\") pod \"service-telemetry-framework-index-2-build\" (UID: \"8a97ed26-ba4b-45f4-a08c-32c1d714f61b\") " pod="service-telemetry/service-telemetry-framework-index-2-build" Dec 08 17:57:20 crc kubenswrapper[5116]: I1208 17:57:20.892584 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/8a97ed26-ba4b-45f4-a08c-32c1d714f61b-node-pullsecrets\") pod \"service-telemetry-framework-index-2-build\" (UID: \"8a97ed26-ba4b-45f4-a08c-32c1d714f61b\") " pod="service-telemetry/service-telemetry-framework-index-2-build" Dec 08 17:57:20 crc kubenswrapper[5116]: I1208 17:57:20.900660 5116 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/service-telemetry-framework-index-2-build"] Dec 08 17:57:20 crc kubenswrapper[5116]: I1208 
17:57:20.994027 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/8a97ed26-ba4b-45f4-a08c-32c1d714f61b-container-storage-root\") pod \"service-telemetry-framework-index-2-build\" (UID: \"8a97ed26-ba4b-45f4-a08c-32c1d714f61b\") " pod="service-telemetry/service-telemetry-framework-index-2-build" Dec 08 17:57:20 crc kubenswrapper[5116]: I1208 17:57:20.994087 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/8a97ed26-ba4b-45f4-a08c-32c1d714f61b-build-blob-cache\") pod \"service-telemetry-framework-index-2-build\" (UID: \"8a97ed26-ba4b-45f4-a08c-32c1d714f61b\") " pod="service-telemetry/service-telemetry-framework-index-2-build" Dec 08 17:57:20 crc kubenswrapper[5116]: I1208 17:57:20.994114 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/8a97ed26-ba4b-45f4-a08c-32c1d714f61b-buildcachedir\") pod \"service-telemetry-framework-index-2-build\" (UID: \"8a97ed26-ba4b-45f4-a08c-32c1d714f61b\") " pod="service-telemetry/service-telemetry-framework-index-2-build" Dec 08 17:57:20 crc kubenswrapper[5116]: I1208 17:57:20.994187 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-fdn4c\" (UniqueName: \"kubernetes.io/projected/8a97ed26-ba4b-45f4-a08c-32c1d714f61b-kube-api-access-fdn4c\") pod \"service-telemetry-framework-index-2-build\" (UID: \"8a97ed26-ba4b-45f4-a08c-32c1d714f61b\") " pod="service-telemetry/service-telemetry-framework-index-2-build" Dec 08 17:57:20 crc kubenswrapper[5116]: I1208 17:57:20.994299 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/8a97ed26-ba4b-45f4-a08c-32c1d714f61b-buildcachedir\") pod \"service-telemetry-framework-index-2-build\" (UID: 
\"8a97ed26-ba4b-45f4-a08c-32c1d714f61b\") " pod="service-telemetry/service-telemetry-framework-index-2-build" Dec 08 17:57:20 crc kubenswrapper[5116]: I1208 17:57:20.994534 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/8a97ed26-ba4b-45f4-a08c-32c1d714f61b-node-pullsecrets\") pod \"service-telemetry-framework-index-2-build\" (UID: \"8a97ed26-ba4b-45f4-a08c-32c1d714f61b\") " pod="service-telemetry/service-telemetry-framework-index-2-build" Dec 08 17:57:20 crc kubenswrapper[5116]: I1208 17:57:20.994220 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/8a97ed26-ba4b-45f4-a08c-32c1d714f61b-node-pullsecrets\") pod \"service-telemetry-framework-index-2-build\" (UID: \"8a97ed26-ba4b-45f4-a08c-32c1d714f61b\") " pod="service-telemetry/service-telemetry-framework-index-2-build" Dec 08 17:57:20 crc kubenswrapper[5116]: I1208 17:57:20.994737 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/8a97ed26-ba4b-45f4-a08c-32c1d714f61b-build-ca-bundles\") pod \"service-telemetry-framework-index-2-build\" (UID: \"8a97ed26-ba4b-45f4-a08c-32c1d714f61b\") " pod="service-telemetry/service-telemetry-framework-index-2-build" Dec 08 17:57:20 crc kubenswrapper[5116]: I1208 17:57:20.994770 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"builder-dockercfg-49c42-push\" (UniqueName: \"kubernetes.io/secret/8a97ed26-ba4b-45f4-a08c-32c1d714f61b-builder-dockercfg-49c42-push\") pod \"service-telemetry-framework-index-2-build\" (UID: \"8a97ed26-ba4b-45f4-a08c-32c1d714f61b\") " pod="service-telemetry/service-telemetry-framework-index-2-build" Dec 08 17:57:20 crc kubenswrapper[5116]: I1208 17:57:20.995821 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume 
\"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/8a97ed26-ba4b-45f4-a08c-32c1d714f61b-build-proxy-ca-bundles\") pod \"service-telemetry-framework-index-2-build\" (UID: \"8a97ed26-ba4b-45f4-a08c-32c1d714f61b\") " pod="service-telemetry/service-telemetry-framework-index-2-build" Dec 08 17:57:20 crc kubenswrapper[5116]: I1208 17:57:20.996374 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/8a97ed26-ba4b-45f4-a08c-32c1d714f61b-buildworkdir\") pod \"service-telemetry-framework-index-2-build\" (UID: \"8a97ed26-ba4b-45f4-a08c-32c1d714f61b\") " pod="service-telemetry/service-telemetry-framework-index-2-build" Dec 08 17:57:20 crc kubenswrapper[5116]: I1208 17:57:20.995218 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/8a97ed26-ba4b-45f4-a08c-32c1d714f61b-build-blob-cache\") pod \"service-telemetry-framework-index-2-build\" (UID: \"8a97ed26-ba4b-45f4-a08c-32c1d714f61b\") " pod="service-telemetry/service-telemetry-framework-index-2-build" Dec 08 17:57:20 crc kubenswrapper[5116]: I1208 17:57:20.994954 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/8a97ed26-ba4b-45f4-a08c-32c1d714f61b-container-storage-root\") pod \"service-telemetry-framework-index-2-build\" (UID: \"8a97ed26-ba4b-45f4-a08c-32c1d714f61b\") " pod="service-telemetry/service-telemetry-framework-index-2-build" Dec 08 17:57:20 crc kubenswrapper[5116]: I1208 17:57:20.996330 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/8a97ed26-ba4b-45f4-a08c-32c1d714f61b-build-proxy-ca-bundles\") pod \"service-telemetry-framework-index-2-build\" (UID: \"8a97ed26-ba4b-45f4-a08c-32c1d714f61b\") " pod="service-telemetry/service-telemetry-framework-index-2-build" Dec 
08 17:57:20 crc kubenswrapper[5116]: I1208 17:57:20.995745 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/8a97ed26-ba4b-45f4-a08c-32c1d714f61b-build-ca-bundles\") pod \"service-telemetry-framework-index-2-build\" (UID: \"8a97ed26-ba4b-45f4-a08c-32c1d714f61b\") " pod="service-telemetry/service-telemetry-framework-index-2-build" Dec 08 17:57:20 crc kubenswrapper[5116]: I1208 17:57:20.996697 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/8a97ed26-ba4b-45f4-a08c-32c1d714f61b-container-storage-run\") pod \"service-telemetry-framework-index-2-build\" (UID: \"8a97ed26-ba4b-45f4-a08c-32c1d714f61b\") " pod="service-telemetry/service-telemetry-framework-index-2-build" Dec 08 17:57:20 crc kubenswrapper[5116]: I1208 17:57:20.996781 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"service-telemetry-framework-index-dockercfg-user-build-volume\" (UniqueName: \"kubernetes.io/secret/8a97ed26-ba4b-45f4-a08c-32c1d714f61b-service-telemetry-framework-index-dockercfg-user-build-volume\") pod \"service-telemetry-framework-index-2-build\" (UID: \"8a97ed26-ba4b-45f4-a08c-32c1d714f61b\") " pod="service-telemetry/service-telemetry-framework-index-2-build" Dec 08 17:57:20 crc kubenswrapper[5116]: I1208 17:57:20.996852 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/8a97ed26-ba4b-45f4-a08c-32c1d714f61b-build-system-configs\") pod \"service-telemetry-framework-index-2-build\" (UID: \"8a97ed26-ba4b-45f4-a08c-32c1d714f61b\") " pod="service-telemetry/service-telemetry-framework-index-2-build" Dec 08 17:57:20 crc kubenswrapper[5116]: I1208 17:57:20.997496 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"builder-dockercfg-49c42-pull\" (UniqueName: 
\"kubernetes.io/secret/8a97ed26-ba4b-45f4-a08c-32c1d714f61b-builder-dockercfg-49c42-pull\") pod \"service-telemetry-framework-index-2-build\" (UID: \"8a97ed26-ba4b-45f4-a08c-32c1d714f61b\") " pod="service-telemetry/service-telemetry-framework-index-2-build" Dec 08 17:57:20 crc kubenswrapper[5116]: I1208 17:57:20.997565 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/8a97ed26-ba4b-45f4-a08c-32c1d714f61b-build-system-configs\") pod \"service-telemetry-framework-index-2-build\" (UID: \"8a97ed26-ba4b-45f4-a08c-32c1d714f61b\") " pod="service-telemetry/service-telemetry-framework-index-2-build" Dec 08 17:57:20 crc kubenswrapper[5116]: I1208 17:57:20.996981 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/8a97ed26-ba4b-45f4-a08c-32c1d714f61b-container-storage-run\") pod \"service-telemetry-framework-index-2-build\" (UID: \"8a97ed26-ba4b-45f4-a08c-32c1d714f61b\") " pod="service-telemetry/service-telemetry-framework-index-2-build" Dec 08 17:57:20 crc kubenswrapper[5116]: I1208 17:57:20.997087 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/8a97ed26-ba4b-45f4-a08c-32c1d714f61b-buildworkdir\") pod \"service-telemetry-framework-index-2-build\" (UID: \"8a97ed26-ba4b-45f4-a08c-32c1d714f61b\") " pod="service-telemetry/service-telemetry-framework-index-2-build" Dec 08 17:57:21 crc kubenswrapper[5116]: I1208 17:57:21.001231 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-49c42-pull\" (UniqueName: \"kubernetes.io/secret/8a97ed26-ba4b-45f4-a08c-32c1d714f61b-builder-dockercfg-49c42-pull\") pod \"service-telemetry-framework-index-2-build\" (UID: \"8a97ed26-ba4b-45f4-a08c-32c1d714f61b\") " pod="service-telemetry/service-telemetry-framework-index-2-build" Dec 08 17:57:21 crc 
kubenswrapper[5116]: I1208 17:57:21.002898 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-49c42-push\" (UniqueName: \"kubernetes.io/secret/8a97ed26-ba4b-45f4-a08c-32c1d714f61b-builder-dockercfg-49c42-push\") pod \"service-telemetry-framework-index-2-build\" (UID: \"8a97ed26-ba4b-45f4-a08c-32c1d714f61b\") " pod="service-telemetry/service-telemetry-framework-index-2-build" Dec 08 17:57:21 crc kubenswrapper[5116]: I1208 17:57:21.003429 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"service-telemetry-framework-index-dockercfg-user-build-volume\" (UniqueName: \"kubernetes.io/secret/8a97ed26-ba4b-45f4-a08c-32c1d714f61b-service-telemetry-framework-index-dockercfg-user-build-volume\") pod \"service-telemetry-framework-index-2-build\" (UID: \"8a97ed26-ba4b-45f4-a08c-32c1d714f61b\") " pod="service-telemetry/service-telemetry-framework-index-2-build" Dec 08 17:57:21 crc kubenswrapper[5116]: I1208 17:57:21.019397 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-fdn4c\" (UniqueName: \"kubernetes.io/projected/8a97ed26-ba4b-45f4-a08c-32c1d714f61b-kube-api-access-fdn4c\") pod \"service-telemetry-framework-index-2-build\" (UID: \"8a97ed26-ba4b-45f4-a08c-32c1d714f61b\") " pod="service-telemetry/service-telemetry-framework-index-2-build" Dec 08 17:57:21 crc kubenswrapper[5116]: I1208 17:57:21.197326 5116 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/service-telemetry-framework-index-2-build" Dec 08 17:57:21 crc kubenswrapper[5116]: I1208 17:57:21.753619 5116 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/service-telemetry-framework-index-2-build"] Dec 08 17:57:22 crc kubenswrapper[5116]: I1208 17:57:22.342546 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-framework-index-2-build" event={"ID":"8a97ed26-ba4b-45f4-a08c-32c1d714f61b","Type":"ContainerStarted","Data":"49a0942a1d79ae0d0928c214c44e653fcb37b820f9d2dc55d8c9694a77feb4c2"} Dec 08 17:57:23 crc kubenswrapper[5116]: I1208 17:57:23.351504 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-framework-index-2-build" event={"ID":"8a97ed26-ba4b-45f4-a08c-32c1d714f61b","Type":"ContainerStarted","Data":"2765f6a9f78bbb325ff0e7d52f22d6ca4a94175dea598879efacd533bc5b34f9"} Dec 08 17:57:23 crc kubenswrapper[5116]: I1208 17:57:23.409595 5116 ???:1] "http: TLS handshake error from 192.168.126.11:42366: no serving certificate available for the kubelet" Dec 08 17:57:24 crc kubenswrapper[5116]: I1208 17:57:24.440606 5116 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["service-telemetry/service-telemetry-framework-index-2-build"] Dec 08 17:57:24 crc kubenswrapper[5116]: I1208 17:57:24.497346 5116 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="service-telemetry/elasticsearch-es-default-0" Dec 08 17:57:25 crc kubenswrapper[5116]: I1208 17:57:25.362811 5116 kuberuntime_container.go:858] "Killing container with a grace period" pod="service-telemetry/service-telemetry-framework-index-2-build" podUID="8a97ed26-ba4b-45f4-a08c-32c1d714f61b" containerName="git-clone" containerID="cri-o://2765f6a9f78bbb325ff0e7d52f22d6ca4a94175dea598879efacd533bc5b34f9" gracePeriod=30 Dec 08 17:57:25 crc kubenswrapper[5116]: I1208 17:57:25.797980 5116 log.go:25] "Finished parsing log file" 
path="/var/log/pods/service-telemetry_service-telemetry-framework-index-2-build_8a97ed26-ba4b-45f4-a08c-32c1d714f61b/git-clone/0.log" Dec 08 17:57:25 crc kubenswrapper[5116]: I1208 17:57:25.798327 5116 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/service-telemetry-framework-index-2-build" Dec 08 17:57:25 crc kubenswrapper[5116]: I1208 17:57:25.816654 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/8a97ed26-ba4b-45f4-a08c-32c1d714f61b-container-storage-root\") pod \"8a97ed26-ba4b-45f4-a08c-32c1d714f61b\" (UID: \"8a97ed26-ba4b-45f4-a08c-32c1d714f61b\") " Dec 08 17:57:25 crc kubenswrapper[5116]: I1208 17:57:25.816948 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/8a97ed26-ba4b-45f4-a08c-32c1d714f61b-build-ca-bundles\") pod \"8a97ed26-ba4b-45f4-a08c-32c1d714f61b\" (UID: \"8a97ed26-ba4b-45f4-a08c-32c1d714f61b\") " Dec 08 17:57:25 crc kubenswrapper[5116]: I1208 17:57:25.817035 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8a97ed26-ba4b-45f4-a08c-32c1d714f61b-container-storage-root" (OuterVolumeSpecName: "container-storage-root") pod "8a97ed26-ba4b-45f4-a08c-32c1d714f61b" (UID: "8a97ed26-ba4b-45f4-a08c-32c1d714f61b"). InnerVolumeSpecName "container-storage-root". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 08 17:57:25 crc kubenswrapper[5116]: I1208 17:57:25.817063 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/8a97ed26-ba4b-45f4-a08c-32c1d714f61b-buildcachedir\") pod \"8a97ed26-ba4b-45f4-a08c-32c1d714f61b\" (UID: \"8a97ed26-ba4b-45f4-a08c-32c1d714f61b\") " Dec 08 17:57:25 crc kubenswrapper[5116]: I1208 17:57:25.817188 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8a97ed26-ba4b-45f4-a08c-32c1d714f61b-buildcachedir" (OuterVolumeSpecName: "buildcachedir") pod "8a97ed26-ba4b-45f4-a08c-32c1d714f61b" (UID: "8a97ed26-ba4b-45f4-a08c-32c1d714f61b"). InnerVolumeSpecName "buildcachedir". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Dec 08 17:57:25 crc kubenswrapper[5116]: I1208 17:57:25.817321 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/8a97ed26-ba4b-45f4-a08c-32c1d714f61b-build-proxy-ca-bundles\") pod \"8a97ed26-ba4b-45f4-a08c-32c1d714f61b\" (UID: \"8a97ed26-ba4b-45f4-a08c-32c1d714f61b\") " Dec 08 17:57:25 crc kubenswrapper[5116]: I1208 17:57:25.817431 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fdn4c\" (UniqueName: \"kubernetes.io/projected/8a97ed26-ba4b-45f4-a08c-32c1d714f61b-kube-api-access-fdn4c\") pod \"8a97ed26-ba4b-45f4-a08c-32c1d714f61b\" (UID: \"8a97ed26-ba4b-45f4-a08c-32c1d714f61b\") " Dec 08 17:57:25 crc kubenswrapper[5116]: I1208 17:57:25.817534 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/8a97ed26-ba4b-45f4-a08c-32c1d714f61b-build-blob-cache\") pod \"8a97ed26-ba4b-45f4-a08c-32c1d714f61b\" (UID: \"8a97ed26-ba4b-45f4-a08c-32c1d714f61b\") " Dec 08 17:57:25 crc kubenswrapper[5116]: I1208 
17:57:25.817612 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"service-telemetry-framework-index-dockercfg-user-build-volume\" (UniqueName: \"kubernetes.io/secret/8a97ed26-ba4b-45f4-a08c-32c1d714f61b-service-telemetry-framework-index-dockercfg-user-build-volume\") pod \"8a97ed26-ba4b-45f4-a08c-32c1d714f61b\" (UID: \"8a97ed26-ba4b-45f4-a08c-32c1d714f61b\") " Dec 08 17:57:25 crc kubenswrapper[5116]: I1208 17:57:25.817743 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/8a97ed26-ba4b-45f4-a08c-32c1d714f61b-node-pullsecrets\") pod \"8a97ed26-ba4b-45f4-a08c-32c1d714f61b\" (UID: \"8a97ed26-ba4b-45f4-a08c-32c1d714f61b\") " Dec 08 17:57:25 crc kubenswrapper[5116]: I1208 17:57:25.817858 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/8a97ed26-ba4b-45f4-a08c-32c1d714f61b-buildworkdir\") pod \"8a97ed26-ba4b-45f4-a08c-32c1d714f61b\" (UID: \"8a97ed26-ba4b-45f4-a08c-32c1d714f61b\") " Dec 08 17:57:25 crc kubenswrapper[5116]: I1208 17:57:25.817753 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8a97ed26-ba4b-45f4-a08c-32c1d714f61b-build-blob-cache" (OuterVolumeSpecName: "build-blob-cache") pod "8a97ed26-ba4b-45f4-a08c-32c1d714f61b" (UID: "8a97ed26-ba4b-45f4-a08c-32c1d714f61b"). InnerVolumeSpecName "build-blob-cache". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 08 17:57:25 crc kubenswrapper[5116]: I1208 17:57:25.817977 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8a97ed26-ba4b-45f4-a08c-32c1d714f61b-build-ca-bundles" (OuterVolumeSpecName: "build-ca-bundles") pod "8a97ed26-ba4b-45f4-a08c-32c1d714f61b" (UID: "8a97ed26-ba4b-45f4-a08c-32c1d714f61b"). InnerVolumeSpecName "build-ca-bundles". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 08 17:57:25 crc kubenswrapper[5116]: I1208 17:57:25.818018 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8a97ed26-ba4b-45f4-a08c-32c1d714f61b-node-pullsecrets" (OuterVolumeSpecName: "node-pullsecrets") pod "8a97ed26-ba4b-45f4-a08c-32c1d714f61b" (UID: "8a97ed26-ba4b-45f4-a08c-32c1d714f61b"). InnerVolumeSpecName "node-pullsecrets". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Dec 08 17:57:25 crc kubenswrapper[5116]: I1208 17:57:25.817955 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8a97ed26-ba4b-45f4-a08c-32c1d714f61b-build-proxy-ca-bundles" (OuterVolumeSpecName: "build-proxy-ca-bundles") pod "8a97ed26-ba4b-45f4-a08c-32c1d714f61b" (UID: "8a97ed26-ba4b-45f4-a08c-32c1d714f61b"). InnerVolumeSpecName "build-proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 08 17:57:25 crc kubenswrapper[5116]: I1208 17:57:25.817973 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-49c42-pull\" (UniqueName: \"kubernetes.io/secret/8a97ed26-ba4b-45f4-a08c-32c1d714f61b-builder-dockercfg-49c42-pull\") pod \"8a97ed26-ba4b-45f4-a08c-32c1d714f61b\" (UID: \"8a97ed26-ba4b-45f4-a08c-32c1d714f61b\") " Dec 08 17:57:25 crc kubenswrapper[5116]: I1208 17:57:25.818104 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/8a97ed26-ba4b-45f4-a08c-32c1d714f61b-build-system-configs\") pod \"8a97ed26-ba4b-45f4-a08c-32c1d714f61b\" (UID: \"8a97ed26-ba4b-45f4-a08c-32c1d714f61b\") " Dec 08 17:57:25 crc kubenswrapper[5116]: I1208 17:57:25.818160 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-49c42-push\" (UniqueName: \"kubernetes.io/secret/8a97ed26-ba4b-45f4-a08c-32c1d714f61b-builder-dockercfg-49c42-push\") 
pod \"8a97ed26-ba4b-45f4-a08c-32c1d714f61b\" (UID: \"8a97ed26-ba4b-45f4-a08c-32c1d714f61b\") " Dec 08 17:57:25 crc kubenswrapper[5116]: I1208 17:57:25.818372 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8a97ed26-ba4b-45f4-a08c-32c1d714f61b-buildworkdir" (OuterVolumeSpecName: "buildworkdir") pod "8a97ed26-ba4b-45f4-a08c-32c1d714f61b" (UID: "8a97ed26-ba4b-45f4-a08c-32c1d714f61b"). InnerVolumeSpecName "buildworkdir". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 08 17:57:25 crc kubenswrapper[5116]: I1208 17:57:25.818443 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/8a97ed26-ba4b-45f4-a08c-32c1d714f61b-container-storage-run\") pod \"8a97ed26-ba4b-45f4-a08c-32c1d714f61b\" (UID: \"8a97ed26-ba4b-45f4-a08c-32c1d714f61b\") " Dec 08 17:57:25 crc kubenswrapper[5116]: I1208 17:57:25.818638 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8a97ed26-ba4b-45f4-a08c-32c1d714f61b-build-system-configs" (OuterVolumeSpecName: "build-system-configs") pod "8a97ed26-ba4b-45f4-a08c-32c1d714f61b" (UID: "8a97ed26-ba4b-45f4-a08c-32c1d714f61b"). InnerVolumeSpecName "build-system-configs". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 08 17:57:25 crc kubenswrapper[5116]: I1208 17:57:25.818879 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8a97ed26-ba4b-45f4-a08c-32c1d714f61b-container-storage-run" (OuterVolumeSpecName: "container-storage-run") pod "8a97ed26-ba4b-45f4-a08c-32c1d714f61b" (UID: "8a97ed26-ba4b-45f4-a08c-32c1d714f61b"). InnerVolumeSpecName "container-storage-run". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 08 17:57:25 crc kubenswrapper[5116]: I1208 17:57:25.819033 5116 reconciler_common.go:299] "Volume detached for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/8a97ed26-ba4b-45f4-a08c-32c1d714f61b-node-pullsecrets\") on node \"crc\" DevicePath \"\"" Dec 08 17:57:25 crc kubenswrapper[5116]: I1208 17:57:25.819091 5116 reconciler_common.go:299] "Volume detached for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/8a97ed26-ba4b-45f4-a08c-32c1d714f61b-buildworkdir\") on node \"crc\" DevicePath \"\"" Dec 08 17:57:25 crc kubenswrapper[5116]: I1208 17:57:25.819101 5116 reconciler_common.go:299] "Volume detached for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/8a97ed26-ba4b-45f4-a08c-32c1d714f61b-build-system-configs\") on node \"crc\" DevicePath \"\"" Dec 08 17:57:25 crc kubenswrapper[5116]: I1208 17:57:25.819111 5116 reconciler_common.go:299] "Volume detached for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/8a97ed26-ba4b-45f4-a08c-32c1d714f61b-container-storage-run\") on node \"crc\" DevicePath \"\"" Dec 08 17:57:25 crc kubenswrapper[5116]: I1208 17:57:25.819120 5116 reconciler_common.go:299] "Volume detached for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/8a97ed26-ba4b-45f4-a08c-32c1d714f61b-container-storage-root\") on node \"crc\" DevicePath \"\"" Dec 08 17:57:25 crc kubenswrapper[5116]: I1208 17:57:25.819129 5116 reconciler_common.go:299] "Volume detached for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/8a97ed26-ba4b-45f4-a08c-32c1d714f61b-build-ca-bundles\") on node \"crc\" DevicePath \"\"" Dec 08 17:57:25 crc kubenswrapper[5116]: I1208 17:57:25.819137 5116 reconciler_common.go:299] "Volume detached for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/8a97ed26-ba4b-45f4-a08c-32c1d714f61b-buildcachedir\") on node \"crc\" DevicePath \"\"" Dec 08 17:57:25 crc 
kubenswrapper[5116]: I1208 17:57:25.819146 5116 reconciler_common.go:299] "Volume detached for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/8a97ed26-ba4b-45f4-a08c-32c1d714f61b-build-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Dec 08 17:57:25 crc kubenswrapper[5116]: I1208 17:57:25.819154 5116 reconciler_common.go:299] "Volume detached for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/8a97ed26-ba4b-45f4-a08c-32c1d714f61b-build-blob-cache\") on node \"crc\" DevicePath \"\"" Dec 08 17:57:25 crc kubenswrapper[5116]: I1208 17:57:25.823800 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8a97ed26-ba4b-45f4-a08c-32c1d714f61b-builder-dockercfg-49c42-push" (OuterVolumeSpecName: "builder-dockercfg-49c42-push") pod "8a97ed26-ba4b-45f4-a08c-32c1d714f61b" (UID: "8a97ed26-ba4b-45f4-a08c-32c1d714f61b"). InnerVolumeSpecName "builder-dockercfg-49c42-push". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 08 17:57:25 crc kubenswrapper[5116]: I1208 17:57:25.825942 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8a97ed26-ba4b-45f4-a08c-32c1d714f61b-builder-dockercfg-49c42-pull" (OuterVolumeSpecName: "builder-dockercfg-49c42-pull") pod "8a97ed26-ba4b-45f4-a08c-32c1d714f61b" (UID: "8a97ed26-ba4b-45f4-a08c-32c1d714f61b"). InnerVolumeSpecName "builder-dockercfg-49c42-pull". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 08 17:57:25 crc kubenswrapper[5116]: I1208 17:57:25.831581 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8a97ed26-ba4b-45f4-a08c-32c1d714f61b-service-telemetry-framework-index-dockercfg-user-build-volume" (OuterVolumeSpecName: "service-telemetry-framework-index-dockercfg-user-build-volume") pod "8a97ed26-ba4b-45f4-a08c-32c1d714f61b" (UID: "8a97ed26-ba4b-45f4-a08c-32c1d714f61b"). 
InnerVolumeSpecName "service-telemetry-framework-index-dockercfg-user-build-volume". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 08 17:57:25 crc kubenswrapper[5116]: I1208 17:57:25.832402 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8a97ed26-ba4b-45f4-a08c-32c1d714f61b-kube-api-access-fdn4c" (OuterVolumeSpecName: "kube-api-access-fdn4c") pod "8a97ed26-ba4b-45f4-a08c-32c1d714f61b" (UID: "8a97ed26-ba4b-45f4-a08c-32c1d714f61b"). InnerVolumeSpecName "kube-api-access-fdn4c". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 08 17:57:25 crc kubenswrapper[5116]: I1208 17:57:25.920754 5116 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-fdn4c\" (UniqueName: \"kubernetes.io/projected/8a97ed26-ba4b-45f4-a08c-32c1d714f61b-kube-api-access-fdn4c\") on node \"crc\" DevicePath \"\"" Dec 08 17:57:25 crc kubenswrapper[5116]: I1208 17:57:25.920802 5116 reconciler_common.go:299] "Volume detached for volume \"service-telemetry-framework-index-dockercfg-user-build-volume\" (UniqueName: \"kubernetes.io/secret/8a97ed26-ba4b-45f4-a08c-32c1d714f61b-service-telemetry-framework-index-dockercfg-user-build-volume\") on node \"crc\" DevicePath \"\"" Dec 08 17:57:25 crc kubenswrapper[5116]: I1208 17:57:25.920813 5116 reconciler_common.go:299] "Volume detached for volume \"builder-dockercfg-49c42-pull\" (UniqueName: \"kubernetes.io/secret/8a97ed26-ba4b-45f4-a08c-32c1d714f61b-builder-dockercfg-49c42-pull\") on node \"crc\" DevicePath \"\"" Dec 08 17:57:25 crc kubenswrapper[5116]: I1208 17:57:25.920822 5116 reconciler_common.go:299] "Volume detached for volume \"builder-dockercfg-49c42-push\" (UniqueName: \"kubernetes.io/secret/8a97ed26-ba4b-45f4-a08c-32c1d714f61b-builder-dockercfg-49c42-push\") on node \"crc\" DevicePath \"\"" Dec 08 17:57:26 crc kubenswrapper[5116]: I1208 17:57:26.468853 5116 log.go:25] "Finished parsing log file" 
path="/var/log/pods/service-telemetry_service-telemetry-framework-index-2-build_8a97ed26-ba4b-45f4-a08c-32c1d714f61b/git-clone/0.log" Dec 08 17:57:26 crc kubenswrapper[5116]: I1208 17:57:26.468927 5116 generic.go:358] "Generic (PLEG): container finished" podID="8a97ed26-ba4b-45f4-a08c-32c1d714f61b" containerID="2765f6a9f78bbb325ff0e7d52f22d6ca4a94175dea598879efacd533bc5b34f9" exitCode=1 Dec 08 17:57:26 crc kubenswrapper[5116]: I1208 17:57:26.469030 5116 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/service-telemetry-framework-index-2-build" Dec 08 17:57:26 crc kubenswrapper[5116]: I1208 17:57:26.469043 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-framework-index-2-build" event={"ID":"8a97ed26-ba4b-45f4-a08c-32c1d714f61b","Type":"ContainerDied","Data":"2765f6a9f78bbb325ff0e7d52f22d6ca4a94175dea598879efacd533bc5b34f9"} Dec 08 17:57:26 crc kubenswrapper[5116]: I1208 17:57:26.469104 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-framework-index-2-build" event={"ID":"8a97ed26-ba4b-45f4-a08c-32c1d714f61b","Type":"ContainerDied","Data":"49a0942a1d79ae0d0928c214c44e653fcb37b820f9d2dc55d8c9694a77feb4c2"} Dec 08 17:57:26 crc kubenswrapper[5116]: I1208 17:57:26.469128 5116 scope.go:117] "RemoveContainer" containerID="2765f6a9f78bbb325ff0e7d52f22d6ca4a94175dea598879efacd533bc5b34f9" Dec 08 17:57:26 crc kubenswrapper[5116]: I1208 17:57:26.496734 5116 scope.go:117] "RemoveContainer" containerID="2765f6a9f78bbb325ff0e7d52f22d6ca4a94175dea598879efacd533bc5b34f9" Dec 08 17:57:26 crc kubenswrapper[5116]: E1208 17:57:26.499880 5116 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2765f6a9f78bbb325ff0e7d52f22d6ca4a94175dea598879efacd533bc5b34f9\": container with ID starting with 2765f6a9f78bbb325ff0e7d52f22d6ca4a94175dea598879efacd533bc5b34f9 not found: ID does 
not exist" containerID="2765f6a9f78bbb325ff0e7d52f22d6ca4a94175dea598879efacd533bc5b34f9" Dec 08 17:57:26 crc kubenswrapper[5116]: I1208 17:57:26.499942 5116 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2765f6a9f78bbb325ff0e7d52f22d6ca4a94175dea598879efacd533bc5b34f9"} err="failed to get container status \"2765f6a9f78bbb325ff0e7d52f22d6ca4a94175dea598879efacd533bc5b34f9\": rpc error: code = NotFound desc = could not find container \"2765f6a9f78bbb325ff0e7d52f22d6ca4a94175dea598879efacd533bc5b34f9\": container with ID starting with 2765f6a9f78bbb325ff0e7d52f22d6ca4a94175dea598879efacd533bc5b34f9 not found: ID does not exist" Dec 08 17:57:26 crc kubenswrapper[5116]: I1208 17:57:26.503922 5116 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["service-telemetry/service-telemetry-framework-index-2-build"] Dec 08 17:57:26 crc kubenswrapper[5116]: I1208 17:57:26.516013 5116 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["service-telemetry/service-telemetry-framework-index-2-build"] Dec 08 17:57:26 crc kubenswrapper[5116]: E1208 17:57:26.601026 5116 cadvisor_stats_provider.go:525] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8a97ed26_ba4b_45f4_a08c_32c1d714f61b.slice/crio-49a0942a1d79ae0d0928c214c44e653fcb37b820f9d2dc55d8c9694a77feb4c2\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8a97ed26_ba4b_45f4_a08c_32c1d714f61b.slice\": RecentStats: unable to find data in memory cache]" Dec 08 17:57:26 crc kubenswrapper[5116]: I1208 17:57:26.687711 5116 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8a97ed26-ba4b-45f4-a08c-32c1d714f61b" path="/var/lib/kubelet/pods/8a97ed26-ba4b-45f4-a08c-32c1d714f61b/volumes" Dec 08 17:57:35 crc kubenswrapper[5116]: I1208 17:57:35.888039 5116 kubelet.go:2537] "SyncLoop ADD" source="api" 
pods=["service-telemetry/service-telemetry-framework-index-3-build"] Dec 08 17:57:35 crc kubenswrapper[5116]: I1208 17:57:35.889449 5116 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="8a97ed26-ba4b-45f4-a08c-32c1d714f61b" containerName="git-clone" Dec 08 17:57:35 crc kubenswrapper[5116]: I1208 17:57:35.889469 5116 state_mem.go:107] "Deleted CPUSet assignment" podUID="8a97ed26-ba4b-45f4-a08c-32c1d714f61b" containerName="git-clone" Dec 08 17:57:35 crc kubenswrapper[5116]: I1208 17:57:35.889637 5116 memory_manager.go:356] "RemoveStaleState removing state" podUID="8a97ed26-ba4b-45f4-a08c-32c1d714f61b" containerName="git-clone" Dec 08 17:57:36 crc kubenswrapper[5116]: I1208 17:57:36.531189 5116 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/service-telemetry-framework-index-3-build"] Dec 08 17:57:36 crc kubenswrapper[5116]: I1208 17:57:36.531377 5116 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/service-telemetry-framework-index-3-build" Dec 08 17:57:36 crc kubenswrapper[5116]: I1208 17:57:36.533393 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"builder-dockercfg-49c42\"" Dec 08 17:57:36 crc kubenswrapper[5116]: I1208 17:57:36.533419 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"service-telemetry-framework-index-3-sys-config\"" Dec 08 17:57:36 crc kubenswrapper[5116]: I1208 17:57:36.534290 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"service-telemetry-framework-index-dockercfg\"" Dec 08 17:57:36 crc kubenswrapper[5116]: I1208 17:57:36.534425 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"service-telemetry-framework-index-3-global-ca\"" Dec 08 17:57:36 crc kubenswrapper[5116]: I1208 17:57:36.534893 5116 reflector.go:430] "Caches populated" 
type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"service-telemetry-framework-index-3-ca\"" Dec 08 17:57:36 crc kubenswrapper[5116]: I1208 17:57:36.603814 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-telemetry-framework-index-dockercfg-user-build-volume\" (UniqueName: \"kubernetes.io/secret/3afdb17c-2bf4-468f-aaab-28d01d59282c-service-telemetry-framework-index-dockercfg-user-build-volume\") pod \"service-telemetry-framework-index-3-build\" (UID: \"3afdb17c-2bf4-468f-aaab-28d01d59282c\") " pod="service-telemetry/service-telemetry-framework-index-3-build" Dec 08 17:57:36 crc kubenswrapper[5116]: I1208 17:57:36.603866 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/3afdb17c-2bf4-468f-aaab-28d01d59282c-node-pullsecrets\") pod \"service-telemetry-framework-index-3-build\" (UID: \"3afdb17c-2bf4-468f-aaab-28d01d59282c\") " pod="service-telemetry/service-telemetry-framework-index-3-build" Dec 08 17:57:36 crc kubenswrapper[5116]: I1208 17:57:36.603957 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-49c42-pull\" (UniqueName: \"kubernetes.io/secret/3afdb17c-2bf4-468f-aaab-28d01d59282c-builder-dockercfg-49c42-pull\") pod \"service-telemetry-framework-index-3-build\" (UID: \"3afdb17c-2bf4-468f-aaab-28d01d59282c\") " pod="service-telemetry/service-telemetry-framework-index-3-build" Dec 08 17:57:36 crc kubenswrapper[5116]: I1208 17:57:36.603996 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/3afdb17c-2bf4-468f-aaab-28d01d59282c-build-blob-cache\") pod \"service-telemetry-framework-index-3-build\" (UID: \"3afdb17c-2bf4-468f-aaab-28d01d59282c\") " pod="service-telemetry/service-telemetry-framework-index-3-build" 
Dec 08 17:57:36 crc kubenswrapper[5116]: I1208 17:57:36.604049 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/3afdb17c-2bf4-468f-aaab-28d01d59282c-container-storage-run\") pod \"service-telemetry-framework-index-3-build\" (UID: \"3afdb17c-2bf4-468f-aaab-28d01d59282c\") " pod="service-telemetry/service-telemetry-framework-index-3-build" Dec 08 17:57:36 crc kubenswrapper[5116]: I1208 17:57:36.604091 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/3afdb17c-2bf4-468f-aaab-28d01d59282c-buildworkdir\") pod \"service-telemetry-framework-index-3-build\" (UID: \"3afdb17c-2bf4-468f-aaab-28d01d59282c\") " pod="service-telemetry/service-telemetry-framework-index-3-build" Dec 08 17:57:36 crc kubenswrapper[5116]: I1208 17:57:36.604106 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/3afdb17c-2bf4-468f-aaab-28d01d59282c-container-storage-root\") pod \"service-telemetry-framework-index-3-build\" (UID: \"3afdb17c-2bf4-468f-aaab-28d01d59282c\") " pod="service-telemetry/service-telemetry-framework-index-3-build" Dec 08 17:57:36 crc kubenswrapper[5116]: I1208 17:57:36.604173 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/3afdb17c-2bf4-468f-aaab-28d01d59282c-build-proxy-ca-bundles\") pod \"service-telemetry-framework-index-3-build\" (UID: \"3afdb17c-2bf4-468f-aaab-28d01d59282c\") " pod="service-telemetry/service-telemetry-framework-index-3-build" Dec 08 17:57:36 crc kubenswrapper[5116]: I1208 17:57:36.604280 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"builder-dockercfg-49c42-push\" (UniqueName: \"kubernetes.io/secret/3afdb17c-2bf4-468f-aaab-28d01d59282c-builder-dockercfg-49c42-push\") pod \"service-telemetry-framework-index-3-build\" (UID: \"3afdb17c-2bf4-468f-aaab-28d01d59282c\") " pod="service-telemetry/service-telemetry-framework-index-3-build" Dec 08 17:57:36 crc kubenswrapper[5116]: I1208 17:57:36.604343 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/3afdb17c-2bf4-468f-aaab-28d01d59282c-build-system-configs\") pod \"service-telemetry-framework-index-3-build\" (UID: \"3afdb17c-2bf4-468f-aaab-28d01d59282c\") " pod="service-telemetry/service-telemetry-framework-index-3-build" Dec 08 17:57:36 crc kubenswrapper[5116]: I1208 17:57:36.604366 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/3afdb17c-2bf4-468f-aaab-28d01d59282c-build-ca-bundles\") pod \"service-telemetry-framework-index-3-build\" (UID: \"3afdb17c-2bf4-468f-aaab-28d01d59282c\") " pod="service-telemetry/service-telemetry-framework-index-3-build" Dec 08 17:57:36 crc kubenswrapper[5116]: I1208 17:57:36.604465 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wrdjw\" (UniqueName: \"kubernetes.io/projected/3afdb17c-2bf4-468f-aaab-28d01d59282c-kube-api-access-wrdjw\") pod \"service-telemetry-framework-index-3-build\" (UID: \"3afdb17c-2bf4-468f-aaab-28d01d59282c\") " pod="service-telemetry/service-telemetry-framework-index-3-build" Dec 08 17:57:36 crc kubenswrapper[5116]: I1208 17:57:36.604505 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/3afdb17c-2bf4-468f-aaab-28d01d59282c-buildcachedir\") pod \"service-telemetry-framework-index-3-build\" (UID: 
\"3afdb17c-2bf4-468f-aaab-28d01d59282c\") " pod="service-telemetry/service-telemetry-framework-index-3-build" Dec 08 17:57:36 crc kubenswrapper[5116]: I1208 17:57:36.705906 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"builder-dockercfg-49c42-push\" (UniqueName: \"kubernetes.io/secret/3afdb17c-2bf4-468f-aaab-28d01d59282c-builder-dockercfg-49c42-push\") pod \"service-telemetry-framework-index-3-build\" (UID: \"3afdb17c-2bf4-468f-aaab-28d01d59282c\") " pod="service-telemetry/service-telemetry-framework-index-3-build" Dec 08 17:57:36 crc kubenswrapper[5116]: I1208 17:57:36.705988 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/3afdb17c-2bf4-468f-aaab-28d01d59282c-build-system-configs\") pod \"service-telemetry-framework-index-3-build\" (UID: \"3afdb17c-2bf4-468f-aaab-28d01d59282c\") " pod="service-telemetry/service-telemetry-framework-index-3-build" Dec 08 17:57:36 crc kubenswrapper[5116]: I1208 17:57:36.706020 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/3afdb17c-2bf4-468f-aaab-28d01d59282c-build-ca-bundles\") pod \"service-telemetry-framework-index-3-build\" (UID: \"3afdb17c-2bf4-468f-aaab-28d01d59282c\") " pod="service-telemetry/service-telemetry-framework-index-3-build" Dec 08 17:57:36 crc kubenswrapper[5116]: I1208 17:57:36.706082 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-wrdjw\" (UniqueName: \"kubernetes.io/projected/3afdb17c-2bf4-468f-aaab-28d01d59282c-kube-api-access-wrdjw\") pod \"service-telemetry-framework-index-3-build\" (UID: \"3afdb17c-2bf4-468f-aaab-28d01d59282c\") " pod="service-telemetry/service-telemetry-framework-index-3-build" Dec 08 17:57:36 crc kubenswrapper[5116]: I1208 17:57:36.706117 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for 
volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/3afdb17c-2bf4-468f-aaab-28d01d59282c-buildcachedir\") pod \"service-telemetry-framework-index-3-build\" (UID: \"3afdb17c-2bf4-468f-aaab-28d01d59282c\") " pod="service-telemetry/service-telemetry-framework-index-3-build" Dec 08 17:57:36 crc kubenswrapper[5116]: I1208 17:57:36.706167 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"service-telemetry-framework-index-dockercfg-user-build-volume\" (UniqueName: \"kubernetes.io/secret/3afdb17c-2bf4-468f-aaab-28d01d59282c-service-telemetry-framework-index-dockercfg-user-build-volume\") pod \"service-telemetry-framework-index-3-build\" (UID: \"3afdb17c-2bf4-468f-aaab-28d01d59282c\") " pod="service-telemetry/service-telemetry-framework-index-3-build" Dec 08 17:57:36 crc kubenswrapper[5116]: I1208 17:57:36.706198 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/3afdb17c-2bf4-468f-aaab-28d01d59282c-node-pullsecrets\") pod \"service-telemetry-framework-index-3-build\" (UID: \"3afdb17c-2bf4-468f-aaab-28d01d59282c\") " pod="service-telemetry/service-telemetry-framework-index-3-build" Dec 08 17:57:36 crc kubenswrapper[5116]: I1208 17:57:36.706306 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"builder-dockercfg-49c42-pull\" (UniqueName: \"kubernetes.io/secret/3afdb17c-2bf4-468f-aaab-28d01d59282c-builder-dockercfg-49c42-pull\") pod \"service-telemetry-framework-index-3-build\" (UID: \"3afdb17c-2bf4-468f-aaab-28d01d59282c\") " pod="service-telemetry/service-telemetry-framework-index-3-build" Dec 08 17:57:36 crc kubenswrapper[5116]: I1208 17:57:36.706360 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/3afdb17c-2bf4-468f-aaab-28d01d59282c-build-blob-cache\") pod \"service-telemetry-framework-index-3-build\" (UID: 
\"3afdb17c-2bf4-468f-aaab-28d01d59282c\") " pod="service-telemetry/service-telemetry-framework-index-3-build" Dec 08 17:57:36 crc kubenswrapper[5116]: I1208 17:57:36.706438 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/3afdb17c-2bf4-468f-aaab-28d01d59282c-container-storage-run\") pod \"service-telemetry-framework-index-3-build\" (UID: \"3afdb17c-2bf4-468f-aaab-28d01d59282c\") " pod="service-telemetry/service-telemetry-framework-index-3-build" Dec 08 17:57:36 crc kubenswrapper[5116]: I1208 17:57:36.706490 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/3afdb17c-2bf4-468f-aaab-28d01d59282c-buildworkdir\") pod \"service-telemetry-framework-index-3-build\" (UID: \"3afdb17c-2bf4-468f-aaab-28d01d59282c\") " pod="service-telemetry/service-telemetry-framework-index-3-build" Dec 08 17:57:36 crc kubenswrapper[5116]: I1208 17:57:36.706514 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/3afdb17c-2bf4-468f-aaab-28d01d59282c-container-storage-root\") pod \"service-telemetry-framework-index-3-build\" (UID: \"3afdb17c-2bf4-468f-aaab-28d01d59282c\") " pod="service-telemetry/service-telemetry-framework-index-3-build" Dec 08 17:57:36 crc kubenswrapper[5116]: I1208 17:57:36.706545 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/3afdb17c-2bf4-468f-aaab-28d01d59282c-build-proxy-ca-bundles\") pod \"service-telemetry-framework-index-3-build\" (UID: \"3afdb17c-2bf4-468f-aaab-28d01d59282c\") " pod="service-telemetry/service-telemetry-framework-index-3-build" Dec 08 17:57:36 crc kubenswrapper[5116]: I1208 17:57:36.708419 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume 
\"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/3afdb17c-2bf4-468f-aaab-28d01d59282c-build-blob-cache\") pod \"service-telemetry-framework-index-3-build\" (UID: \"3afdb17c-2bf4-468f-aaab-28d01d59282c\") " pod="service-telemetry/service-telemetry-framework-index-3-build" Dec 08 17:57:36 crc kubenswrapper[5116]: I1208 17:57:36.708645 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/3afdb17c-2bf4-468f-aaab-28d01d59282c-node-pullsecrets\") pod \"service-telemetry-framework-index-3-build\" (UID: \"3afdb17c-2bf4-468f-aaab-28d01d59282c\") " pod="service-telemetry/service-telemetry-framework-index-3-build" Dec 08 17:57:36 crc kubenswrapper[5116]: I1208 17:57:36.709152 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/3afdb17c-2bf4-468f-aaab-28d01d59282c-build-ca-bundles\") pod \"service-telemetry-framework-index-3-build\" (UID: \"3afdb17c-2bf4-468f-aaab-28d01d59282c\") " pod="service-telemetry/service-telemetry-framework-index-3-build" Dec 08 17:57:36 crc kubenswrapper[5116]: I1208 17:57:36.709296 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/3afdb17c-2bf4-468f-aaab-28d01d59282c-buildcachedir\") pod \"service-telemetry-framework-index-3-build\" (UID: \"3afdb17c-2bf4-468f-aaab-28d01d59282c\") " pod="service-telemetry/service-telemetry-framework-index-3-build" Dec 08 17:57:36 crc kubenswrapper[5116]: I1208 17:57:36.709754 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/3afdb17c-2bf4-468f-aaab-28d01d59282c-container-storage-run\") pod \"service-telemetry-framework-index-3-build\" (UID: \"3afdb17c-2bf4-468f-aaab-28d01d59282c\") " pod="service-telemetry/service-telemetry-framework-index-3-build" Dec 08 17:57:36 crc 
kubenswrapper[5116]: I1208 17:57:36.710100 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/3afdb17c-2bf4-468f-aaab-28d01d59282c-buildworkdir\") pod \"service-telemetry-framework-index-3-build\" (UID: \"3afdb17c-2bf4-468f-aaab-28d01d59282c\") " pod="service-telemetry/service-telemetry-framework-index-3-build" Dec 08 17:57:36 crc kubenswrapper[5116]: I1208 17:57:36.710438 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/3afdb17c-2bf4-468f-aaab-28d01d59282c-build-proxy-ca-bundles\") pod \"service-telemetry-framework-index-3-build\" (UID: \"3afdb17c-2bf4-468f-aaab-28d01d59282c\") " pod="service-telemetry/service-telemetry-framework-index-3-build" Dec 08 17:57:36 crc kubenswrapper[5116]: I1208 17:57:36.710737 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/3afdb17c-2bf4-468f-aaab-28d01d59282c-container-storage-root\") pod \"service-telemetry-framework-index-3-build\" (UID: \"3afdb17c-2bf4-468f-aaab-28d01d59282c\") " pod="service-telemetry/service-telemetry-framework-index-3-build" Dec 08 17:57:36 crc kubenswrapper[5116]: I1208 17:57:36.711312 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/3afdb17c-2bf4-468f-aaab-28d01d59282c-build-system-configs\") pod \"service-telemetry-framework-index-3-build\" (UID: \"3afdb17c-2bf4-468f-aaab-28d01d59282c\") " pod="service-telemetry/service-telemetry-framework-index-3-build" Dec 08 17:57:36 crc kubenswrapper[5116]: I1208 17:57:36.715844 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-49c42-pull\" (UniqueName: \"kubernetes.io/secret/3afdb17c-2bf4-468f-aaab-28d01d59282c-builder-dockercfg-49c42-pull\") pod 
\"service-telemetry-framework-index-3-build\" (UID: \"3afdb17c-2bf4-468f-aaab-28d01d59282c\") " pod="service-telemetry/service-telemetry-framework-index-3-build" Dec 08 17:57:36 crc kubenswrapper[5116]: I1208 17:57:36.726846 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-49c42-push\" (UniqueName: \"kubernetes.io/secret/3afdb17c-2bf4-468f-aaab-28d01d59282c-builder-dockercfg-49c42-push\") pod \"service-telemetry-framework-index-3-build\" (UID: \"3afdb17c-2bf4-468f-aaab-28d01d59282c\") " pod="service-telemetry/service-telemetry-framework-index-3-build" Dec 08 17:57:36 crc kubenswrapper[5116]: I1208 17:57:36.726999 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"service-telemetry-framework-index-dockercfg-user-build-volume\" (UniqueName: \"kubernetes.io/secret/3afdb17c-2bf4-468f-aaab-28d01d59282c-service-telemetry-framework-index-dockercfg-user-build-volume\") pod \"service-telemetry-framework-index-3-build\" (UID: \"3afdb17c-2bf4-468f-aaab-28d01d59282c\") " pod="service-telemetry/service-telemetry-framework-index-3-build" Dec 08 17:57:36 crc kubenswrapper[5116]: I1208 17:57:36.729737 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-wrdjw\" (UniqueName: \"kubernetes.io/projected/3afdb17c-2bf4-468f-aaab-28d01d59282c-kube-api-access-wrdjw\") pod \"service-telemetry-framework-index-3-build\" (UID: \"3afdb17c-2bf4-468f-aaab-28d01d59282c\") " pod="service-telemetry/service-telemetry-framework-index-3-build" Dec 08 17:57:36 crc kubenswrapper[5116]: I1208 17:57:36.849845 5116 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/service-telemetry-framework-index-3-build" Dec 08 17:57:37 crc kubenswrapper[5116]: I1208 17:57:37.122521 5116 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/service-telemetry-framework-index-3-build"] Dec 08 17:57:37 crc kubenswrapper[5116]: I1208 17:57:37.592995 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-framework-index-3-build" event={"ID":"3afdb17c-2bf4-468f-aaab-28d01d59282c","Type":"ContainerStarted","Data":"90731ea065357068b5f0ffb0c68a1c9700dec4ec38d28751db3ac31fc96e9847"} Dec 08 17:57:37 crc kubenswrapper[5116]: I1208 17:57:37.593059 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-framework-index-3-build" event={"ID":"3afdb17c-2bf4-468f-aaab-28d01d59282c","Type":"ContainerStarted","Data":"a1064813edc0b17543b95449d46fe0259c992debec02f2981ace6b631b4ee217"} Dec 08 17:57:37 crc kubenswrapper[5116]: I1208 17:57:37.644128 5116 ???:1] "http: TLS handshake error from 192.168.126.11:45800: no serving certificate available for the kubelet" Dec 08 17:57:38 crc kubenswrapper[5116]: I1208 17:57:38.674996 5116 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["service-telemetry/service-telemetry-framework-index-3-build"] Dec 08 17:57:39 crc kubenswrapper[5116]: I1208 17:57:39.608559 5116 kuberuntime_container.go:858] "Killing container with a grace period" pod="service-telemetry/service-telemetry-framework-index-3-build" podUID="3afdb17c-2bf4-468f-aaab-28d01d59282c" containerName="git-clone" containerID="cri-o://90731ea065357068b5f0ffb0c68a1c9700dec4ec38d28751db3ac31fc96e9847" gracePeriod=30 Dec 08 17:57:40 crc kubenswrapper[5116]: I1208 17:57:40.013408 5116 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_service-telemetry-framework-index-3-build_3afdb17c-2bf4-468f-aaab-28d01d59282c/git-clone/0.log" Dec 08 17:57:40 crc kubenswrapper[5116]: I1208 17:57:40.013557 5116 
util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/service-telemetry-framework-index-3-build" Dec 08 17:57:40 crc kubenswrapper[5116]: I1208 17:57:40.054919 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/3afdb17c-2bf4-468f-aaab-28d01d59282c-node-pullsecrets\") pod \"3afdb17c-2bf4-468f-aaab-28d01d59282c\" (UID: \"3afdb17c-2bf4-468f-aaab-28d01d59282c\") " Dec 08 17:57:40 crc kubenswrapper[5116]: I1208 17:57:40.055052 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/3afdb17c-2bf4-468f-aaab-28d01d59282c-build-blob-cache\") pod \"3afdb17c-2bf4-468f-aaab-28d01d59282c\" (UID: \"3afdb17c-2bf4-468f-aaab-28d01d59282c\") " Dec 08 17:57:40 crc kubenswrapper[5116]: I1208 17:57:40.055106 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3afdb17c-2bf4-468f-aaab-28d01d59282c-node-pullsecrets" (OuterVolumeSpecName: "node-pullsecrets") pod "3afdb17c-2bf4-468f-aaab-28d01d59282c" (UID: "3afdb17c-2bf4-468f-aaab-28d01d59282c"). InnerVolumeSpecName "node-pullsecrets". 
PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Dec 08 17:57:40 crc kubenswrapper[5116]: I1208 17:57:40.055154 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-49c42-pull\" (UniqueName: \"kubernetes.io/secret/3afdb17c-2bf4-468f-aaab-28d01d59282c-builder-dockercfg-49c42-pull\") pod \"3afdb17c-2bf4-468f-aaab-28d01d59282c\" (UID: \"3afdb17c-2bf4-468f-aaab-28d01d59282c\") "
Dec 08 17:57:40 crc kubenswrapper[5116]: I1208 17:57:40.055201 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/3afdb17c-2bf4-468f-aaab-28d01d59282c-container-storage-run\") pod \"3afdb17c-2bf4-468f-aaab-28d01d59282c\" (UID: \"3afdb17c-2bf4-468f-aaab-28d01d59282c\") "
Dec 08 17:57:40 crc kubenswrapper[5116]: I1208 17:57:40.055219 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-49c42-push\" (UniqueName: \"kubernetes.io/secret/3afdb17c-2bf4-468f-aaab-28d01d59282c-builder-dockercfg-49c42-push\") pod \"3afdb17c-2bf4-468f-aaab-28d01d59282c\" (UID: \"3afdb17c-2bf4-468f-aaab-28d01d59282c\") "
Dec 08 17:57:40 crc kubenswrapper[5116]: I1208 17:57:40.055264 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/3afdb17c-2bf4-468f-aaab-28d01d59282c-container-storage-root\") pod \"3afdb17c-2bf4-468f-aaab-28d01d59282c\" (UID: \"3afdb17c-2bf4-468f-aaab-28d01d59282c\") "
Dec 08 17:57:40 crc kubenswrapper[5116]: I1208 17:57:40.055305 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/3afdb17c-2bf4-468f-aaab-28d01d59282c-build-ca-bundles\") pod \"3afdb17c-2bf4-468f-aaab-28d01d59282c\" (UID: \"3afdb17c-2bf4-468f-aaab-28d01d59282c\") "
Dec 08 17:57:40 crc kubenswrapper[5116]: I1208 17:57:40.055324 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wrdjw\" (UniqueName: \"kubernetes.io/projected/3afdb17c-2bf4-468f-aaab-28d01d59282c-kube-api-access-wrdjw\") pod \"3afdb17c-2bf4-468f-aaab-28d01d59282c\" (UID: \"3afdb17c-2bf4-468f-aaab-28d01d59282c\") "
Dec 08 17:57:40 crc kubenswrapper[5116]: I1208 17:57:40.055359 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"service-telemetry-framework-index-dockercfg-user-build-volume\" (UniqueName: \"kubernetes.io/secret/3afdb17c-2bf4-468f-aaab-28d01d59282c-service-telemetry-framework-index-dockercfg-user-build-volume\") pod \"3afdb17c-2bf4-468f-aaab-28d01d59282c\" (UID: \"3afdb17c-2bf4-468f-aaab-28d01d59282c\") "
Dec 08 17:57:40 crc kubenswrapper[5116]: I1208 17:57:40.055420 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/3afdb17c-2bf4-468f-aaab-28d01d59282c-build-proxy-ca-bundles\") pod \"3afdb17c-2bf4-468f-aaab-28d01d59282c\" (UID: \"3afdb17c-2bf4-468f-aaab-28d01d59282c\") "
Dec 08 17:57:40 crc kubenswrapper[5116]: I1208 17:57:40.055462 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/3afdb17c-2bf4-468f-aaab-28d01d59282c-buildcachedir\") pod \"3afdb17c-2bf4-468f-aaab-28d01d59282c\" (UID: \"3afdb17c-2bf4-468f-aaab-28d01d59282c\") "
Dec 08 17:57:40 crc kubenswrapper[5116]: I1208 17:57:40.055564 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/3afdb17c-2bf4-468f-aaab-28d01d59282c-build-system-configs\") pod \"3afdb17c-2bf4-468f-aaab-28d01d59282c\" (UID: \"3afdb17c-2bf4-468f-aaab-28d01d59282c\") "
Dec 08 17:57:40 crc kubenswrapper[5116]: I1208 17:57:40.055670 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/3afdb17c-2bf4-468f-aaab-28d01d59282c-buildworkdir\") pod \"3afdb17c-2bf4-468f-aaab-28d01d59282c\" (UID: \"3afdb17c-2bf4-468f-aaab-28d01d59282c\") "
Dec 08 17:57:40 crc kubenswrapper[5116]: I1208 17:57:40.055715 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3afdb17c-2bf4-468f-aaab-28d01d59282c-build-blob-cache" (OuterVolumeSpecName: "build-blob-cache") pod "3afdb17c-2bf4-468f-aaab-28d01d59282c" (UID: "3afdb17c-2bf4-468f-aaab-28d01d59282c"). InnerVolumeSpecName "build-blob-cache". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Dec 08 17:57:40 crc kubenswrapper[5116]: I1208 17:57:40.055730 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3afdb17c-2bf4-468f-aaab-28d01d59282c-buildcachedir" (OuterVolumeSpecName: "buildcachedir") pod "3afdb17c-2bf4-468f-aaab-28d01d59282c" (UID: "3afdb17c-2bf4-468f-aaab-28d01d59282c"). InnerVolumeSpecName "buildcachedir". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Dec 08 17:57:40 crc kubenswrapper[5116]: I1208 17:57:40.055978 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3afdb17c-2bf4-468f-aaab-28d01d59282c-container-storage-run" (OuterVolumeSpecName: "container-storage-run") pod "3afdb17c-2bf4-468f-aaab-28d01d59282c" (UID: "3afdb17c-2bf4-468f-aaab-28d01d59282c"). InnerVolumeSpecName "container-storage-run". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Dec 08 17:57:40 crc kubenswrapper[5116]: I1208 17:57:40.056053 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3afdb17c-2bf4-468f-aaab-28d01d59282c-container-storage-root" (OuterVolumeSpecName: "container-storage-root") pod "3afdb17c-2bf4-468f-aaab-28d01d59282c" (UID: "3afdb17c-2bf4-468f-aaab-28d01d59282c"). InnerVolumeSpecName "container-storage-root". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Dec 08 17:57:40 crc kubenswrapper[5116]: I1208 17:57:40.056706 5116 reconciler_common.go:299] "Volume detached for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/3afdb17c-2bf4-468f-aaab-28d01d59282c-container-storage-root\") on node \"crc\" DevicePath \"\""
Dec 08 17:57:40 crc kubenswrapper[5116]: I1208 17:57:40.056743 5116 reconciler_common.go:299] "Volume detached for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/3afdb17c-2bf4-468f-aaab-28d01d59282c-buildcachedir\") on node \"crc\" DevicePath \"\""
Dec 08 17:57:40 crc kubenswrapper[5116]: I1208 17:57:40.056761 5116 reconciler_common.go:299] "Volume detached for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/3afdb17c-2bf4-468f-aaab-28d01d59282c-node-pullsecrets\") on node \"crc\" DevicePath \"\""
Dec 08 17:57:40 crc kubenswrapper[5116]: I1208 17:57:40.056773 5116 reconciler_common.go:299] "Volume detached for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/3afdb17c-2bf4-468f-aaab-28d01d59282c-build-blob-cache\") on node \"crc\" DevicePath \"\""
Dec 08 17:57:40 crc kubenswrapper[5116]: I1208 17:57:40.056758 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3afdb17c-2bf4-468f-aaab-28d01d59282c-build-proxy-ca-bundles" (OuterVolumeSpecName: "build-proxy-ca-bundles") pod "3afdb17c-2bf4-468f-aaab-28d01d59282c" (UID: "3afdb17c-2bf4-468f-aaab-28d01d59282c"). InnerVolumeSpecName "build-proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Dec 08 17:57:40 crc kubenswrapper[5116]: I1208 17:57:40.056786 5116 reconciler_common.go:299] "Volume detached for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/3afdb17c-2bf4-468f-aaab-28d01d59282c-container-storage-run\") on node \"crc\" DevicePath \"\""
Dec 08 17:57:40 crc kubenswrapper[5116]: I1208 17:57:40.057220 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3afdb17c-2bf4-468f-aaab-28d01d59282c-build-system-configs" (OuterVolumeSpecName: "build-system-configs") pod "3afdb17c-2bf4-468f-aaab-28d01d59282c" (UID: "3afdb17c-2bf4-468f-aaab-28d01d59282c"). InnerVolumeSpecName "build-system-configs". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Dec 08 17:57:40 crc kubenswrapper[5116]: I1208 17:57:40.057408 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3afdb17c-2bf4-468f-aaab-28d01d59282c-buildworkdir" (OuterVolumeSpecName: "buildworkdir") pod "3afdb17c-2bf4-468f-aaab-28d01d59282c" (UID: "3afdb17c-2bf4-468f-aaab-28d01d59282c"). InnerVolumeSpecName "buildworkdir". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Dec 08 17:57:40 crc kubenswrapper[5116]: I1208 17:57:40.057536 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3afdb17c-2bf4-468f-aaab-28d01d59282c-build-ca-bundles" (OuterVolumeSpecName: "build-ca-bundles") pod "3afdb17c-2bf4-468f-aaab-28d01d59282c" (UID: "3afdb17c-2bf4-468f-aaab-28d01d59282c"). InnerVolumeSpecName "build-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Dec 08 17:57:40 crc kubenswrapper[5116]: I1208 17:57:40.062074 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3afdb17c-2bf4-468f-aaab-28d01d59282c-builder-dockercfg-49c42-push" (OuterVolumeSpecName: "builder-dockercfg-49c42-push") pod "3afdb17c-2bf4-468f-aaab-28d01d59282c" (UID: "3afdb17c-2bf4-468f-aaab-28d01d59282c"). InnerVolumeSpecName "builder-dockercfg-49c42-push". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Dec 08 17:57:40 crc kubenswrapper[5116]: I1208 17:57:40.062215 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3afdb17c-2bf4-468f-aaab-28d01d59282c-builder-dockercfg-49c42-pull" (OuterVolumeSpecName: "builder-dockercfg-49c42-pull") pod "3afdb17c-2bf4-468f-aaab-28d01d59282c" (UID: "3afdb17c-2bf4-468f-aaab-28d01d59282c"). InnerVolumeSpecName "builder-dockercfg-49c42-pull". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Dec 08 17:57:40 crc kubenswrapper[5116]: I1208 17:57:40.063517 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3afdb17c-2bf4-468f-aaab-28d01d59282c-kube-api-access-wrdjw" (OuterVolumeSpecName: "kube-api-access-wrdjw") pod "3afdb17c-2bf4-468f-aaab-28d01d59282c" (UID: "3afdb17c-2bf4-468f-aaab-28d01d59282c"). InnerVolumeSpecName "kube-api-access-wrdjw". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Dec 08 17:57:40 crc kubenswrapper[5116]: I1208 17:57:40.064525 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3afdb17c-2bf4-468f-aaab-28d01d59282c-service-telemetry-framework-index-dockercfg-user-build-volume" (OuterVolumeSpecName: "service-telemetry-framework-index-dockercfg-user-build-volume") pod "3afdb17c-2bf4-468f-aaab-28d01d59282c" (UID: "3afdb17c-2bf4-468f-aaab-28d01d59282c"). InnerVolumeSpecName "service-telemetry-framework-index-dockercfg-user-build-volume". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Dec 08 17:57:40 crc kubenswrapper[5116]: I1208 17:57:40.157962 5116 reconciler_common.go:299] "Volume detached for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/3afdb17c-2bf4-468f-aaab-28d01d59282c-build-system-configs\") on node \"crc\" DevicePath \"\""
Dec 08 17:57:40 crc kubenswrapper[5116]: I1208 17:57:40.158261 5116 reconciler_common.go:299] "Volume detached for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/3afdb17c-2bf4-468f-aaab-28d01d59282c-buildworkdir\") on node \"crc\" DevicePath \"\""
Dec 08 17:57:40 crc kubenswrapper[5116]: I1208 17:57:40.158272 5116 reconciler_common.go:299] "Volume detached for volume \"builder-dockercfg-49c42-pull\" (UniqueName: \"kubernetes.io/secret/3afdb17c-2bf4-468f-aaab-28d01d59282c-builder-dockercfg-49c42-pull\") on node \"crc\" DevicePath \"\""
Dec 08 17:57:40 crc kubenswrapper[5116]: I1208 17:57:40.158285 5116 reconciler_common.go:299] "Volume detached for volume \"builder-dockercfg-49c42-push\" (UniqueName: \"kubernetes.io/secret/3afdb17c-2bf4-468f-aaab-28d01d59282c-builder-dockercfg-49c42-push\") on node \"crc\" DevicePath \"\""
Dec 08 17:57:40 crc kubenswrapper[5116]: I1208 17:57:40.158294 5116 reconciler_common.go:299] "Volume detached for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/3afdb17c-2bf4-468f-aaab-28d01d59282c-build-ca-bundles\") on node \"crc\" DevicePath \"\""
Dec 08 17:57:40 crc kubenswrapper[5116]: I1208 17:57:40.158302 5116 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-wrdjw\" (UniqueName: \"kubernetes.io/projected/3afdb17c-2bf4-468f-aaab-28d01d59282c-kube-api-access-wrdjw\") on node \"crc\" DevicePath \"\""
Dec 08 17:57:40 crc kubenswrapper[5116]: I1208 17:57:40.158311 5116 reconciler_common.go:299] "Volume detached for volume \"service-telemetry-framework-index-dockercfg-user-build-volume\" (UniqueName: \"kubernetes.io/secret/3afdb17c-2bf4-468f-aaab-28d01d59282c-service-telemetry-framework-index-dockercfg-user-build-volume\") on node \"crc\" DevicePath \"\""
Dec 08 17:57:40 crc kubenswrapper[5116]: I1208 17:57:40.158322 5116 reconciler_common.go:299] "Volume detached for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/3afdb17c-2bf4-468f-aaab-28d01d59282c-build-proxy-ca-bundles\") on node \"crc\" DevicePath \"\""
Dec 08 17:57:40 crc kubenswrapper[5116]: I1208 17:57:40.617309 5116 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_service-telemetry-framework-index-3-build_3afdb17c-2bf4-468f-aaab-28d01d59282c/git-clone/0.log"
Dec 08 17:57:40 crc kubenswrapper[5116]: I1208 17:57:40.617369 5116 generic.go:358] "Generic (PLEG): container finished" podID="3afdb17c-2bf4-468f-aaab-28d01d59282c" containerID="90731ea065357068b5f0ffb0c68a1c9700dec4ec38d28751db3ac31fc96e9847" exitCode=1
Dec 08 17:57:40 crc kubenswrapper[5116]: I1208 17:57:40.617437 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-framework-index-3-build" event={"ID":"3afdb17c-2bf4-468f-aaab-28d01d59282c","Type":"ContainerDied","Data":"90731ea065357068b5f0ffb0c68a1c9700dec4ec38d28751db3ac31fc96e9847"}
Dec 08 17:57:40 crc kubenswrapper[5116]: I1208 17:57:40.617475 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-framework-index-3-build" event={"ID":"3afdb17c-2bf4-468f-aaab-28d01d59282c","Type":"ContainerDied","Data":"a1064813edc0b17543b95449d46fe0259c992debec02f2981ace6b631b4ee217"}
Dec 08 17:57:40 crc kubenswrapper[5116]: I1208 17:57:40.617502 5116 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/service-telemetry-framework-index-3-build"
Dec 08 17:57:40 crc kubenswrapper[5116]: I1208 17:57:40.617519 5116 scope.go:117] "RemoveContainer" containerID="90731ea065357068b5f0ffb0c68a1c9700dec4ec38d28751db3ac31fc96e9847"
Dec 08 17:57:40 crc kubenswrapper[5116]: I1208 17:57:40.657104 5116 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["service-telemetry/service-telemetry-framework-index-3-build"]
Dec 08 17:57:40 crc kubenswrapper[5116]: I1208 17:57:40.658605 5116 scope.go:117] "RemoveContainer" containerID="90731ea065357068b5f0ffb0c68a1c9700dec4ec38d28751db3ac31fc96e9847"
Dec 08 17:57:40 crc kubenswrapper[5116]: E1208 17:57:40.659029 5116 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"90731ea065357068b5f0ffb0c68a1c9700dec4ec38d28751db3ac31fc96e9847\": container with ID starting with 90731ea065357068b5f0ffb0c68a1c9700dec4ec38d28751db3ac31fc96e9847 not found: ID does not exist" containerID="90731ea065357068b5f0ffb0c68a1c9700dec4ec38d28751db3ac31fc96e9847"
Dec 08 17:57:40 crc kubenswrapper[5116]: I1208 17:57:40.659068 5116 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"90731ea065357068b5f0ffb0c68a1c9700dec4ec38d28751db3ac31fc96e9847"} err="failed to get container status \"90731ea065357068b5f0ffb0c68a1c9700dec4ec38d28751db3ac31fc96e9847\": rpc error: code = NotFound desc = could not find container \"90731ea065357068b5f0ffb0c68a1c9700dec4ec38d28751db3ac31fc96e9847\": container with ID starting with 90731ea065357068b5f0ffb0c68a1c9700dec4ec38d28751db3ac31fc96e9847 not found: ID does not exist"
Dec 08 17:57:40 crc kubenswrapper[5116]: I1208 17:57:40.660717 5116 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["service-telemetry/service-telemetry-framework-index-3-build"]
Dec 08 17:57:40 crc kubenswrapper[5116]: I1208 17:57:40.689938 5116 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3afdb17c-2bf4-468f-aaab-28d01d59282c" path="/var/lib/kubelet/pods/3afdb17c-2bf4-468f-aaab-28d01d59282c/volumes"
Dec 08 17:57:50 crc kubenswrapper[5116]: I1208 17:57:50.128028 5116 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/service-telemetry-framework-index-4-build"]
Dec 08 17:57:50 crc kubenswrapper[5116]: I1208 17:57:50.129175 5116 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="3afdb17c-2bf4-468f-aaab-28d01d59282c" containerName="git-clone"
Dec 08 17:57:50 crc kubenswrapper[5116]: I1208 17:57:50.129188 5116 state_mem.go:107] "Deleted CPUSet assignment" podUID="3afdb17c-2bf4-468f-aaab-28d01d59282c" containerName="git-clone"
Dec 08 17:57:50 crc kubenswrapper[5116]: I1208 17:57:50.129311 5116 memory_manager.go:356] "RemoveStaleState removing state" podUID="3afdb17c-2bf4-468f-aaab-28d01d59282c" containerName="git-clone"
Dec 08 17:57:50 crc kubenswrapper[5116]: I1208 17:57:50.144902 5116 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/service-telemetry-framework-index-4-build"
Dec 08 17:57:50 crc kubenswrapper[5116]: I1208 17:57:50.148391 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"builder-dockercfg-49c42\""
Dec 08 17:57:50 crc kubenswrapper[5116]: I1208 17:57:50.148456 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"service-telemetry-framework-index-4-global-ca\""
Dec 08 17:57:50 crc kubenswrapper[5116]: I1208 17:57:50.148755 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"service-telemetry-framework-index-dockercfg\""
Dec 08 17:57:50 crc kubenswrapper[5116]: I1208 17:57:50.148771 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"service-telemetry-framework-index-4-sys-config\""
Dec 08 17:57:50 crc kubenswrapper[5116]: I1208 17:57:50.148916 5116 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/service-telemetry-framework-index-4-build"]
Dec 08 17:57:50 crc kubenswrapper[5116]: I1208 17:57:50.151155 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"service-telemetry-framework-index-4-ca\""
Dec 08 17:57:50 crc kubenswrapper[5116]: I1208 17:57:50.242835 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-49c42-pull\" (UniqueName: \"kubernetes.io/secret/39860aea-277e-475d-b67b-a8112548f4e6-builder-dockercfg-49c42-pull\") pod \"service-telemetry-framework-index-4-build\" (UID: \"39860aea-277e-475d-b67b-a8112548f4e6\") " pod="service-telemetry/service-telemetry-framework-index-4-build"
Dec 08 17:57:50 crc kubenswrapper[5116]: I1208 17:57:50.242911 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/39860aea-277e-475d-b67b-a8112548f4e6-build-system-configs\") pod \"service-telemetry-framework-index-4-build\" (UID: \"39860aea-277e-475d-b67b-a8112548f4e6\") " pod="service-telemetry/service-telemetry-framework-index-4-build"
Dec 08 17:57:50 crc kubenswrapper[5116]: I1208 17:57:50.242954 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/39860aea-277e-475d-b67b-a8112548f4e6-buildworkdir\") pod \"service-telemetry-framework-index-4-build\" (UID: \"39860aea-277e-475d-b67b-a8112548f4e6\") " pod="service-telemetry/service-telemetry-framework-index-4-build"
Dec 08 17:57:50 crc kubenswrapper[5116]: I1208 17:57:50.242985 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9942x\" (UniqueName: \"kubernetes.io/projected/39860aea-277e-475d-b67b-a8112548f4e6-kube-api-access-9942x\") pod \"service-telemetry-framework-index-4-build\" (UID: \"39860aea-277e-475d-b67b-a8112548f4e6\") " pod="service-telemetry/service-telemetry-framework-index-4-build"
Dec 08 17:57:50 crc kubenswrapper[5116]: I1208 17:57:50.243012 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/39860aea-277e-475d-b67b-a8112548f4e6-build-proxy-ca-bundles\") pod \"service-telemetry-framework-index-4-build\" (UID: \"39860aea-277e-475d-b67b-a8112548f4e6\") " pod="service-telemetry/service-telemetry-framework-index-4-build"
Dec 08 17:57:50 crc kubenswrapper[5116]: I1208 17:57:50.243030 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/39860aea-277e-475d-b67b-a8112548f4e6-container-storage-root\") pod \"service-telemetry-framework-index-4-build\" (UID: \"39860aea-277e-475d-b67b-a8112548f4e6\") " pod="service-telemetry/service-telemetry-framework-index-4-build"
Dec 08 17:57:50 crc kubenswrapper[5116]: I1208 17:57:50.243048 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/39860aea-277e-475d-b67b-a8112548f4e6-build-blob-cache\") pod \"service-telemetry-framework-index-4-build\" (UID: \"39860aea-277e-475d-b67b-a8112548f4e6\") " pod="service-telemetry/service-telemetry-framework-index-4-build"
Dec 08 17:57:50 crc kubenswrapper[5116]: I1208 17:57:50.243064 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/39860aea-277e-475d-b67b-a8112548f4e6-build-ca-bundles\") pod \"service-telemetry-framework-index-4-build\" (UID: \"39860aea-277e-475d-b67b-a8112548f4e6\") " pod="service-telemetry/service-telemetry-framework-index-4-build"
Dec 08 17:57:50 crc kubenswrapper[5116]: I1208 17:57:50.243093 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-49c42-push\" (UniqueName: \"kubernetes.io/secret/39860aea-277e-475d-b67b-a8112548f4e6-builder-dockercfg-49c42-push\") pod \"service-telemetry-framework-index-4-build\" (UID: \"39860aea-277e-475d-b67b-a8112548f4e6\") " pod="service-telemetry/service-telemetry-framework-index-4-build"
Dec 08 17:57:50 crc kubenswrapper[5116]: I1208 17:57:50.243137 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/39860aea-277e-475d-b67b-a8112548f4e6-node-pullsecrets\") pod \"service-telemetry-framework-index-4-build\" (UID: \"39860aea-277e-475d-b67b-a8112548f4e6\") " pod="service-telemetry/service-telemetry-framework-index-4-build"
Dec 08 17:57:50 crc kubenswrapper[5116]: I1208 17:57:50.243173 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-telemetry-framework-index-dockercfg-user-build-volume\" (UniqueName: \"kubernetes.io/secret/39860aea-277e-475d-b67b-a8112548f4e6-service-telemetry-framework-index-dockercfg-user-build-volume\") pod \"service-telemetry-framework-index-4-build\" (UID: \"39860aea-277e-475d-b67b-a8112548f4e6\") " pod="service-telemetry/service-telemetry-framework-index-4-build"
Dec 08 17:57:50 crc kubenswrapper[5116]: I1208 17:57:50.243201 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/39860aea-277e-475d-b67b-a8112548f4e6-container-storage-run\") pod \"service-telemetry-framework-index-4-build\" (UID: \"39860aea-277e-475d-b67b-a8112548f4e6\") " pod="service-telemetry/service-telemetry-framework-index-4-build"
Dec 08 17:57:50 crc kubenswrapper[5116]: I1208 17:57:50.243262 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/39860aea-277e-475d-b67b-a8112548f4e6-buildcachedir\") pod \"service-telemetry-framework-index-4-build\" (UID: \"39860aea-277e-475d-b67b-a8112548f4e6\") " pod="service-telemetry/service-telemetry-framework-index-4-build"
Dec 08 17:57:50 crc kubenswrapper[5116]: I1208 17:57:50.344831 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/39860aea-277e-475d-b67b-a8112548f4e6-buildcachedir\") pod \"service-telemetry-framework-index-4-build\" (UID: \"39860aea-277e-475d-b67b-a8112548f4e6\") " pod="service-telemetry/service-telemetry-framework-index-4-build"
Dec 08 17:57:50 crc kubenswrapper[5116]: I1208 17:57:50.344904 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"builder-dockercfg-49c42-pull\" (UniqueName: \"kubernetes.io/secret/39860aea-277e-475d-b67b-a8112548f4e6-builder-dockercfg-49c42-pull\") pod \"service-telemetry-framework-index-4-build\" (UID: \"39860aea-277e-475d-b67b-a8112548f4e6\") " pod="service-telemetry/service-telemetry-framework-index-4-build"
Dec 08 17:57:50 crc kubenswrapper[5116]: I1208 17:57:50.344941 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/39860aea-277e-475d-b67b-a8112548f4e6-build-system-configs\") pod \"service-telemetry-framework-index-4-build\" (UID: \"39860aea-277e-475d-b67b-a8112548f4e6\") " pod="service-telemetry/service-telemetry-framework-index-4-build"
Dec 08 17:57:50 crc kubenswrapper[5116]: I1208 17:57:50.344981 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/39860aea-277e-475d-b67b-a8112548f4e6-buildworkdir\") pod \"service-telemetry-framework-index-4-build\" (UID: \"39860aea-277e-475d-b67b-a8112548f4e6\") " pod="service-telemetry/service-telemetry-framework-index-4-build"
Dec 08 17:57:50 crc kubenswrapper[5116]: I1208 17:57:50.345011 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-9942x\" (UniqueName: \"kubernetes.io/projected/39860aea-277e-475d-b67b-a8112548f4e6-kube-api-access-9942x\") pod \"service-telemetry-framework-index-4-build\" (UID: \"39860aea-277e-475d-b67b-a8112548f4e6\") " pod="service-telemetry/service-telemetry-framework-index-4-build"
Dec 08 17:57:50 crc kubenswrapper[5116]: I1208 17:57:50.345041 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/39860aea-277e-475d-b67b-a8112548f4e6-build-proxy-ca-bundles\") pod \"service-telemetry-framework-index-4-build\" (UID: \"39860aea-277e-475d-b67b-a8112548f4e6\") " pod="service-telemetry/service-telemetry-framework-index-4-build"
Dec 08 17:57:50 crc kubenswrapper[5116]: I1208 17:57:50.345069 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/39860aea-277e-475d-b67b-a8112548f4e6-container-storage-root\") pod \"service-telemetry-framework-index-4-build\" (UID: \"39860aea-277e-475d-b67b-a8112548f4e6\") " pod="service-telemetry/service-telemetry-framework-index-4-build"
Dec 08 17:57:50 crc kubenswrapper[5116]: I1208 17:57:50.345123 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/39860aea-277e-475d-b67b-a8112548f4e6-build-blob-cache\") pod \"service-telemetry-framework-index-4-build\" (UID: \"39860aea-277e-475d-b67b-a8112548f4e6\") " pod="service-telemetry/service-telemetry-framework-index-4-build"
Dec 08 17:57:50 crc kubenswrapper[5116]: I1208 17:57:50.345157 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/39860aea-277e-475d-b67b-a8112548f4e6-build-ca-bundles\") pod \"service-telemetry-framework-index-4-build\" (UID: \"39860aea-277e-475d-b67b-a8112548f4e6\") " pod="service-telemetry/service-telemetry-framework-index-4-build"
Dec 08 17:57:50 crc kubenswrapper[5116]: I1208 17:57:50.345192 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"builder-dockercfg-49c42-push\" (UniqueName: \"kubernetes.io/secret/39860aea-277e-475d-b67b-a8112548f4e6-builder-dockercfg-49c42-push\") pod \"service-telemetry-framework-index-4-build\" (UID: \"39860aea-277e-475d-b67b-a8112548f4e6\") " pod="service-telemetry/service-telemetry-framework-index-4-build"
Dec 08 17:57:50 crc kubenswrapper[5116]: I1208 17:57:50.345237 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/39860aea-277e-475d-b67b-a8112548f4e6-node-pullsecrets\") pod \"service-telemetry-framework-index-4-build\" (UID: \"39860aea-277e-475d-b67b-a8112548f4e6\") " pod="service-telemetry/service-telemetry-framework-index-4-build"
Dec 08 17:57:50 crc kubenswrapper[5116]: I1208 17:57:50.345291 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"service-telemetry-framework-index-dockercfg-user-build-volume\" (UniqueName: \"kubernetes.io/secret/39860aea-277e-475d-b67b-a8112548f4e6-service-telemetry-framework-index-dockercfg-user-build-volume\") pod \"service-telemetry-framework-index-4-build\" (UID: \"39860aea-277e-475d-b67b-a8112548f4e6\") " pod="service-telemetry/service-telemetry-framework-index-4-build"
Dec 08 17:57:50 crc kubenswrapper[5116]: I1208 17:57:50.345318 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/39860aea-277e-475d-b67b-a8112548f4e6-container-storage-run\") pod \"service-telemetry-framework-index-4-build\" (UID: \"39860aea-277e-475d-b67b-a8112548f4e6\") " pod="service-telemetry/service-telemetry-framework-index-4-build"
Dec 08 17:57:50 crc kubenswrapper[5116]: I1208 17:57:50.345600 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/39860aea-277e-475d-b67b-a8112548f4e6-buildcachedir\") pod \"service-telemetry-framework-index-4-build\" (UID: \"39860aea-277e-475d-b67b-a8112548f4e6\") " pod="service-telemetry/service-telemetry-framework-index-4-build"
Dec 08 17:57:50 crc kubenswrapper[5116]: I1208 17:57:50.345761 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/39860aea-277e-475d-b67b-a8112548f4e6-node-pullsecrets\") pod \"service-telemetry-framework-index-4-build\" (UID: \"39860aea-277e-475d-b67b-a8112548f4e6\") " pod="service-telemetry/service-telemetry-framework-index-4-build"
Dec 08 17:57:50 crc kubenswrapper[5116]: I1208 17:57:50.345883 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/39860aea-277e-475d-b67b-a8112548f4e6-container-storage-run\") pod \"service-telemetry-framework-index-4-build\" (UID: \"39860aea-277e-475d-b67b-a8112548f4e6\") " pod="service-telemetry/service-telemetry-framework-index-4-build"
Dec 08 17:57:50 crc kubenswrapper[5116]: I1208 17:57:50.346323 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/39860aea-277e-475d-b67b-a8112548f4e6-container-storage-root\") pod \"service-telemetry-framework-index-4-build\" (UID: \"39860aea-277e-475d-b67b-a8112548f4e6\") " pod="service-telemetry/service-telemetry-framework-index-4-build"
Dec 08 17:57:50 crc kubenswrapper[5116]: I1208 17:57:50.346542 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/39860aea-277e-475d-b67b-a8112548f4e6-build-proxy-ca-bundles\") pod \"service-telemetry-framework-index-4-build\" (UID: \"39860aea-277e-475d-b67b-a8112548f4e6\") " pod="service-telemetry/service-telemetry-framework-index-4-build"
Dec 08 17:57:50 crc kubenswrapper[5116]: I1208 17:57:50.346621 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/39860aea-277e-475d-b67b-a8112548f4e6-build-blob-cache\") pod \"service-telemetry-framework-index-4-build\" (UID: \"39860aea-277e-475d-b67b-a8112548f4e6\") " pod="service-telemetry/service-telemetry-framework-index-4-build"
Dec 08 17:57:50 crc kubenswrapper[5116]: I1208 17:57:50.346795 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/39860aea-277e-475d-b67b-a8112548f4e6-build-ca-bundles\") pod \"service-telemetry-framework-index-4-build\" (UID: \"39860aea-277e-475d-b67b-a8112548f4e6\") " pod="service-telemetry/service-telemetry-framework-index-4-build"
Dec 08 17:57:50 crc kubenswrapper[5116]: I1208 17:57:50.347936 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/39860aea-277e-475d-b67b-a8112548f4e6-buildworkdir\") pod \"service-telemetry-framework-index-4-build\" (UID: \"39860aea-277e-475d-b67b-a8112548f4e6\") " pod="service-telemetry/service-telemetry-framework-index-4-build"
Dec 08 17:57:50 crc kubenswrapper[5116]: I1208 17:57:50.348807 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/39860aea-277e-475d-b67b-a8112548f4e6-build-system-configs\") pod \"service-telemetry-framework-index-4-build\" (UID: \"39860aea-277e-475d-b67b-a8112548f4e6\") " pod="service-telemetry/service-telemetry-framework-index-4-build"
Dec 08 17:57:50 crc kubenswrapper[5116]: I1208 17:57:50.359232 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-49c42-pull\" (UniqueName: \"kubernetes.io/secret/39860aea-277e-475d-b67b-a8112548f4e6-builder-dockercfg-49c42-pull\") pod \"service-telemetry-framework-index-4-build\" (UID: \"39860aea-277e-475d-b67b-a8112548f4e6\") " pod="service-telemetry/service-telemetry-framework-index-4-build"
Dec 08 17:57:50 crc kubenswrapper[5116]: I1208 17:57:50.359232 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-49c42-push\" (UniqueName: \"kubernetes.io/secret/39860aea-277e-475d-b67b-a8112548f4e6-builder-dockercfg-49c42-push\") pod \"service-telemetry-framework-index-4-build\" (UID: \"39860aea-277e-475d-b67b-a8112548f4e6\") " pod="service-telemetry/service-telemetry-framework-index-4-build"
Dec 08 17:57:50 crc kubenswrapper[5116]: I1208 17:57:50.359233 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"service-telemetry-framework-index-dockercfg-user-build-volume\" (UniqueName: \"kubernetes.io/secret/39860aea-277e-475d-b67b-a8112548f4e6-service-telemetry-framework-index-dockercfg-user-build-volume\") pod \"service-telemetry-framework-index-4-build\" (UID: \"39860aea-277e-475d-b67b-a8112548f4e6\") " pod="service-telemetry/service-telemetry-framework-index-4-build"
Dec 08 17:57:50 crc kubenswrapper[5116]: I1208 17:57:50.364853 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-9942x\" (UniqueName: \"kubernetes.io/projected/39860aea-277e-475d-b67b-a8112548f4e6-kube-api-access-9942x\") pod \"service-telemetry-framework-index-4-build\" (UID: \"39860aea-277e-475d-b67b-a8112548f4e6\") " pod="service-telemetry/service-telemetry-framework-index-4-build"
Dec 08 17:57:50 crc kubenswrapper[5116]: I1208 17:57:50.463231 5116 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/service-telemetry-framework-index-4-build"
Dec 08 17:57:50 crc kubenswrapper[5116]: I1208 17:57:50.887038 5116 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/service-telemetry-framework-index-4-build"]
Dec 08 17:57:51 crc kubenswrapper[5116]: I1208 17:57:51.691067 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-framework-index-4-build" event={"ID":"39860aea-277e-475d-b67b-a8112548f4e6","Type":"ContainerStarted","Data":"362ce184dbaff6f78ea29257784daee395c74a5e87427f1635613b31d6f73729"}
Dec 08 17:57:51 crc kubenswrapper[5116]: I1208 17:57:51.691426 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-framework-index-4-build" event={"ID":"39860aea-277e-475d-b67b-a8112548f4e6","Type":"ContainerStarted","Data":"0829ae85b5e9083f92a99468de3f25c0fcffb5b2aa87537c2bcfd715763d6b09"}
Dec 08 17:57:51 crc kubenswrapper[5116]: I1208 17:57:51.744821 5116 ???:1] "http: TLS handshake error from 192.168.126.11:59658: no serving certificate available for the kubelet"
Dec 08 17:57:52 crc kubenswrapper[5116]: I1208 17:57:52.775591 5116 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["service-telemetry/service-telemetry-framework-index-4-build"]
Dec 08 17:57:53 crc kubenswrapper[5116]: I1208 17:57:53.704336 5116 kuberuntime_container.go:858] "Killing container with a grace period" pod="service-telemetry/service-telemetry-framework-index-4-build" podUID="39860aea-277e-475d-b67b-a8112548f4e6" containerName="git-clone" containerID="cri-o://362ce184dbaff6f78ea29257784daee395c74a5e87427f1635613b31d6f73729" gracePeriod=30
Dec 08 17:57:54 crc kubenswrapper[5116]: I1208 17:57:54.145843 5116 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_service-telemetry-framework-index-4-build_39860aea-277e-475d-b67b-a8112548f4e6/git-clone/0.log"
Dec 08 17:57:54 crc kubenswrapper[5116]: I1208 17:57:54.146127 5116 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/service-telemetry-framework-index-4-build"
Dec 08 17:57:54 crc kubenswrapper[5116]: I1208 17:57:54.209876 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/39860aea-277e-475d-b67b-a8112548f4e6-build-system-configs\") pod \"39860aea-277e-475d-b67b-a8112548f4e6\" (UID: \"39860aea-277e-475d-b67b-a8112548f4e6\") "
Dec 08 17:57:54 crc kubenswrapper[5116]: I1208 17:57:54.209975 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9942x\" (UniqueName: \"kubernetes.io/projected/39860aea-277e-475d-b67b-a8112548f4e6-kube-api-access-9942x\") pod \"39860aea-277e-475d-b67b-a8112548f4e6\" (UID: \"39860aea-277e-475d-b67b-a8112548f4e6\") "
Dec 08 17:57:54 crc kubenswrapper[5116]: I1208 17:57:54.210016 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/39860aea-277e-475d-b67b-a8112548f4e6-buildcachedir\") pod \"39860aea-277e-475d-b67b-a8112548f4e6\" (UID: \"39860aea-277e-475d-b67b-a8112548f4e6\") "
Dec 08 17:57:54 crc kubenswrapper[5116]: I1208 17:57:54.210109 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/39860aea-277e-475d-b67b-a8112548f4e6-buildworkdir\") pod \"39860aea-277e-475d-b67b-a8112548f4e6\" (UID: \"39860aea-277e-475d-b67b-a8112548f4e6\") "
Dec 08 17:57:54 crc kubenswrapper[5116]: I1208 17:57:54.210147 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/39860aea-277e-475d-b67b-a8112548f4e6-build-blob-cache\") pod \"39860aea-277e-475d-b67b-a8112548f4e6\" (UID: \"39860aea-277e-475d-b67b-a8112548f4e6\") "
Dec 08 17:57:54 crc kubenswrapper[5116]: I1208 17:57:54.210177 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/39860aea-277e-475d-b67b-a8112548f4e6-container-storage-run\") pod \"39860aea-277e-475d-b67b-a8112548f4e6\" (UID: \"39860aea-277e-475d-b67b-a8112548f4e6\") "
Dec 08 17:57:54 crc kubenswrapper[5116]: I1208 17:57:54.210229 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/39860aea-277e-475d-b67b-a8112548f4e6-build-ca-bundles\") pod \"39860aea-277e-475d-b67b-a8112548f4e6\" (UID: \"39860aea-277e-475d-b67b-a8112548f4e6\") "
Dec 08 17:57:54 crc kubenswrapper[5116]: I1208 17:57:54.210305 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/39860aea-277e-475d-b67b-a8112548f4e6-node-pullsecrets\") pod \"39860aea-277e-475d-b67b-a8112548f4e6\" (UID: \"39860aea-277e-475d-b67b-a8112548f4e6\") "
Dec 08 17:57:54 crc
kubenswrapper[5116]: I1208 17:57:54.210341 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-49c42-push\" (UniqueName: \"kubernetes.io/secret/39860aea-277e-475d-b67b-a8112548f4e6-builder-dockercfg-49c42-push\") pod \"39860aea-277e-475d-b67b-a8112548f4e6\" (UID: \"39860aea-277e-475d-b67b-a8112548f4e6\") " Dec 08 17:57:54 crc kubenswrapper[5116]: I1208 17:57:54.210386 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"service-telemetry-framework-index-dockercfg-user-build-volume\" (UniqueName: \"kubernetes.io/secret/39860aea-277e-475d-b67b-a8112548f4e6-service-telemetry-framework-index-dockercfg-user-build-volume\") pod \"39860aea-277e-475d-b67b-a8112548f4e6\" (UID: \"39860aea-277e-475d-b67b-a8112548f4e6\") " Dec 08 17:57:54 crc kubenswrapper[5116]: I1208 17:57:54.210411 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-49c42-pull\" (UniqueName: \"kubernetes.io/secret/39860aea-277e-475d-b67b-a8112548f4e6-builder-dockercfg-49c42-pull\") pod \"39860aea-277e-475d-b67b-a8112548f4e6\" (UID: \"39860aea-277e-475d-b67b-a8112548f4e6\") " Dec 08 17:57:54 crc kubenswrapper[5116]: I1208 17:57:54.210451 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/39860aea-277e-475d-b67b-a8112548f4e6-build-proxy-ca-bundles\") pod \"39860aea-277e-475d-b67b-a8112548f4e6\" (UID: \"39860aea-277e-475d-b67b-a8112548f4e6\") " Dec 08 17:57:54 crc kubenswrapper[5116]: I1208 17:57:54.210501 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/39860aea-277e-475d-b67b-a8112548f4e6-container-storage-root\") pod \"39860aea-277e-475d-b67b-a8112548f4e6\" (UID: \"39860aea-277e-475d-b67b-a8112548f4e6\") " Dec 08 17:57:54 crc kubenswrapper[5116]: I1208 17:57:54.211258 5116 
operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/39860aea-277e-475d-b67b-a8112548f4e6-container-storage-root" (OuterVolumeSpecName: "container-storage-root") pod "39860aea-277e-475d-b67b-a8112548f4e6" (UID: "39860aea-277e-475d-b67b-a8112548f4e6"). InnerVolumeSpecName "container-storage-root". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 08 17:57:54 crc kubenswrapper[5116]: I1208 17:57:54.211956 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/39860aea-277e-475d-b67b-a8112548f4e6-build-system-configs" (OuterVolumeSpecName: "build-system-configs") pod "39860aea-277e-475d-b67b-a8112548f4e6" (UID: "39860aea-277e-475d-b67b-a8112548f4e6"). InnerVolumeSpecName "build-system-configs". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 08 17:57:54 crc kubenswrapper[5116]: I1208 17:57:54.213255 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/39860aea-277e-475d-b67b-a8112548f4e6-buildcachedir" (OuterVolumeSpecName: "buildcachedir") pod "39860aea-277e-475d-b67b-a8112548f4e6" (UID: "39860aea-277e-475d-b67b-a8112548f4e6"). InnerVolumeSpecName "buildcachedir". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Dec 08 17:57:54 crc kubenswrapper[5116]: I1208 17:57:54.213391 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/39860aea-277e-475d-b67b-a8112548f4e6-node-pullsecrets" (OuterVolumeSpecName: "node-pullsecrets") pod "39860aea-277e-475d-b67b-a8112548f4e6" (UID: "39860aea-277e-475d-b67b-a8112548f4e6"). InnerVolumeSpecName "node-pullsecrets". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Dec 08 17:57:54 crc kubenswrapper[5116]: I1208 17:57:54.213620 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/39860aea-277e-475d-b67b-a8112548f4e6-build-blob-cache" (OuterVolumeSpecName: "build-blob-cache") pod "39860aea-277e-475d-b67b-a8112548f4e6" (UID: "39860aea-277e-475d-b67b-a8112548f4e6"). InnerVolumeSpecName "build-blob-cache". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 08 17:57:54 crc kubenswrapper[5116]: I1208 17:57:54.213752 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/39860aea-277e-475d-b67b-a8112548f4e6-container-storage-run" (OuterVolumeSpecName: "container-storage-run") pod "39860aea-277e-475d-b67b-a8112548f4e6" (UID: "39860aea-277e-475d-b67b-a8112548f4e6"). InnerVolumeSpecName "container-storage-run". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 08 17:57:54 crc kubenswrapper[5116]: I1208 17:57:54.214091 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/39860aea-277e-475d-b67b-a8112548f4e6-build-proxy-ca-bundles" (OuterVolumeSpecName: "build-proxy-ca-bundles") pod "39860aea-277e-475d-b67b-a8112548f4e6" (UID: "39860aea-277e-475d-b67b-a8112548f4e6"). InnerVolumeSpecName "build-proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 08 17:57:54 crc kubenswrapper[5116]: I1208 17:57:54.214147 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/39860aea-277e-475d-b67b-a8112548f4e6-buildworkdir" (OuterVolumeSpecName: "buildworkdir") pod "39860aea-277e-475d-b67b-a8112548f4e6" (UID: "39860aea-277e-475d-b67b-a8112548f4e6"). InnerVolumeSpecName "buildworkdir". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 08 17:57:54 crc kubenswrapper[5116]: I1208 17:57:54.214193 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/39860aea-277e-475d-b67b-a8112548f4e6-build-ca-bundles" (OuterVolumeSpecName: "build-ca-bundles") pod "39860aea-277e-475d-b67b-a8112548f4e6" (UID: "39860aea-277e-475d-b67b-a8112548f4e6"). InnerVolumeSpecName "build-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 08 17:57:54 crc kubenswrapper[5116]: I1208 17:57:54.218197 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/39860aea-277e-475d-b67b-a8112548f4e6-service-telemetry-framework-index-dockercfg-user-build-volume" (OuterVolumeSpecName: "service-telemetry-framework-index-dockercfg-user-build-volume") pod "39860aea-277e-475d-b67b-a8112548f4e6" (UID: "39860aea-277e-475d-b67b-a8112548f4e6"). InnerVolumeSpecName "service-telemetry-framework-index-dockercfg-user-build-volume". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 08 17:57:54 crc kubenswrapper[5116]: I1208 17:57:54.218358 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/39860aea-277e-475d-b67b-a8112548f4e6-builder-dockercfg-49c42-push" (OuterVolumeSpecName: "builder-dockercfg-49c42-push") pod "39860aea-277e-475d-b67b-a8112548f4e6" (UID: "39860aea-277e-475d-b67b-a8112548f4e6"). InnerVolumeSpecName "builder-dockercfg-49c42-push". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 08 17:57:54 crc kubenswrapper[5116]: I1208 17:57:54.218352 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/39860aea-277e-475d-b67b-a8112548f4e6-kube-api-access-9942x" (OuterVolumeSpecName: "kube-api-access-9942x") pod "39860aea-277e-475d-b67b-a8112548f4e6" (UID: "39860aea-277e-475d-b67b-a8112548f4e6"). InnerVolumeSpecName "kube-api-access-9942x". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 08 17:57:54 crc kubenswrapper[5116]: I1208 17:57:54.218612 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/39860aea-277e-475d-b67b-a8112548f4e6-builder-dockercfg-49c42-pull" (OuterVolumeSpecName: "builder-dockercfg-49c42-pull") pod "39860aea-277e-475d-b67b-a8112548f4e6" (UID: "39860aea-277e-475d-b67b-a8112548f4e6"). InnerVolumeSpecName "builder-dockercfg-49c42-pull". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 08 17:57:54 crc kubenswrapper[5116]: I1208 17:57:54.312144 5116 reconciler_common.go:299] "Volume detached for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/39860aea-277e-475d-b67b-a8112548f4e6-node-pullsecrets\") on node \"crc\" DevicePath \"\"" Dec 08 17:57:54 crc kubenswrapper[5116]: I1208 17:57:54.312195 5116 reconciler_common.go:299] "Volume detached for volume \"builder-dockercfg-49c42-push\" (UniqueName: \"kubernetes.io/secret/39860aea-277e-475d-b67b-a8112548f4e6-builder-dockercfg-49c42-push\") on node \"crc\" DevicePath \"\"" Dec 08 17:57:54 crc kubenswrapper[5116]: I1208 17:57:54.312208 5116 reconciler_common.go:299] "Volume detached for volume \"service-telemetry-framework-index-dockercfg-user-build-volume\" (UniqueName: \"kubernetes.io/secret/39860aea-277e-475d-b67b-a8112548f4e6-service-telemetry-framework-index-dockercfg-user-build-volume\") on node \"crc\" DevicePath \"\"" Dec 08 17:57:54 crc kubenswrapper[5116]: I1208 17:57:54.312219 5116 reconciler_common.go:299] "Volume detached for volume \"builder-dockercfg-49c42-pull\" (UniqueName: \"kubernetes.io/secret/39860aea-277e-475d-b67b-a8112548f4e6-builder-dockercfg-49c42-pull\") on node \"crc\" DevicePath \"\"" Dec 08 17:57:54 crc kubenswrapper[5116]: I1208 17:57:54.312229 5116 reconciler_common.go:299] "Volume detached for volume \"build-proxy-ca-bundles\" (UniqueName: 
\"kubernetes.io/configmap/39860aea-277e-475d-b67b-a8112548f4e6-build-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Dec 08 17:57:54 crc kubenswrapper[5116]: I1208 17:57:54.312237 5116 reconciler_common.go:299] "Volume detached for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/39860aea-277e-475d-b67b-a8112548f4e6-container-storage-root\") on node \"crc\" DevicePath \"\"" Dec 08 17:57:54 crc kubenswrapper[5116]: I1208 17:57:54.312263 5116 reconciler_common.go:299] "Volume detached for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/39860aea-277e-475d-b67b-a8112548f4e6-build-system-configs\") on node \"crc\" DevicePath \"\"" Dec 08 17:57:54 crc kubenswrapper[5116]: I1208 17:57:54.312273 5116 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-9942x\" (UniqueName: \"kubernetes.io/projected/39860aea-277e-475d-b67b-a8112548f4e6-kube-api-access-9942x\") on node \"crc\" DevicePath \"\"" Dec 08 17:57:54 crc kubenswrapper[5116]: I1208 17:57:54.312281 5116 reconciler_common.go:299] "Volume detached for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/39860aea-277e-475d-b67b-a8112548f4e6-buildcachedir\") on node \"crc\" DevicePath \"\"" Dec 08 17:57:54 crc kubenswrapper[5116]: I1208 17:57:54.312288 5116 reconciler_common.go:299] "Volume detached for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/39860aea-277e-475d-b67b-a8112548f4e6-buildworkdir\") on node \"crc\" DevicePath \"\"" Dec 08 17:57:54 crc kubenswrapper[5116]: I1208 17:57:54.312296 5116 reconciler_common.go:299] "Volume detached for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/39860aea-277e-475d-b67b-a8112548f4e6-build-blob-cache\") on node \"crc\" DevicePath \"\"" Dec 08 17:57:54 crc kubenswrapper[5116]: I1208 17:57:54.312304 5116 reconciler_common.go:299] "Volume detached for volume \"container-storage-run\" (UniqueName: 
\"kubernetes.io/empty-dir/39860aea-277e-475d-b67b-a8112548f4e6-container-storage-run\") on node \"crc\" DevicePath \"\"" Dec 08 17:57:54 crc kubenswrapper[5116]: I1208 17:57:54.312311 5116 reconciler_common.go:299] "Volume detached for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/39860aea-277e-475d-b67b-a8112548f4e6-build-ca-bundles\") on node \"crc\" DevicePath \"\"" Dec 08 17:57:54 crc kubenswrapper[5116]: I1208 17:57:54.384606 5116 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/infrawatch-operators-fm8b6"] Dec 08 17:57:54 crc kubenswrapper[5116]: I1208 17:57:54.386913 5116 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="39860aea-277e-475d-b67b-a8112548f4e6" containerName="git-clone" Dec 08 17:57:54 crc kubenswrapper[5116]: I1208 17:57:54.386968 5116 state_mem.go:107] "Deleted CPUSet assignment" podUID="39860aea-277e-475d-b67b-a8112548f4e6" containerName="git-clone" Dec 08 17:57:54 crc kubenswrapper[5116]: I1208 17:57:54.387138 5116 memory_manager.go:356] "RemoveStaleState removing state" podUID="39860aea-277e-475d-b67b-a8112548f4e6" containerName="git-clone" Dec 08 17:57:54 crc kubenswrapper[5116]: I1208 17:57:54.414699 5116 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/infrawatch-operators-fm8b6"] Dec 08 17:57:54 crc kubenswrapper[5116]: I1208 17:57:54.414898 5116 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/infrawatch-operators-fm8b6" Dec 08 17:57:54 crc kubenswrapper[5116]: I1208 17:57:54.417361 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"infrawatch-operators-dockercfg-xzqvg\"" Dec 08 17:57:54 crc kubenswrapper[5116]: I1208 17:57:54.515712 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n2xqw\" (UniqueName: \"kubernetes.io/projected/a33e838a-5d2f-49dd-b574-56dd8ada668c-kube-api-access-n2xqw\") pod \"infrawatch-operators-fm8b6\" (UID: \"a33e838a-5d2f-49dd-b574-56dd8ada668c\") " pod="service-telemetry/infrawatch-operators-fm8b6" Dec 08 17:57:54 crc kubenswrapper[5116]: I1208 17:57:54.617623 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-n2xqw\" (UniqueName: \"kubernetes.io/projected/a33e838a-5d2f-49dd-b574-56dd8ada668c-kube-api-access-n2xqw\") pod \"infrawatch-operators-fm8b6\" (UID: \"a33e838a-5d2f-49dd-b574-56dd8ada668c\") " pod="service-telemetry/infrawatch-operators-fm8b6" Dec 08 17:57:54 crc kubenswrapper[5116]: I1208 17:57:54.639476 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-n2xqw\" (UniqueName: \"kubernetes.io/projected/a33e838a-5d2f-49dd-b574-56dd8ada668c-kube-api-access-n2xqw\") pod \"infrawatch-operators-fm8b6\" (UID: \"a33e838a-5d2f-49dd-b574-56dd8ada668c\") " pod="service-telemetry/infrawatch-operators-fm8b6" Dec 08 17:57:54 crc kubenswrapper[5116]: I1208 17:57:54.714670 5116 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_service-telemetry-framework-index-4-build_39860aea-277e-475d-b67b-a8112548f4e6/git-clone/0.log" Dec 08 17:57:54 crc kubenswrapper[5116]: I1208 17:57:54.714726 5116 generic.go:358] "Generic (PLEG): container finished" podID="39860aea-277e-475d-b67b-a8112548f4e6" containerID="362ce184dbaff6f78ea29257784daee395c74a5e87427f1635613b31d6f73729" 
exitCode=1 Dec 08 17:57:54 crc kubenswrapper[5116]: I1208 17:57:54.714840 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-framework-index-4-build" event={"ID":"39860aea-277e-475d-b67b-a8112548f4e6","Type":"ContainerDied","Data":"362ce184dbaff6f78ea29257784daee395c74a5e87427f1635613b31d6f73729"} Dec 08 17:57:54 crc kubenswrapper[5116]: I1208 17:57:54.714852 5116 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/service-telemetry-framework-index-4-build" Dec 08 17:57:54 crc kubenswrapper[5116]: I1208 17:57:54.714874 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-framework-index-4-build" event={"ID":"39860aea-277e-475d-b67b-a8112548f4e6","Type":"ContainerDied","Data":"0829ae85b5e9083f92a99468de3f25c0fcffb5b2aa87537c2bcfd715763d6b09"} Dec 08 17:57:54 crc kubenswrapper[5116]: I1208 17:57:54.714899 5116 scope.go:117] "RemoveContainer" containerID="362ce184dbaff6f78ea29257784daee395c74a5e87427f1635613b31d6f73729" Dec 08 17:57:54 crc kubenswrapper[5116]: I1208 17:57:54.739828 5116 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["service-telemetry/service-telemetry-framework-index-4-build"] Dec 08 17:57:54 crc kubenswrapper[5116]: I1208 17:57:54.740937 5116 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/infrawatch-operators-fm8b6" Dec 08 17:57:54 crc kubenswrapper[5116]: I1208 17:57:54.745429 5116 scope.go:117] "RemoveContainer" containerID="362ce184dbaff6f78ea29257784daee395c74a5e87427f1635613b31d6f73729" Dec 08 17:57:54 crc kubenswrapper[5116]: E1208 17:57:54.745874 5116 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"362ce184dbaff6f78ea29257784daee395c74a5e87427f1635613b31d6f73729\": container with ID starting with 362ce184dbaff6f78ea29257784daee395c74a5e87427f1635613b31d6f73729 not found: ID does not exist" containerID="362ce184dbaff6f78ea29257784daee395c74a5e87427f1635613b31d6f73729" Dec 08 17:57:54 crc kubenswrapper[5116]: I1208 17:57:54.745921 5116 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"362ce184dbaff6f78ea29257784daee395c74a5e87427f1635613b31d6f73729"} err="failed to get container status \"362ce184dbaff6f78ea29257784daee395c74a5e87427f1635613b31d6f73729\": rpc error: code = NotFound desc = could not find container \"362ce184dbaff6f78ea29257784daee395c74a5e87427f1635613b31d6f73729\": container with ID starting with 362ce184dbaff6f78ea29257784daee395c74a5e87427f1635613b31d6f73729 not found: ID does not exist" Dec 08 17:57:54 crc kubenswrapper[5116]: I1208 17:57:54.746457 5116 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["service-telemetry/service-telemetry-framework-index-4-build"] Dec 08 17:57:55 crc kubenswrapper[5116]: I1208 17:57:55.184534 5116 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/infrawatch-operators-fm8b6"] Dec 08 17:57:55 crc kubenswrapper[5116]: W1208 17:57:55.190632 5116 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda33e838a_5d2f_49dd_b574_56dd8ada668c.slice/crio-3f0490a8c1dd1771ff825ed697d242863c3f941e2a8a9212f6ef7c901b122766 WatchSource:0}: Error 
finding container 3f0490a8c1dd1771ff825ed697d242863c3f941e2a8a9212f6ef7c901b122766: Status 404 returned error can't find the container with id 3f0490a8c1dd1771ff825ed697d242863c3f941e2a8a9212f6ef7c901b122766 Dec 08 17:57:55 crc kubenswrapper[5116]: E1208 17:57:55.260694 5116 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown" image="image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest" Dec 08 17:57:55 crc kubenswrapper[5116]: E1208 17:57:55.261215 5116 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:registry-server,Image:image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:grpc,HostPort:0,ContainerPort:50051,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-n2xqw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[grpc_health_probe 
-addr=:50051],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[grpc_health_probe -addr=:50051],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:*false,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[grpc_health_probe -addr=:50051],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:10,TerminationGracePeriodSeconds:nil,},ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod infrawatch-operators-fm8b6_service-telemetry(a33e838a-5d2f-49dd-b574-56dd8ada668c): ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown" logger="UnhandledError" Dec 08 
17:57:55 crc kubenswrapper[5116]: E1208 17:57:55.262561 5116 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ErrImagePull: \"unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-fm8b6" podUID="a33e838a-5d2f-49dd-b574-56dd8ada668c" Dec 08 17:57:55 crc kubenswrapper[5116]: I1208 17:57:55.724226 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/infrawatch-operators-fm8b6" event={"ID":"a33e838a-5d2f-49dd-b574-56dd8ada668c","Type":"ContainerStarted","Data":"3f0490a8c1dd1771ff825ed697d242863c3f941e2a8a9212f6ef7c901b122766"} Dec 08 17:57:55 crc kubenswrapper[5116]: E1208 17:57:55.725003 5116 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in 
image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-fm8b6" podUID="a33e838a-5d2f-49dd-b574-56dd8ada668c" Dec 08 17:57:56 crc kubenswrapper[5116]: I1208 17:57:56.699432 5116 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="39860aea-277e-475d-b67b-a8112548f4e6" path="/var/lib/kubelet/pods/39860aea-277e-475d-b67b-a8112548f4e6/volumes" Dec 08 17:57:56 crc kubenswrapper[5116]: E1208 17:57:56.731055 5116 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-fm8b6" podUID="a33e838a-5d2f-49dd-b574-56dd8ada668c" Dec 08 17:57:59 crc kubenswrapper[5116]: I1208 17:57:59.573907 5116 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["service-telemetry/infrawatch-operators-fm8b6"] Dec 08 17:58:00 crc kubenswrapper[5116]: I1208 17:58:00.267921 5116 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/infrawatch-operators-fm8b6"
Dec 08 17:58:00 crc kubenswrapper[5116]: I1208 17:58:00.316456 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n2xqw\" (UniqueName: \"kubernetes.io/projected/a33e838a-5d2f-49dd-b574-56dd8ada668c-kube-api-access-n2xqw\") pod \"a33e838a-5d2f-49dd-b574-56dd8ada668c\" (UID: \"a33e838a-5d2f-49dd-b574-56dd8ada668c\") "
Dec 08 17:58:00 crc kubenswrapper[5116]: I1208 17:58:00.323946 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a33e838a-5d2f-49dd-b574-56dd8ada668c-kube-api-access-n2xqw" (OuterVolumeSpecName: "kube-api-access-n2xqw") pod "a33e838a-5d2f-49dd-b574-56dd8ada668c" (UID: "a33e838a-5d2f-49dd-b574-56dd8ada668c"). InnerVolumeSpecName "kube-api-access-n2xqw". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Dec 08 17:58:00 crc kubenswrapper[5116]: I1208 17:58:00.386782 5116 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/infrawatch-operators-fxj49"]
Dec 08 17:58:00 crc kubenswrapper[5116]: I1208 17:58:00.395802 5116 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/infrawatch-operators-fxj49"
Dec 08 17:58:00 crc kubenswrapper[5116]: I1208 17:58:00.395835 5116 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/infrawatch-operators-fxj49"]
Dec 08 17:58:00 crc kubenswrapper[5116]: I1208 17:58:00.418026 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jws5h\" (UniqueName: \"kubernetes.io/projected/bb524cfa-b4aa-49e1-bd03-83dd9676a58c-kube-api-access-jws5h\") pod \"infrawatch-operators-fxj49\" (UID: \"bb524cfa-b4aa-49e1-bd03-83dd9676a58c\") " pod="service-telemetry/infrawatch-operators-fxj49"
Dec 08 17:58:00 crc kubenswrapper[5116]: I1208 17:58:00.418204 5116 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-n2xqw\" (UniqueName: \"kubernetes.io/projected/a33e838a-5d2f-49dd-b574-56dd8ada668c-kube-api-access-n2xqw\") on node \"crc\" DevicePath \"\""
Dec 08 17:58:00 crc kubenswrapper[5116]: I1208 17:58:00.519044 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-jws5h\" (UniqueName: \"kubernetes.io/projected/bb524cfa-b4aa-49e1-bd03-83dd9676a58c-kube-api-access-jws5h\") pod \"infrawatch-operators-fxj49\" (UID: \"bb524cfa-b4aa-49e1-bd03-83dd9676a58c\") " pod="service-telemetry/infrawatch-operators-fxj49"
Dec 08 17:58:00 crc kubenswrapper[5116]: I1208 17:58:00.538319 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-jws5h\" (UniqueName: \"kubernetes.io/projected/bb524cfa-b4aa-49e1-bd03-83dd9676a58c-kube-api-access-jws5h\") pod \"infrawatch-operators-fxj49\" (UID: \"bb524cfa-b4aa-49e1-bd03-83dd9676a58c\") " pod="service-telemetry/infrawatch-operators-fxj49"
Dec 08 17:58:00 crc kubenswrapper[5116]: I1208 17:58:00.711913 5116 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/infrawatch-operators-fxj49"
Dec 08 17:58:00 crc kubenswrapper[5116]: I1208 17:58:00.757232 5116 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/infrawatch-operators-fm8b6"
Dec 08 17:58:00 crc kubenswrapper[5116]: I1208 17:58:00.757277 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/infrawatch-operators-fm8b6" event={"ID":"a33e838a-5d2f-49dd-b574-56dd8ada668c","Type":"ContainerDied","Data":"3f0490a8c1dd1771ff825ed697d242863c3f941e2a8a9212f6ef7c901b122766"}
Dec 08 17:58:00 crc kubenswrapper[5116]: I1208 17:58:00.802558 5116 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["service-telemetry/infrawatch-operators-fm8b6"]
Dec 08 17:58:00 crc kubenswrapper[5116]: I1208 17:58:00.823345 5116 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["service-telemetry/infrawatch-operators-fm8b6"]
Dec 08 17:58:00 crc kubenswrapper[5116]: I1208 17:58:00.929283 5116 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/infrawatch-operators-fxj49"]
Dec 08 17:58:01 crc kubenswrapper[5116]: E1208 17:58:01.001919 5116 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown" image="image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest"
Dec 08 17:58:01 crc kubenswrapper[5116]: E1208 17:58:01.002278 5116 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:registry-server,Image:image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:grpc,HostPort:0,ContainerPort:50051,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-jws5h,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[grpc_health_probe -addr=:50051],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[grpc_health_probe -addr=:50051],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:*false,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[grpc_health_probe -addr=:50051],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:10,TerminationGracePeriodSeconds:nil,},ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod infrawatch-operators-fxj49_service-telemetry(bb524cfa-b4aa-49e1-bd03-83dd9676a58c): ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown" logger="UnhandledError"
Dec 08 17:58:01 crc kubenswrapper[5116]: E1208 17:58:01.003540 5116 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ErrImagePull: \"unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-fxj49" podUID="bb524cfa-b4aa-49e1-bd03-83dd9676a58c"
Dec 08 17:58:01 crc kubenswrapper[5116]: I1208 17:58:01.776863 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/infrawatch-operators-fxj49" event={"ID":"bb524cfa-b4aa-49e1-bd03-83dd9676a58c","Type":"ContainerStarted","Data":"05cad937a473eb961fff6f2f1ad4caec29fb1238578ef7fe5eaf80b8da88c02b"}
Dec 08 17:58:01 crc kubenswrapper[5116]: E1208 17:58:01.780342 5116 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-fxj49" podUID="bb524cfa-b4aa-49e1-bd03-83dd9676a58c"
Dec 08 17:58:02 crc kubenswrapper[5116]: I1208 17:58:02.688763 5116 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a33e838a-5d2f-49dd-b574-56dd8ada668c" path="/var/lib/kubelet/pods/a33e838a-5d2f-49dd-b574-56dd8ada668c/volumes"
Dec 08 17:58:02 crc kubenswrapper[5116]: E1208 17:58:02.784638 5116 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-fxj49" podUID="bb524cfa-b4aa-49e1-bd03-83dd9676a58c"
Dec 08 17:58:14 crc kubenswrapper[5116]: E1208 17:58:14.761870 5116 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown" image="image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest"
Dec 08 17:58:14 crc kubenswrapper[5116]: E1208 17:58:14.762754 5116 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:registry-server,Image:image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:grpc,HostPort:0,ContainerPort:50051,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-jws5h,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[grpc_health_probe -addr=:50051],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[grpc_health_probe -addr=:50051],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:*false,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[grpc_health_probe -addr=:50051],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:10,TerminationGracePeriodSeconds:nil,},ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod infrawatch-operators-fxj49_service-telemetry(bb524cfa-b4aa-49e1-bd03-83dd9676a58c): ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown" logger="UnhandledError"
Dec 08 17:58:14 crc kubenswrapper[5116]: E1208 17:58:14.764039 5116 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ErrImagePull: \"unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-fxj49" podUID="bb524cfa-b4aa-49e1-bd03-83dd9676a58c"
Dec 08 17:58:26 crc kubenswrapper[5116]: E1208 17:58:26.704853 5116 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-fxj49" podUID="bb524cfa-b4aa-49e1-bd03-83dd9676a58c"
Dec 08 17:58:33 crc kubenswrapper[5116]: I1208 17:58:33.334955 5116 patch_prober.go:28] interesting pod/machine-config-daemon-frh5r container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Dec 08 17:58:33 crc kubenswrapper[5116]: I1208 17:58:33.336438 5116 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-frh5r" podUID="f2e88345-fa91-4bb3-bd9d-a89a8293bffe" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Dec 08 17:58:38 crc kubenswrapper[5116]: E1208 17:58:38.745487 5116 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown" image="image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest"
Dec 08 17:58:38 crc kubenswrapper[5116]: E1208 17:58:38.746345 5116 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:registry-server,Image:image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:grpc,HostPort:0,ContainerPort:50051,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-jws5h,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[grpc_health_probe -addr=:50051],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[grpc_health_probe -addr=:50051],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:*false,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[grpc_health_probe -addr=:50051],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:10,TerminationGracePeriodSeconds:nil,},ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod infrawatch-operators-fxj49_service-telemetry(bb524cfa-b4aa-49e1-bd03-83dd9676a58c): ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown" logger="UnhandledError"
Dec 08 17:58:38 crc kubenswrapper[5116]: E1208 17:58:38.748079 5116 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ErrImagePull: \"unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-fxj49" podUID="bb524cfa-b4aa-49e1-bd03-83dd9676a58c"
Dec 08 17:58:39 crc kubenswrapper[5116]: E1208 17:58:39.645207 5116 certificate_manager.go:613] "Certificate request was not signed" err="timed out waiting for the condition" logger="kubernetes.io/kubelet-serving.UnhandledError"
Dec 08 17:58:41 crc kubenswrapper[5116]: I1208 17:58:41.727215 5116 certificate_manager.go:566] "Rotating certificates" logger="kubernetes.io/kubelet-serving"
Dec 08 17:58:41 crc kubenswrapper[5116]: I1208 17:58:41.736373 5116 reflector.go:430] "Caches populated" logger="kubernetes.io/kubelet-serving" type="*v1.CertificateSigningRequest" reflector="k8s.io/client-go/tools/watch/informerwatcher.go:162"
Dec 08 17:58:41 crc kubenswrapper[5116]: I1208 17:58:41.763902 5116 ???:1] "http: TLS handshake error from 192.168.126.11:60064: no serving certificate available for the kubelet"
Dec 08 17:58:41 crc kubenswrapper[5116]: I1208 17:58:41.792607 5116 ???:1] "http: TLS handshake error from 192.168.126.11:60072: no serving certificate available for the kubelet"
Dec 08 17:58:41 crc kubenswrapper[5116]: I1208 17:58:41.823769 5116 ???:1] "http: TLS handshake error from 192.168.126.11:60088: no serving certificate available for the kubelet"
Dec 08 17:58:41 crc kubenswrapper[5116]: I1208 17:58:41.864911 5116 ???:1] "http: TLS handshake error from 192.168.126.11:60098: no serving certificate available for the kubelet"
Dec 08 17:58:41 crc kubenswrapper[5116]: I1208 17:58:41.926567 5116 ???:1] "http: TLS handshake error from 192.168.126.11:60110: no serving certificate available for the kubelet"
Dec 08 17:58:42 crc kubenswrapper[5116]: I1208 17:58:42.034362 5116 ???:1] "http: TLS handshake error from 192.168.126.11:60122: no serving certificate available for the kubelet"
Dec 08 17:58:42 crc kubenswrapper[5116]: I1208 17:58:42.215198 5116 ???:1] "http: TLS handshake error from 192.168.126.11:60132: no serving certificate available for the kubelet"
Dec 08 17:58:42 crc kubenswrapper[5116]: I1208 17:58:42.556895 5116 ???:1] "http: TLS handshake error from 192.168.126.11:60142: no serving certificate available for the kubelet"
Dec 08 17:58:43 crc kubenswrapper[5116]: I1208 17:58:43.220284 5116 ???:1] "http: TLS handshake error from 192.168.126.11:35104: no serving certificate available for the kubelet"
Dec 08 17:58:44 crc kubenswrapper[5116]: I1208 17:58:44.525131 5116 ???:1] "http: TLS handshake error from 192.168.126.11:35114: no serving certificate available for the kubelet"
Dec 08 17:58:47 crc kubenswrapper[5116]: I1208 17:58:47.109168 5116 ???:1] "http: TLS handshake error from 192.168.126.11:35130: no serving certificate available for the kubelet"
Dec 08 17:58:51 crc kubenswrapper[5116]: E1208 17:58:51.680522 5116 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-fxj49" podUID="bb524cfa-b4aa-49e1-bd03-83dd9676a58c"
Dec 08 17:58:52 crc kubenswrapper[5116]: I1208 17:58:52.251734 5116 ???:1] "http: TLS handshake error from 192.168.126.11:35146: no serving certificate available for the kubelet"
Dec 08 17:59:02 crc kubenswrapper[5116]: I1208 17:59:02.525619 5116 ???:1] "http: TLS handshake error from 192.168.126.11:44704: no serving certificate available for the kubelet"
Dec 08 17:59:02 crc kubenswrapper[5116]: I1208 17:59:02.691379 5116 provider.go:93] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Dec 08 17:59:02 crc kubenswrapper[5116]: E1208 17:59:02.691660 5116 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-fxj49" podUID="bb524cfa-b4aa-49e1-bd03-83dd9676a58c"
Dec 08 17:59:03 crc kubenswrapper[5116]: I1208 17:59:03.334922 5116 patch_prober.go:28] interesting pod/machine-config-daemon-frh5r container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Dec 08 17:59:03 crc kubenswrapper[5116]: I1208 17:59:03.335077 5116 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-frh5r" podUID="f2e88345-fa91-4bb3-bd9d-a89a8293bffe" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Dec 08 17:59:13 crc kubenswrapper[5116]: E1208 17:59:13.681033 5116 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-fxj49" podUID="bb524cfa-b4aa-49e1-bd03-83dd9676a58c"
Dec 08 17:59:23 crc kubenswrapper[5116]: I1208 17:59:23.035315 5116 ???:1] "http: TLS handshake error from 192.168.126.11:55470: no serving certificate available for the kubelet"
Dec 08 17:59:28 crc kubenswrapper[5116]: E1208 17:59:28.742569 5116 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown" image="image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest"
Dec 08 17:59:28 crc kubenswrapper[5116]: E1208 17:59:28.743784 5116 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:registry-server,Image:image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:grpc,HostPort:0,ContainerPort:50051,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-jws5h,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[grpc_health_probe -addr=:50051],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[grpc_health_probe -addr=:50051],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:*false,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[grpc_health_probe -addr=:50051],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:10,TerminationGracePeriodSeconds:nil,},ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod infrawatch-operators-fxj49_service-telemetry(bb524cfa-b4aa-49e1-bd03-83dd9676a58c): ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown" logger="UnhandledError"
Dec 08 17:59:28 crc kubenswrapper[5116]: E1208 17:59:28.745016 5116 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ErrImagePull: \"unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-fxj49" podUID="bb524cfa-b4aa-49e1-bd03-83dd9676a58c"
Dec 08 17:59:33 crc kubenswrapper[5116]: I1208 17:59:33.336497 5116 patch_prober.go:28] interesting pod/machine-config-daemon-frh5r container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Dec 08 17:59:33 crc kubenswrapper[5116]: I1208 17:59:33.337720 5116 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-frh5r" podUID="f2e88345-fa91-4bb3-bd9d-a89a8293bffe" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Dec 08 17:59:33 crc kubenswrapper[5116]: I1208 17:59:33.337961 5116 kubelet.go:2658] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-frh5r"
Dec 08 17:59:33 crc kubenswrapper[5116]: I1208 17:59:33.497456 5116 kuberuntime_manager.go:1107] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"268f636f479af3afc0e2d68841471d42bc01301ec891f325abd7818c7af95e59"} pod="openshift-machine-config-operator/machine-config-daemon-frh5r" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Dec 08 17:59:33 crc kubenswrapper[5116]: I1208 17:59:33.497569 5116 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-frh5r" podUID="f2e88345-fa91-4bb3-bd9d-a89a8293bffe" containerName="machine-config-daemon" containerID="cri-o://268f636f479af3afc0e2d68841471d42bc01301ec891f325abd7818c7af95e59" gracePeriod=600
Dec 08 17:59:34 crc kubenswrapper[5116]: I1208 17:59:34.524128 5116 generic.go:358] "Generic (PLEG): container finished" podID="f2e88345-fa91-4bb3-bd9d-a89a8293bffe" containerID="268f636f479af3afc0e2d68841471d42bc01301ec891f325abd7818c7af95e59" exitCode=0
Dec 08 17:59:34 crc kubenswrapper[5116]: I1208 17:59:34.524274 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-frh5r" event={"ID":"f2e88345-fa91-4bb3-bd9d-a89a8293bffe","Type":"ContainerDied","Data":"268f636f479af3afc0e2d68841471d42bc01301ec891f325abd7818c7af95e59"}
Dec 08 17:59:34 crc kubenswrapper[5116]: I1208 17:59:34.525126 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-frh5r" event={"ID":"f2e88345-fa91-4bb3-bd9d-a89a8293bffe","Type":"ContainerStarted","Data":"b453ed10c65aa7cc1240df68270146d64e9a2d735135be338c42a97ae15145ba"}
Dec 08 17:59:34 crc kubenswrapper[5116]: I1208 17:59:34.525191 5116 scope.go:117] "RemoveContainer" containerID="a8c9ea73a3f3a6aeb913be43880595d7b2a74416932fa51f8351d035f08e4a16"
Dec 08 17:59:43 crc kubenswrapper[5116]: E1208 17:59:43.681717 5116 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-fxj49" podUID="bb524cfa-b4aa-49e1-bd03-83dd9676a58c"
Dec 08 17:59:57 crc kubenswrapper[5116]: E1208 17:59:57.681149 5116 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-fxj49" podUID="bb524cfa-b4aa-49e1-bd03-83dd9676a58c"
Dec 08 18:00:00 crc kubenswrapper[5116]: I1208 18:00:00.169512 5116 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29420280-v457v"]
Dec 08 18:00:00 crc kubenswrapper[5116]: I1208 18:00:00.201018 5116 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29420280-v457v"]
Dec 08 18:00:00 crc kubenswrapper[5116]: I1208 18:00:00.201339 5116 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29420280-v457v" Dec 08 18:00:00 crc kubenswrapper[5116]: I1208 18:00:00.209124 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-operator-lifecycle-manager\"/\"collect-profiles-config\"" Dec 08 18:00:00 crc kubenswrapper[5116]: I1208 18:00:00.211025 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"collect-profiles-dockercfg-vfqp6\"" Dec 08 18:00:00 crc kubenswrapper[5116]: I1208 18:00:00.371027 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pfvsh\" (UniqueName: \"kubernetes.io/projected/27a52d09-4375-4801-961a-ddc050b80786-kube-api-access-pfvsh\") pod \"collect-profiles-29420280-v457v\" (UID: \"27a52d09-4375-4801-961a-ddc050b80786\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29420280-v457v" Dec 08 18:00:00 crc kubenswrapper[5116]: I1208 18:00:00.371210 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/27a52d09-4375-4801-961a-ddc050b80786-config-volume\") pod \"collect-profiles-29420280-v457v\" (UID: \"27a52d09-4375-4801-961a-ddc050b80786\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29420280-v457v" Dec 08 18:00:00 crc kubenswrapper[5116]: I1208 18:00:00.371270 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/27a52d09-4375-4801-961a-ddc050b80786-secret-volume\") pod \"collect-profiles-29420280-v457v\" (UID: \"27a52d09-4375-4801-961a-ddc050b80786\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29420280-v457v" Dec 08 18:00:00 crc kubenswrapper[5116]: I1208 18:00:00.473054 5116 reconciler_common.go:224] "operationExecutor.MountVolume started 
for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/27a52d09-4375-4801-961a-ddc050b80786-config-volume\") pod \"collect-profiles-29420280-v457v\" (UID: \"27a52d09-4375-4801-961a-ddc050b80786\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29420280-v457v" Dec 08 18:00:00 crc kubenswrapper[5116]: I1208 18:00:00.473120 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/27a52d09-4375-4801-961a-ddc050b80786-secret-volume\") pod \"collect-profiles-29420280-v457v\" (UID: \"27a52d09-4375-4801-961a-ddc050b80786\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29420280-v457v" Dec 08 18:00:00 crc kubenswrapper[5116]: I1208 18:00:00.473157 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-pfvsh\" (UniqueName: \"kubernetes.io/projected/27a52d09-4375-4801-961a-ddc050b80786-kube-api-access-pfvsh\") pod \"collect-profiles-29420280-v457v\" (UID: \"27a52d09-4375-4801-961a-ddc050b80786\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29420280-v457v" Dec 08 18:00:00 crc kubenswrapper[5116]: I1208 18:00:00.474531 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/27a52d09-4375-4801-961a-ddc050b80786-config-volume\") pod \"collect-profiles-29420280-v457v\" (UID: \"27a52d09-4375-4801-961a-ddc050b80786\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29420280-v457v" Dec 08 18:00:00 crc kubenswrapper[5116]: I1208 18:00:00.481592 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/27a52d09-4375-4801-961a-ddc050b80786-secret-volume\") pod \"collect-profiles-29420280-v457v\" (UID: \"27a52d09-4375-4801-961a-ddc050b80786\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29420280-v457v" Dec 08 18:00:00 crc 
kubenswrapper[5116]: I1208 18:00:00.494695 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-pfvsh\" (UniqueName: \"kubernetes.io/projected/27a52d09-4375-4801-961a-ddc050b80786-kube-api-access-pfvsh\") pod \"collect-profiles-29420280-v457v\" (UID: \"27a52d09-4375-4801-961a-ddc050b80786\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29420280-v457v" Dec 08 18:00:00 crc kubenswrapper[5116]: I1208 18:00:00.539633 5116 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29420280-v457v" Dec 08 18:00:00 crc kubenswrapper[5116]: I1208 18:00:00.814348 5116 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29420280-v457v"] Dec 08 18:00:01 crc kubenswrapper[5116]: I1208 18:00:01.717627 5116 generic.go:358] "Generic (PLEG): container finished" podID="27a52d09-4375-4801-961a-ddc050b80786" containerID="cf62c0a9f0a46fbb08717347c472d76ee71baf9e81a7d6877409a7ee917df83c" exitCode=0 Dec 08 18:00:01 crc kubenswrapper[5116]: I1208 18:00:01.717679 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29420280-v457v" event={"ID":"27a52d09-4375-4801-961a-ddc050b80786","Type":"ContainerDied","Data":"cf62c0a9f0a46fbb08717347c472d76ee71baf9e81a7d6877409a7ee917df83c"} Dec 08 18:00:01 crc kubenswrapper[5116]: I1208 18:00:01.717722 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29420280-v457v" event={"ID":"27a52d09-4375-4801-961a-ddc050b80786","Type":"ContainerStarted","Data":"3b5aff39c065a716af85a533861b950b41649eb23ab0f702e753d3dc1af1dfa7"} Dec 08 18:00:03 crc kubenswrapper[5116]: I1208 18:00:03.060689 5116 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29420280-v457v" Dec 08 18:00:03 crc kubenswrapper[5116]: I1208 18:00:03.106639 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/27a52d09-4375-4801-961a-ddc050b80786-config-volume\") pod \"27a52d09-4375-4801-961a-ddc050b80786\" (UID: \"27a52d09-4375-4801-961a-ddc050b80786\") " Dec 08 18:00:03 crc kubenswrapper[5116]: I1208 18:00:03.106891 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/27a52d09-4375-4801-961a-ddc050b80786-secret-volume\") pod \"27a52d09-4375-4801-961a-ddc050b80786\" (UID: \"27a52d09-4375-4801-961a-ddc050b80786\") " Dec 08 18:00:03 crc kubenswrapper[5116]: I1208 18:00:03.106917 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pfvsh\" (UniqueName: \"kubernetes.io/projected/27a52d09-4375-4801-961a-ddc050b80786-kube-api-access-pfvsh\") pod \"27a52d09-4375-4801-961a-ddc050b80786\" (UID: \"27a52d09-4375-4801-961a-ddc050b80786\") " Dec 08 18:00:03 crc kubenswrapper[5116]: I1208 18:00:03.108597 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/27a52d09-4375-4801-961a-ddc050b80786-config-volume" (OuterVolumeSpecName: "config-volume") pod "27a52d09-4375-4801-961a-ddc050b80786" (UID: "27a52d09-4375-4801-961a-ddc050b80786"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 08 18:00:03 crc kubenswrapper[5116]: I1208 18:00:03.120549 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/27a52d09-4375-4801-961a-ddc050b80786-kube-api-access-pfvsh" (OuterVolumeSpecName: "kube-api-access-pfvsh") pod "27a52d09-4375-4801-961a-ddc050b80786" (UID: "27a52d09-4375-4801-961a-ddc050b80786"). 
InnerVolumeSpecName "kube-api-access-pfvsh". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 08 18:00:03 crc kubenswrapper[5116]: I1208 18:00:03.120574 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/27a52d09-4375-4801-961a-ddc050b80786-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "27a52d09-4375-4801-961a-ddc050b80786" (UID: "27a52d09-4375-4801-961a-ddc050b80786"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 08 18:00:03 crc kubenswrapper[5116]: I1208 18:00:03.208407 5116 reconciler_common.go:299] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/27a52d09-4375-4801-961a-ddc050b80786-secret-volume\") on node \"crc\" DevicePath \"\"" Dec 08 18:00:03 crc kubenswrapper[5116]: I1208 18:00:03.208458 5116 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-pfvsh\" (UniqueName: \"kubernetes.io/projected/27a52d09-4375-4801-961a-ddc050b80786-kube-api-access-pfvsh\") on node \"crc\" DevicePath \"\"" Dec 08 18:00:03 crc kubenswrapper[5116]: I1208 18:00:03.208467 5116 reconciler_common.go:299] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/27a52d09-4375-4801-961a-ddc050b80786-config-volume\") on node \"crc\" DevicePath \"\"" Dec 08 18:00:03 crc kubenswrapper[5116]: I1208 18:00:03.734458 5116 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29420280-v457v" Dec 08 18:00:03 crc kubenswrapper[5116]: I1208 18:00:03.734753 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29420280-v457v" event={"ID":"27a52d09-4375-4801-961a-ddc050b80786","Type":"ContainerDied","Data":"3b5aff39c065a716af85a533861b950b41649eb23ab0f702e753d3dc1af1dfa7"} Dec 08 18:00:03 crc kubenswrapper[5116]: I1208 18:00:03.734862 5116 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3b5aff39c065a716af85a533861b950b41649eb23ab0f702e753d3dc1af1dfa7" Dec 08 18:00:04 crc kubenswrapper[5116]: I1208 18:00:04.110583 5116 ???:1] "http: TLS handshake error from 192.168.126.11:47522: no serving certificate available for the kubelet" Dec 08 18:00:12 crc kubenswrapper[5116]: E1208 18:00:12.681122 5116 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-fxj49" podUID="bb524cfa-b4aa-49e1-bd03-83dd9676a58c" Dec 08 18:00:26 crc kubenswrapper[5116]: E1208 18:00:26.690204 5116 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-fxj49" podUID="bb524cfa-b4aa-49e1-bd03-83dd9676a58c" Dec 08 18:00:38 crc kubenswrapper[5116]: E1208 18:00:38.681492 5116 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-fxj49" podUID="bb524cfa-b4aa-49e1-bd03-83dd9676a58c" Dec 08 18:00:49 crc kubenswrapper[5116]: E1208 18:00:49.740871 5116 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = unable to pull image or 
OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown" image="image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest" Dec 08 18:00:49 crc kubenswrapper[5116]: E1208 18:00:49.743872 5116 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:registry-server,Image:image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:grpc,HostPort:0,ContainerPort:50051,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-jws5h,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[grpc_health_probe -addr=:50051],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[grpc_health_probe 
-addr=:50051],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:*false,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[grpc_health_probe -addr=:50051],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:10,TerminationGracePeriodSeconds:nil,},ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod infrawatch-operators-fxj49_service-telemetry(bb524cfa-b4aa-49e1-bd03-83dd9676a58c): ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown" logger="UnhandledError" Dec 08 18:00:49 crc kubenswrapper[5116]: E1208 18:00:49.745213 5116 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ErrImagePull: \"unable to pull image or OCI artifact: pull image err: initializing source 
docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-fxj49" podUID="bb524cfa-b4aa-49e1-bd03-83dd9676a58c" Dec 08 18:01:00 crc kubenswrapper[5116]: E1208 18:01:00.680626 5116 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-fxj49" podUID="bb524cfa-b4aa-49e1-bd03-83dd9676a58c" Dec 08 18:01:14 crc kubenswrapper[5116]: E1208 18:01:14.681405 5116 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source 
docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-fxj49" podUID="bb524cfa-b4aa-49e1-bd03-83dd9676a58c" Dec 08 18:01:26 crc kubenswrapper[5116]: I1208 18:01:26.061822 5116 ???:1] "http: TLS handshake error from 192.168.126.11:34322: no serving certificate available for the kubelet" Dec 08 18:01:27 crc kubenswrapper[5116]: E1208 18:01:27.681434 5116 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-fxj49" podUID="bb524cfa-b4aa-49e1-bd03-83dd9676a58c" Dec 08 18:01:33 crc kubenswrapper[5116]: I1208 18:01:33.335209 5116 patch_prober.go:28] interesting pod/machine-config-daemon-frh5r container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get 
\"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 08 18:01:33 crc kubenswrapper[5116]: I1208 18:01:33.335834 5116 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-frh5r" podUID="f2e88345-fa91-4bb3-bd9d-a89a8293bffe" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 08 18:01:40 crc kubenswrapper[5116]: E1208 18:01:40.689703 5116 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-fxj49" podUID="bb524cfa-b4aa-49e1-bd03-83dd9676a58c" Dec 08 18:01:52 crc kubenswrapper[5116]: E1208 18:01:52.680766 5116 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source 
docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-fxj49" podUID="bb524cfa-b4aa-49e1-bd03-83dd9676a58c" Dec 08 18:02:03 crc kubenswrapper[5116]: I1208 18:02:03.334780 5116 patch_prober.go:28] interesting pod/machine-config-daemon-frh5r container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 08 18:02:03 crc kubenswrapper[5116]: I1208 18:02:03.335604 5116 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-frh5r" podUID="f2e88345-fa91-4bb3-bd9d-a89a8293bffe" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 08 18:02:07 crc kubenswrapper[5116]: E1208 18:02:07.680800 5116 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest 
unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-fxj49" podUID="bb524cfa-b4aa-49e1-bd03-83dd9676a58c" Dec 08 18:02:11 crc kubenswrapper[5116]: I1208 18:02:11.115228 5116 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-controller-manager_controller-manager-766495d899-4wfjn_1e2dc1a4-c295-4f2b-b167-fdb40f1e6b0d/controller-manager/1.log" Dec 08 18:02:11 crc kubenswrapper[5116]: I1208 18:02:11.115909 5116 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-controller-manager_controller-manager-766495d899-4wfjn_1e2dc1a4-c295-4f2b-b167-fdb40f1e6b0d/controller-manager/1.log" Dec 08 18:02:11 crc kubenswrapper[5116]: I1208 18:02:11.142439 5116 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-8wqqf_84b46b92-c78c-44c8-a27b-4a20c47acd75/kube-multus/0.log" Dec 08 18:02:11 crc kubenswrapper[5116]: I1208 18:02:11.142532 5116 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-8wqqf_84b46b92-c78c-44c8-a27b-4a20c47acd75/kube-multus/0.log" Dec 08 18:02:11 crc kubenswrapper[5116]: I1208 18:02:11.153518 5116 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_9f0bc7fcb0822a2c13eb2d22cd8c0641/kube-controller-manager/0.log" Dec 08 18:02:11 crc kubenswrapper[5116]: I1208 18:02:11.153628 5116 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_9f0bc7fcb0822a2c13eb2d22cd8c0641/kube-controller-manager/0.log" Dec 08 18:02:18 crc kubenswrapper[5116]: E1208 18:02:18.680418 5116 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image 
\\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-fxj49" podUID="bb524cfa-b4aa-49e1-bd03-83dd9676a58c" Dec 08 18:02:29 crc kubenswrapper[5116]: E1208 18:02:29.682212 5116 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-fxj49" podUID="bb524cfa-b4aa-49e1-bd03-83dd9676a58c" Dec 08 18:02:33 crc kubenswrapper[5116]: I1208 18:02:33.335144 5116 patch_prober.go:28] interesting pod/machine-config-daemon-frh5r container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure 
output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 08 18:02:33 crc kubenswrapper[5116]: I1208 18:02:33.336020 5116 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-frh5r" podUID="f2e88345-fa91-4bb3-bd9d-a89a8293bffe" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 08 18:02:33 crc kubenswrapper[5116]: I1208 18:02:33.336155 5116 kubelet.go:2658] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-frh5r" Dec 08 18:02:33 crc kubenswrapper[5116]: I1208 18:02:33.336937 5116 kuberuntime_manager.go:1107] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"b453ed10c65aa7cc1240df68270146d64e9a2d735135be338c42a97ae15145ba"} pod="openshift-machine-config-operator/machine-config-daemon-frh5r" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 08 18:02:33 crc kubenswrapper[5116]: I1208 18:02:33.337026 5116 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-frh5r" podUID="f2e88345-fa91-4bb3-bd9d-a89a8293bffe" containerName="machine-config-daemon" containerID="cri-o://b453ed10c65aa7cc1240df68270146d64e9a2d735135be338c42a97ae15145ba" gracePeriod=600 Dec 08 18:02:33 crc kubenswrapper[5116]: I1208 18:02:33.752436 5116 generic.go:358] "Generic (PLEG): container finished" podID="f2e88345-fa91-4bb3-bd9d-a89a8293bffe" containerID="b453ed10c65aa7cc1240df68270146d64e9a2d735135be338c42a97ae15145ba" exitCode=0 Dec 08 18:02:33 crc kubenswrapper[5116]: I1208 18:02:33.752600 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-frh5r" 
event={"ID":"f2e88345-fa91-4bb3-bd9d-a89a8293bffe","Type":"ContainerDied","Data":"b453ed10c65aa7cc1240df68270146d64e9a2d735135be338c42a97ae15145ba"} Dec 08 18:02:33 crc kubenswrapper[5116]: I1208 18:02:33.753158 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-frh5r" event={"ID":"f2e88345-fa91-4bb3-bd9d-a89a8293bffe","Type":"ContainerStarted","Data":"340d701cbfea2d8290edea08fe017592d17d2b3a6693505e44e77c36c8bb02a1"} Dec 08 18:02:33 crc kubenswrapper[5116]: I1208 18:02:33.753180 5116 scope.go:117] "RemoveContainer" containerID="268f636f479af3afc0e2d68841471d42bc01301ec891f325abd7818c7af95e59" Dec 08 18:02:44 crc kubenswrapper[5116]: E1208 18:02:44.681218 5116 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-fxj49" podUID="bb524cfa-b4aa-49e1-bd03-83dd9676a58c" Dec 08 18:02:59 crc kubenswrapper[5116]: E1208 18:02:59.681735 5116 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\": 
ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-fxj49" podUID="bb524cfa-b4aa-49e1-bd03-83dd9676a58c" Dec 08 18:03:09 crc kubenswrapper[5116]: I1208 18:03:09.251154 5116 ???:1] "http: TLS handshake error from 192.168.126.11:56602: no serving certificate available for the kubelet" Dec 08 18:03:13 crc kubenswrapper[5116]: E1208 18:03:13.680945 5116 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-fxj49" podUID="bb524cfa-b4aa-49e1-bd03-83dd9676a58c" Dec 08 18:03:25 crc kubenswrapper[5116]: E1208 18:03:25.681875 5116 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with 
ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-fxj49" podUID="bb524cfa-b4aa-49e1-bd03-83dd9676a58c" Dec 08 18:03:37 crc kubenswrapper[5116]: E1208 18:03:37.748293 5116 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown" image="image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest" Dec 08 18:03:37 crc kubenswrapper[5116]: E1208 18:03:37.749147 5116 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:registry-server,Image:image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:grpc,HostPort:0,ContainerPort:50051,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-jws5h,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[grpc_health_probe -addr=:50051],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[grpc_health_probe -addr=:50051],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:*false,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[grpc_health_probe 
-addr=:50051],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:10,TerminationGracePeriodSeconds:nil,},ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod infrawatch-operators-fxj49_service-telemetry(bb524cfa-b4aa-49e1-bd03-83dd9676a58c): ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown" logger="UnhandledError" Dec 08 18:03:37 crc kubenswrapper[5116]: E1208 18:03:37.750761 5116 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ErrImagePull: \"unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-fxj49" podUID="bb524cfa-b4aa-49e1-bd03-83dd9676a58c" Dec 08 18:03:50 crc kubenswrapper[5116]: E1208 18:03:50.680547 5116 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image 
\\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-fxj49" podUID="bb524cfa-b4aa-49e1-bd03-83dd9676a58c" Dec 08 18:03:50 crc kubenswrapper[5116]: I1208 18:03:50.747598 5116 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/infrawatch-operators-vjj7h"] Dec 08 18:03:50 crc kubenswrapper[5116]: I1208 18:03:50.748332 5116 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="27a52d09-4375-4801-961a-ddc050b80786" containerName="collect-profiles" Dec 08 18:03:50 crc kubenswrapper[5116]: I1208 18:03:50.748357 5116 state_mem.go:107] "Deleted CPUSet assignment" podUID="27a52d09-4375-4801-961a-ddc050b80786" containerName="collect-profiles" Dec 08 18:03:50 crc kubenswrapper[5116]: I1208 18:03:50.748527 5116 memory_manager.go:356] "RemoveStaleState removing state" podUID="27a52d09-4375-4801-961a-ddc050b80786" containerName="collect-profiles" Dec 08 18:03:50 crc kubenswrapper[5116]: I1208 18:03:50.758830 5116 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/infrawatch-operators-vjj7h" Dec 08 18:03:50 crc kubenswrapper[5116]: I1208 18:03:50.760787 5116 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/infrawatch-operators-vjj7h"] Dec 08 18:03:50 crc kubenswrapper[5116]: I1208 18:03:50.808716 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lvt55\" (UniqueName: \"kubernetes.io/projected/496bc08d-961a-4732-b289-095c721f23ca-kube-api-access-lvt55\") pod \"infrawatch-operators-vjj7h\" (UID: \"496bc08d-961a-4732-b289-095c721f23ca\") " pod="service-telemetry/infrawatch-operators-vjj7h" Dec 08 18:03:50 crc kubenswrapper[5116]: I1208 18:03:50.909906 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-lvt55\" (UniqueName: \"kubernetes.io/projected/496bc08d-961a-4732-b289-095c721f23ca-kube-api-access-lvt55\") pod \"infrawatch-operators-vjj7h\" (UID: \"496bc08d-961a-4732-b289-095c721f23ca\") " pod="service-telemetry/infrawatch-operators-vjj7h" Dec 08 18:03:50 crc kubenswrapper[5116]: I1208 18:03:50.934832 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-lvt55\" (UniqueName: \"kubernetes.io/projected/496bc08d-961a-4732-b289-095c721f23ca-kube-api-access-lvt55\") pod \"infrawatch-operators-vjj7h\" (UID: \"496bc08d-961a-4732-b289-095c721f23ca\") " pod="service-telemetry/infrawatch-operators-vjj7h" Dec 08 18:03:51 crc kubenswrapper[5116]: I1208 18:03:51.080070 5116 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/infrawatch-operators-vjj7h" Dec 08 18:03:51 crc kubenswrapper[5116]: I1208 18:03:51.324923 5116 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/infrawatch-operators-vjj7h"] Dec 08 18:03:51 crc kubenswrapper[5116]: I1208 18:03:51.388158 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/infrawatch-operators-vjj7h" event={"ID":"496bc08d-961a-4732-b289-095c721f23ca","Type":"ContainerStarted","Data":"ffbf14abddb3dfb33bca85425edbb6f745f020060665d90d9b6706d53ae78bc5"} Dec 08 18:03:51 crc kubenswrapper[5116]: E1208 18:03:51.397059 5116 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown" image="image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest" Dec 08 18:03:51 crc kubenswrapper[5116]: E1208 18:03:51.397372 5116 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:registry-server,Image:image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:grpc,HostPort:0,ContainerPort:50051,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-lvt55,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[grpc_health_probe -addr=:50051],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[grpc_health_probe -addr=:50051],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:*false,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[grpc_health_probe -addr=:50051],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:10,TerminationGracePeriodSeconds:nil,},ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod infrawatch-operators-vjj7h_service-telemetry(496bc08d-961a-4732-b289-095c721f23ca): ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in 
image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown" logger="UnhandledError" Dec 08 18:03:51 crc kubenswrapper[5116]: E1208 18:03:51.398614 5116 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ErrImagePull: \"unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-vjj7h" podUID="496bc08d-961a-4732-b289-095c721f23ca" Dec 08 18:03:52 crc kubenswrapper[5116]: E1208 18:03:52.396594 5116 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in 
image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-vjj7h" podUID="496bc08d-961a-4732-b289-095c721f23ca" Dec 08 18:04:01 crc kubenswrapper[5116]: E1208 18:04:01.680696 5116 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-fxj49" podUID="bb524cfa-b4aa-49e1-bd03-83dd9676a58c" Dec 08 18:04:04 crc kubenswrapper[5116]: I1208 18:04:04.681122 5116 provider.go:93] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Dec 08 18:04:04 crc kubenswrapper[5116]: E1208 18:04:04.766039 5116 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in 
image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown" image="image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest" Dec 08 18:04:04 crc kubenswrapper[5116]: E1208 18:04:04.766300 5116 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:registry-server,Image:image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:grpc,HostPort:0,ContainerPort:50051,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-lvt55,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[grpc_health_probe -addr=:50051],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[grpc_health_probe 
-addr=:50051],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:*false,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[grpc_health_probe -addr=:50051],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:10,TerminationGracePeriodSeconds:nil,},ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod infrawatch-operators-vjj7h_service-telemetry(496bc08d-961a-4732-b289-095c721f23ca): ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown" logger="UnhandledError" Dec 08 18:04:04 crc kubenswrapper[5116]: E1208 18:04:04.767557 5116 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ErrImagePull: \"unable to pull image or OCI artifact: pull image err: initializing source 
docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-vjj7h" podUID="496bc08d-961a-4732-b289-095c721f23ca" Dec 08 18:04:09 crc kubenswrapper[5116]: I1208 18:04:09.927952 5116 ???:1] "http: TLS handshake error from 192.168.126.11:58626: no serving certificate available for the kubelet" Dec 08 18:04:13 crc kubenswrapper[5116]: E1208 18:04:13.680124 5116 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-fxj49" podUID="bb524cfa-b4aa-49e1-bd03-83dd9676a58c" Dec 08 18:04:15 crc kubenswrapper[5116]: E1208 18:04:15.680813 5116 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image 
\\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-vjj7h" podUID="496bc08d-961a-4732-b289-095c721f23ca" Dec 08 18:04:27 crc kubenswrapper[5116]: E1208 18:04:27.680908 5116 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-fxj49" podUID="bb524cfa-b4aa-49e1-bd03-83dd9676a58c" Dec 08 18:04:30 crc kubenswrapper[5116]: E1208 18:04:30.751794 5116 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = unable to pull image or OCI artifact: pull image err: initializing source 
docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown" image="image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest" Dec 08 18:04:30 crc kubenswrapper[5116]: E1208 18:04:30.752332 5116 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:registry-server,Image:image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:grpc,HostPort:0,ContainerPort:50051,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-lvt55,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[grpc_health_probe -addr=:50051],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[grpc_health_probe 
-addr=:50051],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:*false,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[grpc_health_probe -addr=:50051],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:10,TerminationGracePeriodSeconds:nil,},ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod infrawatch-operators-vjj7h_service-telemetry(496bc08d-961a-4732-b289-095c721f23ca): ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown" logger="UnhandledError" Dec 08 18:04:30 crc kubenswrapper[5116]: E1208 18:04:30.753562 5116 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ErrImagePull: \"unable to pull image or OCI artifact: pull image err: initializing source 
docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-vjj7h" podUID="496bc08d-961a-4732-b289-095c721f23ca" Dec 08 18:04:33 crc kubenswrapper[5116]: I1208 18:04:33.335924 5116 patch_prober.go:28] interesting pod/machine-config-daemon-frh5r container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 08 18:04:33 crc kubenswrapper[5116]: I1208 18:04:33.336355 5116 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-frh5r" podUID="f2e88345-fa91-4bb3-bd9d-a89a8293bffe" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 08 18:04:36 crc kubenswrapper[5116]: I1208 18:04:36.590983 5116 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-nrp2j"] Dec 08 18:04:36 crc kubenswrapper[5116]: I1208 18:04:36.605607 5116 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-nrp2j"] Dec 08 18:04:36 crc kubenswrapper[5116]: I1208 18:04:36.605803 5116 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-nrp2j" Dec 08 18:04:36 crc kubenswrapper[5116]: I1208 18:04:36.701111 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b3a4bf2d-e0a5-4cc2-a903-04d6f94cb653-utilities\") pod \"redhat-operators-nrp2j\" (UID: \"b3a4bf2d-e0a5-4cc2-a903-04d6f94cb653\") " pod="openshift-marketplace/redhat-operators-nrp2j" Dec 08 18:04:36 crc kubenswrapper[5116]: I1208 18:04:36.701278 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2rxlt\" (UniqueName: \"kubernetes.io/projected/b3a4bf2d-e0a5-4cc2-a903-04d6f94cb653-kube-api-access-2rxlt\") pod \"redhat-operators-nrp2j\" (UID: \"b3a4bf2d-e0a5-4cc2-a903-04d6f94cb653\") " pod="openshift-marketplace/redhat-operators-nrp2j" Dec 08 18:04:36 crc kubenswrapper[5116]: I1208 18:04:36.701322 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b3a4bf2d-e0a5-4cc2-a903-04d6f94cb653-catalog-content\") pod \"redhat-operators-nrp2j\" (UID: \"b3a4bf2d-e0a5-4cc2-a903-04d6f94cb653\") " pod="openshift-marketplace/redhat-operators-nrp2j" Dec 08 18:04:36 crc kubenswrapper[5116]: I1208 18:04:36.814907 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b3a4bf2d-e0a5-4cc2-a903-04d6f94cb653-utilities\") pod \"redhat-operators-nrp2j\" (UID: \"b3a4bf2d-e0a5-4cc2-a903-04d6f94cb653\") " pod="openshift-marketplace/redhat-operators-nrp2j" Dec 08 18:04:36 crc kubenswrapper[5116]: I1208 18:04:36.815002 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-2rxlt\" (UniqueName: \"kubernetes.io/projected/b3a4bf2d-e0a5-4cc2-a903-04d6f94cb653-kube-api-access-2rxlt\") pod \"redhat-operators-nrp2j\" (UID: 
\"b3a4bf2d-e0a5-4cc2-a903-04d6f94cb653\") " pod="openshift-marketplace/redhat-operators-nrp2j" Dec 08 18:04:36 crc kubenswrapper[5116]: I1208 18:04:36.815030 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b3a4bf2d-e0a5-4cc2-a903-04d6f94cb653-catalog-content\") pod \"redhat-operators-nrp2j\" (UID: \"b3a4bf2d-e0a5-4cc2-a903-04d6f94cb653\") " pod="openshift-marketplace/redhat-operators-nrp2j" Dec 08 18:04:36 crc kubenswrapper[5116]: I1208 18:04:36.815551 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b3a4bf2d-e0a5-4cc2-a903-04d6f94cb653-utilities\") pod \"redhat-operators-nrp2j\" (UID: \"b3a4bf2d-e0a5-4cc2-a903-04d6f94cb653\") " pod="openshift-marketplace/redhat-operators-nrp2j" Dec 08 18:04:36 crc kubenswrapper[5116]: I1208 18:04:36.815727 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b3a4bf2d-e0a5-4cc2-a903-04d6f94cb653-catalog-content\") pod \"redhat-operators-nrp2j\" (UID: \"b3a4bf2d-e0a5-4cc2-a903-04d6f94cb653\") " pod="openshift-marketplace/redhat-operators-nrp2j" Dec 08 18:04:36 crc kubenswrapper[5116]: I1208 18:04:36.841101 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-2rxlt\" (UniqueName: \"kubernetes.io/projected/b3a4bf2d-e0a5-4cc2-a903-04d6f94cb653-kube-api-access-2rxlt\") pod \"redhat-operators-nrp2j\" (UID: \"b3a4bf2d-e0a5-4cc2-a903-04d6f94cb653\") " pod="openshift-marketplace/redhat-operators-nrp2j" Dec 08 18:04:36 crc kubenswrapper[5116]: I1208 18:04:36.933129 5116 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-nrp2j" Dec 08 18:04:37 crc kubenswrapper[5116]: I1208 18:04:37.247158 5116 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-nrp2j"] Dec 08 18:04:37 crc kubenswrapper[5116]: W1208 18:04:37.258474 5116 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb3a4bf2d_e0a5_4cc2_a903_04d6f94cb653.slice/crio-6f941d86bb88e35915a5b8502cb3619d75e97e6ecc7276116f791e78f9668e55 WatchSource:0}: Error finding container 6f941d86bb88e35915a5b8502cb3619d75e97e6ecc7276116f791e78f9668e55: Status 404 returned error can't find the container with id 6f941d86bb88e35915a5b8502cb3619d75e97e6ecc7276116f791e78f9668e55 Dec 08 18:04:37 crc kubenswrapper[5116]: I1208 18:04:37.693105 5116 generic.go:358] "Generic (PLEG): container finished" podID="b3a4bf2d-e0a5-4cc2-a903-04d6f94cb653" containerID="58c008f02cf203690a25d7e0a4975504edf46cb782946c351d6de14f74f4f47d" exitCode=0 Dec 08 18:04:37 crc kubenswrapper[5116]: I1208 18:04:37.693286 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-nrp2j" event={"ID":"b3a4bf2d-e0a5-4cc2-a903-04d6f94cb653","Type":"ContainerDied","Data":"58c008f02cf203690a25d7e0a4975504edf46cb782946c351d6de14f74f4f47d"} Dec 08 18:04:37 crc kubenswrapper[5116]: I1208 18:04:37.693371 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-nrp2j" event={"ID":"b3a4bf2d-e0a5-4cc2-a903-04d6f94cb653","Type":"ContainerStarted","Data":"6f941d86bb88e35915a5b8502cb3619d75e97e6ecc7276116f791e78f9668e55"} Dec 08 18:04:39 crc kubenswrapper[5116]: E1208 18:04:39.682573 5116 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image 
\\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-fxj49" podUID="bb524cfa-b4aa-49e1-bd03-83dd9676a58c" Dec 08 18:04:45 crc kubenswrapper[5116]: E1208 18:04:45.680435 5116 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-vjj7h" podUID="496bc08d-961a-4732-b289-095c721f23ca" Dec 08 18:04:49 crc kubenswrapper[5116]: I1208 18:04:49.987302 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-nrp2j" 
event={"ID":"b3a4bf2d-e0a5-4cc2-a903-04d6f94cb653","Type":"ContainerStarted","Data":"e7a495bf1b9571f015e5d29a6ee028916d9b2aba2a3fae29a6e42825618f915b"} Dec 08 18:04:52 crc kubenswrapper[5116]: E1208 18:04:52.681398 5116 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-fxj49" podUID="bb524cfa-b4aa-49e1-bd03-83dd9676a58c" Dec 08 18:04:54 crc kubenswrapper[5116]: I1208 18:04:54.176862 5116 generic.go:358] "Generic (PLEG): container finished" podID="b3a4bf2d-e0a5-4cc2-a903-04d6f94cb653" containerID="e7a495bf1b9571f015e5d29a6ee028916d9b2aba2a3fae29a6e42825618f915b" exitCode=0 Dec 08 18:04:54 crc kubenswrapper[5116]: I1208 18:04:54.176954 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-nrp2j" event={"ID":"b3a4bf2d-e0a5-4cc2-a903-04d6f94cb653","Type":"ContainerDied","Data":"e7a495bf1b9571f015e5d29a6ee028916d9b2aba2a3fae29a6e42825618f915b"} Dec 08 18:04:55 crc kubenswrapper[5116]: I1208 18:04:55.185918 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-nrp2j" 
event={"ID":"b3a4bf2d-e0a5-4cc2-a903-04d6f94cb653","Type":"ContainerStarted","Data":"669c9dd6fd27586fa19677ed6e8ed9f2466cd07fa3059e1bcc952424fca4def9"} Dec 08 18:04:55 crc kubenswrapper[5116]: I1208 18:04:55.204995 5116 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-nrp2j" podStartSLOduration=7.112422915 podStartE2EDuration="19.204974441s" podCreationTimestamp="2025-12-08 18:04:36 +0000 UTC" firstStartedPulling="2025-12-08 18:04:37.694526551 +0000 UTC m=+1347.491649785" lastFinishedPulling="2025-12-08 18:04:49.787078077 +0000 UTC m=+1359.584201311" observedRunningTime="2025-12-08 18:04:55.203440172 +0000 UTC m=+1365.000563406" watchObservedRunningTime="2025-12-08 18:04:55.204974441 +0000 UTC m=+1365.002097675" Dec 08 18:04:56 crc kubenswrapper[5116]: I1208 18:04:56.933375 5116 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/redhat-operators-nrp2j" Dec 08 18:04:56 crc kubenswrapper[5116]: I1208 18:04:56.933418 5116 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-nrp2j" Dec 08 18:04:57 crc kubenswrapper[5116]: I1208 18:04:57.980583 5116 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-nrp2j" podUID="b3a4bf2d-e0a5-4cc2-a903-04d6f94cb653" containerName="registry-server" probeResult="failure" output=< Dec 08 18:04:57 crc kubenswrapper[5116]: timeout: failed to connect service ":50051" within 1s Dec 08 18:04:57 crc kubenswrapper[5116]: > Dec 08 18:04:58 crc kubenswrapper[5116]: E1208 18:04:58.681144 5116 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: 
initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-vjj7h" podUID="496bc08d-961a-4732-b289-095c721f23ca" Dec 08 18:05:03 crc kubenswrapper[5116]: I1208 18:05:03.335908 5116 patch_prober.go:28] interesting pod/machine-config-daemon-frh5r container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 08 18:05:03 crc kubenswrapper[5116]: I1208 18:05:03.336228 5116 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-frh5r" podUID="f2e88345-fa91-4bb3-bd9d-a89a8293bffe" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 08 18:05:05 crc kubenswrapper[5116]: E1208 18:05:05.679978 5116 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in 
image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-fxj49" podUID="bb524cfa-b4aa-49e1-bd03-83dd9676a58c" Dec 08 18:05:06 crc kubenswrapper[5116]: I1208 18:05:06.974380 5116 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-nrp2j" Dec 08 18:05:07 crc kubenswrapper[5116]: I1208 18:05:07.018368 5116 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-nrp2j" Dec 08 18:05:07 crc kubenswrapper[5116]: I1208 18:05:07.629201 5116 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-nrp2j"] Dec 08 18:05:07 crc kubenswrapper[5116]: I1208 18:05:07.794435 5116 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-m5vg7"] Dec 08 18:05:07 crc kubenswrapper[5116]: I1208 18:05:07.795729 5116 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-m5vg7" podUID="4c36a4dd-ab49-4395-a54d-452e884cbb78" containerName="registry-server" containerID="cri-o://a52aec32a446d70666ef3b12eb9c92753f89ddff8f8f0c154511c79abffc1ac4" gracePeriod=2 Dec 08 18:05:09 crc kubenswrapper[5116]: I1208 18:05:09.283809 5116 generic.go:358] "Generic (PLEG): container finished" podID="4c36a4dd-ab49-4395-a54d-452e884cbb78" containerID="a52aec32a446d70666ef3b12eb9c92753f89ddff8f8f0c154511c79abffc1ac4" exitCode=0 Dec 08 18:05:09 crc kubenswrapper[5116]: I1208 18:05:09.283880 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-m5vg7" 
event={"ID":"4c36a4dd-ab49-4395-a54d-452e884cbb78","Type":"ContainerDied","Data":"a52aec32a446d70666ef3b12eb9c92753f89ddff8f8f0c154511c79abffc1ac4"} Dec 08 18:05:09 crc kubenswrapper[5116]: I1208 18:05:09.412495 5116 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-m5vg7" Dec 08 18:05:09 crc kubenswrapper[5116]: I1208 18:05:09.491346 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hbqs8\" (UniqueName: \"kubernetes.io/projected/4c36a4dd-ab49-4395-a54d-452e884cbb78-kube-api-access-hbqs8\") pod \"4c36a4dd-ab49-4395-a54d-452e884cbb78\" (UID: \"4c36a4dd-ab49-4395-a54d-452e884cbb78\") " Dec 08 18:05:09 crc kubenswrapper[5116]: I1208 18:05:09.491479 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4c36a4dd-ab49-4395-a54d-452e884cbb78-utilities\") pod \"4c36a4dd-ab49-4395-a54d-452e884cbb78\" (UID: \"4c36a4dd-ab49-4395-a54d-452e884cbb78\") " Dec 08 18:05:09 crc kubenswrapper[5116]: I1208 18:05:09.491602 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4c36a4dd-ab49-4395-a54d-452e884cbb78-catalog-content\") pod \"4c36a4dd-ab49-4395-a54d-452e884cbb78\" (UID: \"4c36a4dd-ab49-4395-a54d-452e884cbb78\") " Dec 08 18:05:09 crc kubenswrapper[5116]: I1208 18:05:09.493318 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4c36a4dd-ab49-4395-a54d-452e884cbb78-utilities" (OuterVolumeSpecName: "utilities") pod "4c36a4dd-ab49-4395-a54d-452e884cbb78" (UID: "4c36a4dd-ab49-4395-a54d-452e884cbb78"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 08 18:05:09 crc kubenswrapper[5116]: I1208 18:05:09.497116 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4c36a4dd-ab49-4395-a54d-452e884cbb78-kube-api-access-hbqs8" (OuterVolumeSpecName: "kube-api-access-hbqs8") pod "4c36a4dd-ab49-4395-a54d-452e884cbb78" (UID: "4c36a4dd-ab49-4395-a54d-452e884cbb78"). InnerVolumeSpecName "kube-api-access-hbqs8". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 08 18:05:09 crc kubenswrapper[5116]: I1208 18:05:09.594006 5116 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4c36a4dd-ab49-4395-a54d-452e884cbb78-utilities\") on node \"crc\" DevicePath \"\"" Dec 08 18:05:09 crc kubenswrapper[5116]: I1208 18:05:09.594040 5116 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-hbqs8\" (UniqueName: \"kubernetes.io/projected/4c36a4dd-ab49-4395-a54d-452e884cbb78-kube-api-access-hbqs8\") on node \"crc\" DevicePath \"\"" Dec 08 18:05:09 crc kubenswrapper[5116]: I1208 18:05:09.599105 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4c36a4dd-ab49-4395-a54d-452e884cbb78-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "4c36a4dd-ab49-4395-a54d-452e884cbb78" (UID: "4c36a4dd-ab49-4395-a54d-452e884cbb78"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 08 18:05:09 crc kubenswrapper[5116]: E1208 18:05:09.680912 5116 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-vjj7h" podUID="496bc08d-961a-4732-b289-095c721f23ca" Dec 08 18:05:09 crc kubenswrapper[5116]: I1208 18:05:09.695353 5116 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4c36a4dd-ab49-4395-a54d-452e884cbb78-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 08 18:05:10 crc kubenswrapper[5116]: I1208 18:05:10.293067 5116 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-m5vg7" Dec 08 18:05:10 crc kubenswrapper[5116]: I1208 18:05:10.293071 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-m5vg7" event={"ID":"4c36a4dd-ab49-4395-a54d-452e884cbb78","Type":"ContainerDied","Data":"7336da07d5e9d75f2199d5bb584dbb1de20cf72579b8d8a93796ca3ae65ddb9c"} Dec 08 18:05:10 crc kubenswrapper[5116]: I1208 18:05:10.293204 5116 scope.go:117] "RemoveContainer" containerID="a52aec32a446d70666ef3b12eb9c92753f89ddff8f8f0c154511c79abffc1ac4" Dec 08 18:05:10 crc kubenswrapper[5116]: I1208 18:05:10.312636 5116 scope.go:117] "RemoveContainer" containerID="f0e1bd00ed59310424b92be1d7efc40b464b4f405dc5efd847a7fcda96da605e" Dec 08 18:05:10 crc kubenswrapper[5116]: I1208 18:05:10.331947 5116 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-m5vg7"] Dec 08 18:05:10 crc kubenswrapper[5116]: I1208 18:05:10.335263 5116 scope.go:117] "RemoveContainer" containerID="729a65918e5042b7147b9d74c69556437c8b0eef64a26b2013ed2eb9ca3315f2" Dec 08 18:05:10 crc kubenswrapper[5116]: I1208 18:05:10.336437 5116 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-m5vg7"] Dec 08 18:05:10 crc kubenswrapper[5116]: I1208 18:05:10.694715 5116 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4c36a4dd-ab49-4395-a54d-452e884cbb78" path="/var/lib/kubelet/pods/4c36a4dd-ab49-4395-a54d-452e884cbb78/volumes" Dec 08 18:05:16 crc kubenswrapper[5116]: E1208 18:05:16.680591 5116 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source 
docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-fxj49" podUID="bb524cfa-b4aa-49e1-bd03-83dd9676a58c" Dec 08 18:05:20 crc kubenswrapper[5116]: E1208 18:05:20.771200 5116 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown" image="image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest" Dec 08 18:05:20 crc kubenswrapper[5116]: E1208 18:05:20.771773 5116 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:registry-server,Image:image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:grpc,HostPort:0,ContainerPort:50051,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-lvt55,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[grpc_health_probe -addr=:50051],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[grpc_health_probe -addr=:50051],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:*false,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[grpc_health_probe -addr=:50051],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:10,TerminationGracePeriodSeconds:nil,},ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod infrawatch-operators-vjj7h_service-telemetry(496bc08d-961a-4732-b289-095c721f23ca): ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in 
image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown" logger="UnhandledError" Dec 08 18:05:20 crc kubenswrapper[5116]: E1208 18:05:20.773190 5116 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ErrImagePull: \"unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-vjj7h" podUID="496bc08d-961a-4732-b289-095c721f23ca" Dec 08 18:05:28 crc kubenswrapper[5116]: E1208 18:05:28.681553 5116 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in 
image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-fxj49" podUID="bb524cfa-b4aa-49e1-bd03-83dd9676a58c"
Dec 08 18:05:30 crc kubenswrapper[5116]: I1208 18:05:30.895307 5116 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-kcgqr"]
Dec 08 18:05:30 crc kubenswrapper[5116]: I1208 18:05:30.896687 5116 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="4c36a4dd-ab49-4395-a54d-452e884cbb78" containerName="extract-content"
Dec 08 18:05:30 crc kubenswrapper[5116]: I1208 18:05:30.896725 5116 state_mem.go:107] "Deleted CPUSet assignment" podUID="4c36a4dd-ab49-4395-a54d-452e884cbb78" containerName="extract-content"
Dec 08 18:05:30 crc kubenswrapper[5116]: I1208 18:05:30.896745 5116 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="4c36a4dd-ab49-4395-a54d-452e884cbb78" containerName="extract-utilities"
Dec 08 18:05:30 crc kubenswrapper[5116]: I1208 18:05:30.896754 5116 state_mem.go:107] "Deleted CPUSet assignment" podUID="4c36a4dd-ab49-4395-a54d-452e884cbb78" containerName="extract-utilities"
Dec 08 18:05:30 crc kubenswrapper[5116]: I1208 18:05:30.896805 5116 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="4c36a4dd-ab49-4395-a54d-452e884cbb78" containerName="registry-server"
Dec 08 18:05:30 crc kubenswrapper[5116]: I1208 18:05:30.896814 5116 state_mem.go:107] "Deleted CPUSet assignment" podUID="4c36a4dd-ab49-4395-a54d-452e884cbb78" containerName="registry-server"
Dec 08 18:05:30 crc kubenswrapper[5116]: I1208 18:05:30.896953 5116 memory_manager.go:356] "RemoveStaleState removing state" podUID="4c36a4dd-ab49-4395-a54d-452e884cbb78" containerName="registry-server"
Dec 08 18:05:30 crc kubenswrapper[5116]: I1208 18:05:30.915474 5116 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-kcgqr"]
Dec 08 18:05:30 crc kubenswrapper[5116]: I1208 18:05:30.915679 5116 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-kcgqr"
Dec 08 18:05:31 crc kubenswrapper[5116]: I1208 18:05:31.003439 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e4121a2a-1675-4cbe-8d97-9f58e3124357-catalog-content\") pod \"community-operators-kcgqr\" (UID: \"e4121a2a-1675-4cbe-8d97-9f58e3124357\") " pod="openshift-marketplace/community-operators-kcgqr"
Dec 08 18:05:31 crc kubenswrapper[5116]: I1208 18:05:31.003517 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jfrnd\" (UniqueName: \"kubernetes.io/projected/e4121a2a-1675-4cbe-8d97-9f58e3124357-kube-api-access-jfrnd\") pod \"community-operators-kcgqr\" (UID: \"e4121a2a-1675-4cbe-8d97-9f58e3124357\") " pod="openshift-marketplace/community-operators-kcgqr"
Dec 08 18:05:31 crc kubenswrapper[5116]: I1208 18:05:31.003687 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e4121a2a-1675-4cbe-8d97-9f58e3124357-utilities\") pod \"community-operators-kcgqr\" (UID: \"e4121a2a-1675-4cbe-8d97-9f58e3124357\") " pod="openshift-marketplace/community-operators-kcgqr"
Dec 08 18:05:31 crc kubenswrapper[5116]: I1208 18:05:31.105316 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e4121a2a-1675-4cbe-8d97-9f58e3124357-catalog-content\") pod \"community-operators-kcgqr\" (UID: \"e4121a2a-1675-4cbe-8d97-9f58e3124357\") " pod="openshift-marketplace/community-operators-kcgqr"
Dec 08 18:05:31 crc kubenswrapper[5116]: I1208 18:05:31.105433 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-jfrnd\" (UniqueName: \"kubernetes.io/projected/e4121a2a-1675-4cbe-8d97-9f58e3124357-kube-api-access-jfrnd\") pod \"community-operators-kcgqr\" (UID: \"e4121a2a-1675-4cbe-8d97-9f58e3124357\") " pod="openshift-marketplace/community-operators-kcgqr"
Dec 08 18:05:31 crc kubenswrapper[5116]: I1208 18:05:31.105475 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e4121a2a-1675-4cbe-8d97-9f58e3124357-utilities\") pod \"community-operators-kcgqr\" (UID: \"e4121a2a-1675-4cbe-8d97-9f58e3124357\") " pod="openshift-marketplace/community-operators-kcgqr"
Dec 08 18:05:31 crc kubenswrapper[5116]: I1208 18:05:31.105800 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e4121a2a-1675-4cbe-8d97-9f58e3124357-catalog-content\") pod \"community-operators-kcgqr\" (UID: \"e4121a2a-1675-4cbe-8d97-9f58e3124357\") " pod="openshift-marketplace/community-operators-kcgqr"
Dec 08 18:05:31 crc kubenswrapper[5116]: I1208 18:05:31.105904 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e4121a2a-1675-4cbe-8d97-9f58e3124357-utilities\") pod \"community-operators-kcgqr\" (UID: \"e4121a2a-1675-4cbe-8d97-9f58e3124357\") " pod="openshift-marketplace/community-operators-kcgqr"
Dec 08 18:05:31 crc kubenswrapper[5116]: I1208 18:05:31.134043 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-jfrnd\" (UniqueName: \"kubernetes.io/projected/e4121a2a-1675-4cbe-8d97-9f58e3124357-kube-api-access-jfrnd\") pod \"community-operators-kcgqr\" (UID: \"e4121a2a-1675-4cbe-8d97-9f58e3124357\") " pod="openshift-marketplace/community-operators-kcgqr"
Dec 08 18:05:31 crc kubenswrapper[5116]: I1208 18:05:31.237409 5116 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-kcgqr"
Dec 08 18:05:31 crc kubenswrapper[5116]: I1208 18:05:31.677967 5116 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-kcgqr"]
Dec 08 18:05:31 crc kubenswrapper[5116]: W1208 18:05:31.689816 5116 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode4121a2a_1675_4cbe_8d97_9f58e3124357.slice/crio-c28579a3f80f9f9dc8190be8aa8a1718a8631ec5fb9a69e69d4f64f4dd7f3768 WatchSource:0}: Error finding container c28579a3f80f9f9dc8190be8aa8a1718a8631ec5fb9a69e69d4f64f4dd7f3768: Status 404 returned error can't find the container with id c28579a3f80f9f9dc8190be8aa8a1718a8631ec5fb9a69e69d4f64f4dd7f3768
Dec 08 18:05:32 crc kubenswrapper[5116]: I1208 18:05:32.466669 5116 generic.go:358] "Generic (PLEG): container finished" podID="e4121a2a-1675-4cbe-8d97-9f58e3124357" containerID="597111e448b3fe201bc672d50a663278cfdae7a7763f1559ad03d637501397ca" exitCode=0
Dec 08 18:05:32 crc kubenswrapper[5116]: I1208 18:05:32.466737 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-kcgqr" event={"ID":"e4121a2a-1675-4cbe-8d97-9f58e3124357","Type":"ContainerDied","Data":"597111e448b3fe201bc672d50a663278cfdae7a7763f1559ad03d637501397ca"}
Dec 08 18:05:32 crc kubenswrapper[5116]: I1208 18:05:32.466791 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-kcgqr" event={"ID":"e4121a2a-1675-4cbe-8d97-9f58e3124357","Type":"ContainerStarted","Data":"c28579a3f80f9f9dc8190be8aa8a1718a8631ec5fb9a69e69d4f64f4dd7f3768"}
Dec 08 18:05:33 crc kubenswrapper[5116]: I1208 18:05:33.336190 5116 patch_prober.go:28] interesting pod/machine-config-daemon-frh5r container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Dec 08 18:05:33 crc kubenswrapper[5116]: I1208 18:05:33.336374 5116 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-frh5r" podUID="f2e88345-fa91-4bb3-bd9d-a89a8293bffe" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Dec 08 18:05:33 crc kubenswrapper[5116]: I1208 18:05:33.336443 5116 kubelet.go:2658] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-frh5r"
Dec 08 18:05:33 crc kubenswrapper[5116]: I1208 18:05:33.337358 5116 kuberuntime_manager.go:1107] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"340d701cbfea2d8290edea08fe017592d17d2b3a6693505e44e77c36c8bb02a1"} pod="openshift-machine-config-operator/machine-config-daemon-frh5r" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Dec 08 18:05:33 crc kubenswrapper[5116]: I1208 18:05:33.337446 5116 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-frh5r" podUID="f2e88345-fa91-4bb3-bd9d-a89a8293bffe" containerName="machine-config-daemon" containerID="cri-o://340d701cbfea2d8290edea08fe017592d17d2b3a6693505e44e77c36c8bb02a1" gracePeriod=600
Dec 08 18:05:33 crc kubenswrapper[5116]: I1208 18:05:33.479564 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-kcgqr" event={"ID":"e4121a2a-1675-4cbe-8d97-9f58e3124357","Type":"ContainerStarted","Data":"c32a5f3b8ef054ac8376477c76bf2f696ff6914efa680822cbc30d6d6a176a55"}
Dec 08 18:05:33 crc kubenswrapper[5116]: E1208 18:05:33.571662 5116 cadvisor_stats_provider.go:525] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode4121a2a_1675_4cbe_8d97_9f58e3124357.slice/crio-c32a5f3b8ef054ac8376477c76bf2f696ff6914efa680822cbc30d6d6a176a55.scope\": RecentStats: unable to find data in memory cache]"
Dec 08 18:05:33 crc kubenswrapper[5116]: E1208 18:05:33.680730 5116 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-vjj7h" podUID="496bc08d-961a-4732-b289-095c721f23ca"
Dec 08 18:05:34 crc kubenswrapper[5116]: I1208 18:05:34.489711 5116 generic.go:358] "Generic (PLEG): container finished" podID="f2e88345-fa91-4bb3-bd9d-a89a8293bffe" containerID="340d701cbfea2d8290edea08fe017592d17d2b3a6693505e44e77c36c8bb02a1" exitCode=0
Dec 08 18:05:34 crc kubenswrapper[5116]: I1208 18:05:34.489792 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-frh5r" event={"ID":"f2e88345-fa91-4bb3-bd9d-a89a8293bffe","Type":"ContainerDied","Data":"340d701cbfea2d8290edea08fe017592d17d2b3a6693505e44e77c36c8bb02a1"}
Dec 08 18:05:34 crc kubenswrapper[5116]: I1208 18:05:34.490400 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-frh5r" event={"ID":"f2e88345-fa91-4bb3-bd9d-a89a8293bffe","Type":"ContainerStarted","Data":"7359167b795a41ab380b781498dc80dbd062d704d3960c14f5b4c6518f6abe53"}
Dec 08 18:05:34 crc kubenswrapper[5116]: I1208 18:05:34.490421 5116 scope.go:117] "RemoveContainer" containerID="b453ed10c65aa7cc1240df68270146d64e9a2d735135be338c42a97ae15145ba"
Dec 08 18:05:34 crc kubenswrapper[5116]: I1208 18:05:34.495618 5116 generic.go:358] "Generic (PLEG): container finished" podID="e4121a2a-1675-4cbe-8d97-9f58e3124357" containerID="c32a5f3b8ef054ac8376477c76bf2f696ff6914efa680822cbc30d6d6a176a55" exitCode=0
Dec 08 18:05:34 crc kubenswrapper[5116]: I1208 18:05:34.495706 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-kcgqr" event={"ID":"e4121a2a-1675-4cbe-8d97-9f58e3124357","Type":"ContainerDied","Data":"c32a5f3b8ef054ac8376477c76bf2f696ff6914efa680822cbc30d6d6a176a55"}
Dec 08 18:05:35 crc kubenswrapper[5116]: I1208 18:05:35.507706 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-kcgqr" event={"ID":"e4121a2a-1675-4cbe-8d97-9f58e3124357","Type":"ContainerStarted","Data":"4ede12fdad20bbb0030046d5587d6efe3a1108e60879a1f7d5c0dd19acb247ea"}
Dec 08 18:05:35 crc kubenswrapper[5116]: I1208 18:05:35.529260 5116 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-kcgqr" podStartSLOduration=4.997309994 podStartE2EDuration="5.529227842s" podCreationTimestamp="2025-12-08 18:05:30 +0000 UTC" firstStartedPulling="2025-12-08 18:05:32.467690846 +0000 UTC m=+1402.264814080" lastFinishedPulling="2025-12-08 18:05:32.999608694 +0000 UTC m=+1402.796731928" observedRunningTime="2025-12-08 18:05:35.525392435 +0000 UTC m=+1405.322515669" watchObservedRunningTime="2025-12-08 18:05:35.529227842 +0000 UTC m=+1405.326351076"
Dec 08 18:05:41 crc kubenswrapper[5116]: I1208 18:05:41.238580 5116 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/community-operators-kcgqr"
Dec 08 18:05:41 crc kubenswrapper[5116]: I1208 18:05:41.239271 5116 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-kcgqr"
Dec 08 18:05:41 crc kubenswrapper[5116]: I1208 18:05:41.281712 5116 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-kcgqr"
Dec 08 18:05:41 crc kubenswrapper[5116]: I1208 18:05:41.604112 5116 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-kcgqr"
Dec 08 18:05:41 crc kubenswrapper[5116]: I1208 18:05:41.662494 5116 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-kcgqr"]
Dec 08 18:05:41 crc kubenswrapper[5116]: E1208 18:05:41.680136 5116 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-fxj49" podUID="bb524cfa-b4aa-49e1-bd03-83dd9676a58c"
Dec 08 18:05:43 crc kubenswrapper[5116]: I1208 18:05:43.564219 5116 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/community-operators-kcgqr" podUID="e4121a2a-1675-4cbe-8d97-9f58e3124357" containerName="registry-server" containerID="cri-o://4ede12fdad20bbb0030046d5587d6efe3a1108e60879a1f7d5c0dd19acb247ea" gracePeriod=2
Dec 08 18:05:43 crc kubenswrapper[5116]: I1208 18:05:43.980171 5116 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-kcgqr"
Dec 08 18:05:44 crc kubenswrapper[5116]: I1208 18:05:44.091890 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e4121a2a-1675-4cbe-8d97-9f58e3124357-catalog-content\") pod \"e4121a2a-1675-4cbe-8d97-9f58e3124357\" (UID: \"e4121a2a-1675-4cbe-8d97-9f58e3124357\") "
Dec 08 18:05:44 crc kubenswrapper[5116]: I1208 18:05:44.091998 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e4121a2a-1675-4cbe-8d97-9f58e3124357-utilities\") pod \"e4121a2a-1675-4cbe-8d97-9f58e3124357\" (UID: \"e4121a2a-1675-4cbe-8d97-9f58e3124357\") "
Dec 08 18:05:44 crc kubenswrapper[5116]: I1208 18:05:44.092458 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jfrnd\" (UniqueName: \"kubernetes.io/projected/e4121a2a-1675-4cbe-8d97-9f58e3124357-kube-api-access-jfrnd\") pod \"e4121a2a-1675-4cbe-8d97-9f58e3124357\" (UID: \"e4121a2a-1675-4cbe-8d97-9f58e3124357\") "
Dec 08 18:05:44 crc kubenswrapper[5116]: I1208 18:05:44.094297 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e4121a2a-1675-4cbe-8d97-9f58e3124357-utilities" (OuterVolumeSpecName: "utilities") pod "e4121a2a-1675-4cbe-8d97-9f58e3124357" (UID: "e4121a2a-1675-4cbe-8d97-9f58e3124357"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Dec 08 18:05:44 crc kubenswrapper[5116]: I1208 18:05:44.099184 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e4121a2a-1675-4cbe-8d97-9f58e3124357-kube-api-access-jfrnd" (OuterVolumeSpecName: "kube-api-access-jfrnd") pod "e4121a2a-1675-4cbe-8d97-9f58e3124357" (UID: "e4121a2a-1675-4cbe-8d97-9f58e3124357"). InnerVolumeSpecName "kube-api-access-jfrnd". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Dec 08 18:05:44 crc kubenswrapper[5116]: I1208 18:05:44.152351 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e4121a2a-1675-4cbe-8d97-9f58e3124357-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "e4121a2a-1675-4cbe-8d97-9f58e3124357" (UID: "e4121a2a-1675-4cbe-8d97-9f58e3124357"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Dec 08 18:05:44 crc kubenswrapper[5116]: I1208 18:05:44.194168 5116 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-jfrnd\" (UniqueName: \"kubernetes.io/projected/e4121a2a-1675-4cbe-8d97-9f58e3124357-kube-api-access-jfrnd\") on node \"crc\" DevicePath \"\""
Dec 08 18:05:44 crc kubenswrapper[5116]: I1208 18:05:44.194213 5116 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e4121a2a-1675-4cbe-8d97-9f58e3124357-catalog-content\") on node \"crc\" DevicePath \"\""
Dec 08 18:05:44 crc kubenswrapper[5116]: I1208 18:05:44.194231 5116 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e4121a2a-1675-4cbe-8d97-9f58e3124357-utilities\") on node \"crc\" DevicePath \"\""
Dec 08 18:05:44 crc kubenswrapper[5116]: I1208 18:05:44.575201 5116 generic.go:358] "Generic (PLEG): container finished" podID="e4121a2a-1675-4cbe-8d97-9f58e3124357" containerID="4ede12fdad20bbb0030046d5587d6efe3a1108e60879a1f7d5c0dd19acb247ea" exitCode=0
Dec 08 18:05:44 crc kubenswrapper[5116]: I1208 18:05:44.575294 5116 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-kcgqr"
Dec 08 18:05:44 crc kubenswrapper[5116]: I1208 18:05:44.575342 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-kcgqr" event={"ID":"e4121a2a-1675-4cbe-8d97-9f58e3124357","Type":"ContainerDied","Data":"4ede12fdad20bbb0030046d5587d6efe3a1108e60879a1f7d5c0dd19acb247ea"}
Dec 08 18:05:44 crc kubenswrapper[5116]: I1208 18:05:44.575711 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-kcgqr" event={"ID":"e4121a2a-1675-4cbe-8d97-9f58e3124357","Type":"ContainerDied","Data":"c28579a3f80f9f9dc8190be8aa8a1718a8631ec5fb9a69e69d4f64f4dd7f3768"}
Dec 08 18:05:44 crc kubenswrapper[5116]: I1208 18:05:44.575736 5116 scope.go:117] "RemoveContainer" containerID="4ede12fdad20bbb0030046d5587d6efe3a1108e60879a1f7d5c0dd19acb247ea"
Dec 08 18:05:44 crc kubenswrapper[5116]: I1208 18:05:44.597043 5116 scope.go:117] "RemoveContainer" containerID="c32a5f3b8ef054ac8376477c76bf2f696ff6914efa680822cbc30d6d6a176a55"
Dec 08 18:05:44 crc kubenswrapper[5116]: I1208 18:05:44.613553 5116 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-kcgqr"]
Dec 08 18:05:44 crc kubenswrapper[5116]: I1208 18:05:44.619541 5116 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-kcgqr"]
Dec 08 18:05:44 crc kubenswrapper[5116]: I1208 18:05:44.630363 5116 scope.go:117] "RemoveContainer" containerID="597111e448b3fe201bc672d50a663278cfdae7a7763f1559ad03d637501397ca"
Dec 08 18:05:44 crc kubenswrapper[5116]: I1208 18:05:44.657834 5116 scope.go:117] "RemoveContainer" containerID="4ede12fdad20bbb0030046d5587d6efe3a1108e60879a1f7d5c0dd19acb247ea"
Dec 08 18:05:44 crc kubenswrapper[5116]: E1208 18:05:44.658605 5116 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4ede12fdad20bbb0030046d5587d6efe3a1108e60879a1f7d5c0dd19acb247ea\": container with ID starting with 4ede12fdad20bbb0030046d5587d6efe3a1108e60879a1f7d5c0dd19acb247ea not found: ID does not exist" containerID="4ede12fdad20bbb0030046d5587d6efe3a1108e60879a1f7d5c0dd19acb247ea"
Dec 08 18:05:44 crc kubenswrapper[5116]: I1208 18:05:44.658653 5116 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4ede12fdad20bbb0030046d5587d6efe3a1108e60879a1f7d5c0dd19acb247ea"} err="failed to get container status \"4ede12fdad20bbb0030046d5587d6efe3a1108e60879a1f7d5c0dd19acb247ea\": rpc error: code = NotFound desc = could not find container \"4ede12fdad20bbb0030046d5587d6efe3a1108e60879a1f7d5c0dd19acb247ea\": container with ID starting with 4ede12fdad20bbb0030046d5587d6efe3a1108e60879a1f7d5c0dd19acb247ea not found: ID does not exist"
Dec 08 18:05:44 crc kubenswrapper[5116]: I1208 18:05:44.658679 5116 scope.go:117] "RemoveContainer" containerID="c32a5f3b8ef054ac8376477c76bf2f696ff6914efa680822cbc30d6d6a176a55"
Dec 08 18:05:44 crc kubenswrapper[5116]: E1208 18:05:44.659312 5116 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c32a5f3b8ef054ac8376477c76bf2f696ff6914efa680822cbc30d6d6a176a55\": container with ID starting with c32a5f3b8ef054ac8376477c76bf2f696ff6914efa680822cbc30d6d6a176a55 not found: ID does not exist" containerID="c32a5f3b8ef054ac8376477c76bf2f696ff6914efa680822cbc30d6d6a176a55"
Dec 08 18:05:44 crc kubenswrapper[5116]: I1208 18:05:44.659375 5116 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c32a5f3b8ef054ac8376477c76bf2f696ff6914efa680822cbc30d6d6a176a55"} err="failed to get container status \"c32a5f3b8ef054ac8376477c76bf2f696ff6914efa680822cbc30d6d6a176a55\": rpc error: code = NotFound desc = could not find container \"c32a5f3b8ef054ac8376477c76bf2f696ff6914efa680822cbc30d6d6a176a55\": container with ID starting with c32a5f3b8ef054ac8376477c76bf2f696ff6914efa680822cbc30d6d6a176a55 not found: ID does not exist"
Dec 08 18:05:44 crc kubenswrapper[5116]: I1208 18:05:44.659405 5116 scope.go:117] "RemoveContainer" containerID="597111e448b3fe201bc672d50a663278cfdae7a7763f1559ad03d637501397ca"
Dec 08 18:05:44 crc kubenswrapper[5116]: E1208 18:05:44.659998 5116 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"597111e448b3fe201bc672d50a663278cfdae7a7763f1559ad03d637501397ca\": container with ID starting with 597111e448b3fe201bc672d50a663278cfdae7a7763f1559ad03d637501397ca not found: ID does not exist" containerID="597111e448b3fe201bc672d50a663278cfdae7a7763f1559ad03d637501397ca"
Dec 08 18:05:44 crc kubenswrapper[5116]: I1208 18:05:44.660040 5116 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"597111e448b3fe201bc672d50a663278cfdae7a7763f1559ad03d637501397ca"} err="failed to get container status \"597111e448b3fe201bc672d50a663278cfdae7a7763f1559ad03d637501397ca\": rpc error: code = NotFound desc = could not find container \"597111e448b3fe201bc672d50a663278cfdae7a7763f1559ad03d637501397ca\": container with ID starting with 597111e448b3fe201bc672d50a663278cfdae7a7763f1559ad03d637501397ca not found: ID does not exist"
Dec 08 18:05:44 crc kubenswrapper[5116]: I1208 18:05:44.689345 5116 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e4121a2a-1675-4cbe-8d97-9f58e3124357" path="/var/lib/kubelet/pods/e4121a2a-1675-4cbe-8d97-9f58e3124357/volumes"
Dec 08 18:05:47 crc kubenswrapper[5116]: E1208 18:05:47.680573 5116 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-vjj7h" podUID="496bc08d-961a-4732-b289-095c721f23ca"
Dec 08 18:05:55 crc kubenswrapper[5116]: E1208 18:05:55.680411 5116 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-fxj49" podUID="bb524cfa-b4aa-49e1-bd03-83dd9676a58c"
Dec 08 18:06:02 crc kubenswrapper[5116]: E1208 18:06:02.681509 5116 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-vjj7h" podUID="496bc08d-961a-4732-b289-095c721f23ca"
Dec 08 18:06:06 crc kubenswrapper[5116]: E1208 18:06:06.681081 5116 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-fxj49" podUID="bb524cfa-b4aa-49e1-bd03-83dd9676a58c"
Dec 08 18:06:17 crc kubenswrapper[5116]: E1208 18:06:17.680907 5116 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-vjj7h" podUID="496bc08d-961a-4732-b289-095c721f23ca"
Dec 08 18:06:21 crc kubenswrapper[5116]: E1208 18:06:21.680639 5116 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-fxj49" podUID="bb524cfa-b4aa-49e1-bd03-83dd9676a58c"
Dec 08 18:06:28 crc kubenswrapper[5116]: E1208 18:06:28.681654 5116 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-vjj7h" podUID="496bc08d-961a-4732-b289-095c721f23ca"
Dec 08 18:06:35 crc kubenswrapper[5116]: E1208 18:06:35.680492 5116 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-fxj49" podUID="bb524cfa-b4aa-49e1-bd03-83dd9676a58c"
Dec 08 18:06:43 crc kubenswrapper[5116]: E1208 18:06:43.749789 5116 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown" image="image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest"
Dec 08 18:06:43 crc kubenswrapper[5116]: E1208 18:06:43.750531 5116 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:registry-server,Image:image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:grpc,HostPort:0,ContainerPort:50051,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-lvt55,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[grpc_health_probe -addr=:50051],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[grpc_health_probe -addr=:50051],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:*false,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[grpc_health_probe -addr=:50051],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:10,TerminationGracePeriodSeconds:nil,},ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod infrawatch-operators-vjj7h_service-telemetry(496bc08d-961a-4732-b289-095c721f23ca): ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown" logger="UnhandledError"
Dec 08 18:06:43 crc kubenswrapper[5116]: E1208 18:06:43.751915 5116 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ErrImagePull: \"unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-vjj7h" podUID="496bc08d-961a-4732-b289-095c721f23ca"
Dec 08 18:06:46 crc kubenswrapper[5116]: I1208 18:06:46.772926 5116 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-hfjqp"]
Dec 08 18:06:46 crc kubenswrapper[5116]: I1208 18:06:46.775216 5116 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="e4121a2a-1675-4cbe-8d97-9f58e3124357" containerName="extract-content"
Dec 08 18:06:46 crc kubenswrapper[5116]: I1208 18:06:46.778439 5116 state_mem.go:107] "Deleted CPUSet assignment" podUID="e4121a2a-1675-4cbe-8d97-9f58e3124357" containerName="extract-content"
Dec 08 18:06:46 crc kubenswrapper[5116]: I1208 18:06:46.778645 5116 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="e4121a2a-1675-4cbe-8d97-9f58e3124357" containerName="registry-server"
Dec 08 18:06:46 crc kubenswrapper[5116]: I1208 18:06:46.778662 5116 state_mem.go:107] "Deleted CPUSet assignment" podUID="e4121a2a-1675-4cbe-8d97-9f58e3124357" containerName="registry-server"
Dec 08 18:06:46 crc kubenswrapper[5116]: I1208 18:06:46.778738 5116 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="e4121a2a-1675-4cbe-8d97-9f58e3124357" containerName="extract-utilities"
Dec 08 18:06:46 crc kubenswrapper[5116]: I1208 18:06:46.778748 5116 state_mem.go:107] "Deleted CPUSet assignment" podUID="e4121a2a-1675-4cbe-8d97-9f58e3124357" containerName="extract-utilities"
Dec 08 18:06:46 crc kubenswrapper[5116]: I1208 18:06:46.779122 5116 memory_manager.go:356] "RemoveStaleState removing state" podUID="e4121a2a-1675-4cbe-8d97-9f58e3124357" containerName="registry-server"
Dec 08 18:06:46 crc kubenswrapper[5116]: I1208 18:06:46.850273 5116 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-hfjqp"]
Dec 08 18:06:46 crc kubenswrapper[5116]: I1208 18:06:46.850453 5116 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-hfjqp"
Dec 08 18:06:46 crc kubenswrapper[5116]: I1208 18:06:46.949958 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5bb40a14-5fb8-4860-aaa4-b4430bd05bf7-utilities\") pod \"certified-operators-hfjqp\" (UID: \"5bb40a14-5fb8-4860-aaa4-b4430bd05bf7\") " pod="openshift-marketplace/certified-operators-hfjqp"
Dec 08 18:06:46 crc kubenswrapper[5116]: I1208 18:06:46.950047 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5bb40a14-5fb8-4860-aaa4-b4430bd05bf7-catalog-content\") pod \"certified-operators-hfjqp\" (UID: \"5bb40a14-5fb8-4860-aaa4-b4430bd05bf7\") " pod="openshift-marketplace/certified-operators-hfjqp"
Dec 08 18:06:46 crc kubenswrapper[5116]: I1208 18:06:46.950116 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xxq79\" (UniqueName: \"kubernetes.io/projected/5bb40a14-5fb8-4860-aaa4-b4430bd05bf7-kube-api-access-xxq79\") pod \"certified-operators-hfjqp\" (UID: \"5bb40a14-5fb8-4860-aaa4-b4430bd05bf7\") " pod="openshift-marketplace/certified-operators-hfjqp"
Dec 08 18:06:47 crc kubenswrapper[5116]: I1208 18:06:47.050873 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName:
\"kubernetes.io/empty-dir/5bb40a14-5fb8-4860-aaa4-b4430bd05bf7-catalog-content\") pod \"certified-operators-hfjqp\" (UID: \"5bb40a14-5fb8-4860-aaa4-b4430bd05bf7\") " pod="openshift-marketplace/certified-operators-hfjqp" Dec 08 18:06:47 crc kubenswrapper[5116]: I1208 18:06:47.051284 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-xxq79\" (UniqueName: \"kubernetes.io/projected/5bb40a14-5fb8-4860-aaa4-b4430bd05bf7-kube-api-access-xxq79\") pod \"certified-operators-hfjqp\" (UID: \"5bb40a14-5fb8-4860-aaa4-b4430bd05bf7\") " pod="openshift-marketplace/certified-operators-hfjqp" Dec 08 18:06:47 crc kubenswrapper[5116]: I1208 18:06:47.051358 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5bb40a14-5fb8-4860-aaa4-b4430bd05bf7-utilities\") pod \"certified-operators-hfjqp\" (UID: \"5bb40a14-5fb8-4860-aaa4-b4430bd05bf7\") " pod="openshift-marketplace/certified-operators-hfjqp" Dec 08 18:06:47 crc kubenswrapper[5116]: I1208 18:06:47.051619 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5bb40a14-5fb8-4860-aaa4-b4430bd05bf7-catalog-content\") pod \"certified-operators-hfjqp\" (UID: \"5bb40a14-5fb8-4860-aaa4-b4430bd05bf7\") " pod="openshift-marketplace/certified-operators-hfjqp" Dec 08 18:06:47 crc kubenswrapper[5116]: I1208 18:06:47.052063 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5bb40a14-5fb8-4860-aaa4-b4430bd05bf7-utilities\") pod \"certified-operators-hfjqp\" (UID: \"5bb40a14-5fb8-4860-aaa4-b4430bd05bf7\") " pod="openshift-marketplace/certified-operators-hfjqp" Dec 08 18:06:47 crc kubenswrapper[5116]: I1208 18:06:47.086347 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-xxq79\" (UniqueName: 
\"kubernetes.io/projected/5bb40a14-5fb8-4860-aaa4-b4430bd05bf7-kube-api-access-xxq79\") pod \"certified-operators-hfjqp\" (UID: \"5bb40a14-5fb8-4860-aaa4-b4430bd05bf7\") " pod="openshift-marketplace/certified-operators-hfjqp" Dec 08 18:06:47 crc kubenswrapper[5116]: I1208 18:06:47.171878 5116 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-hfjqp" Dec 08 18:06:47 crc kubenswrapper[5116]: I1208 18:06:47.409150 5116 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-hfjqp"] Dec 08 18:06:48 crc kubenswrapper[5116]: I1208 18:06:48.071744 5116 generic.go:358] "Generic (PLEG): container finished" podID="5bb40a14-5fb8-4860-aaa4-b4430bd05bf7" containerID="bde82cf312848fc8f370d3cdec8f909d44d375a4cd36d7779104f6fb77f0cd7a" exitCode=0 Dec 08 18:06:48 crc kubenswrapper[5116]: I1208 18:06:48.071885 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hfjqp" event={"ID":"5bb40a14-5fb8-4860-aaa4-b4430bd05bf7","Type":"ContainerDied","Data":"bde82cf312848fc8f370d3cdec8f909d44d375a4cd36d7779104f6fb77f0cd7a"} Dec 08 18:06:48 crc kubenswrapper[5116]: I1208 18:06:48.071914 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hfjqp" event={"ID":"5bb40a14-5fb8-4860-aaa4-b4430bd05bf7","Type":"ContainerStarted","Data":"43bc3c4783a651b9f5981c245ce2fb18f580bf789cc020c6df70a1c9613e5b07"} Dec 08 18:06:49 crc kubenswrapper[5116]: I1208 18:06:49.085964 5116 generic.go:358] "Generic (PLEG): container finished" podID="5bb40a14-5fb8-4860-aaa4-b4430bd05bf7" containerID="9774f2fa5d839c2b70343cd3fe7008228931aebd76dd5efb264037ef260e0488" exitCode=0 Dec 08 18:06:49 crc kubenswrapper[5116]: I1208 18:06:49.086084 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hfjqp" 
event={"ID":"5bb40a14-5fb8-4860-aaa4-b4430bd05bf7","Type":"ContainerDied","Data":"9774f2fa5d839c2b70343cd3fe7008228931aebd76dd5efb264037ef260e0488"} Dec 08 18:06:50 crc kubenswrapper[5116]: I1208 18:06:50.096347 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hfjqp" event={"ID":"5bb40a14-5fb8-4860-aaa4-b4430bd05bf7","Type":"ContainerStarted","Data":"55b274b5a9ff425bd0dc81f3ead86fa4368c0d7b32e4c546987534dda0695e6e"} Dec 08 18:06:50 crc kubenswrapper[5116]: I1208 18:06:50.117459 5116 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-hfjqp" podStartSLOduration=3.556419187 podStartE2EDuration="4.117440465s" podCreationTimestamp="2025-12-08 18:06:46 +0000 UTC" firstStartedPulling="2025-12-08 18:06:48.072882618 +0000 UTC m=+1477.870005862" lastFinishedPulling="2025-12-08 18:06:48.633903886 +0000 UTC m=+1478.431027140" observedRunningTime="2025-12-08 18:06:50.115566188 +0000 UTC m=+1479.912689432" watchObservedRunningTime="2025-12-08 18:06:50.117440465 +0000 UTC m=+1479.914563699" Dec 08 18:06:50 crc kubenswrapper[5116]: E1208 18:06:50.690546 5116 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" 
pod="service-telemetry/infrawatch-operators-fxj49" podUID="bb524cfa-b4aa-49e1-bd03-83dd9676a58c" Dec 08 18:06:57 crc kubenswrapper[5116]: I1208 18:06:57.173003 5116 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-hfjqp" Dec 08 18:06:57 crc kubenswrapper[5116]: I1208 18:06:57.173718 5116 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/certified-operators-hfjqp" Dec 08 18:06:57 crc kubenswrapper[5116]: I1208 18:06:57.215978 5116 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-hfjqp" Dec 08 18:06:58 crc kubenswrapper[5116]: I1208 18:06:58.203007 5116 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-hfjqp" Dec 08 18:06:58 crc kubenswrapper[5116]: I1208 18:06:58.253842 5116 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-hfjqp"] Dec 08 18:06:58 crc kubenswrapper[5116]: E1208 18:06:58.681064 5116 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-vjj7h" 
podUID="496bc08d-961a-4732-b289-095c721f23ca" Dec 08 18:07:00 crc kubenswrapper[5116]: I1208 18:07:00.173803 5116 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-hfjqp" podUID="5bb40a14-5fb8-4860-aaa4-b4430bd05bf7" containerName="registry-server" containerID="cri-o://55b274b5a9ff425bd0dc81f3ead86fa4368c0d7b32e4c546987534dda0695e6e" gracePeriod=2 Dec 08 18:07:01 crc kubenswrapper[5116]: I1208 18:07:01.060237 5116 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-hfjqp" Dec 08 18:07:01 crc kubenswrapper[5116]: I1208 18:07:01.085062 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xxq79\" (UniqueName: \"kubernetes.io/projected/5bb40a14-5fb8-4860-aaa4-b4430bd05bf7-kube-api-access-xxq79\") pod \"5bb40a14-5fb8-4860-aaa4-b4430bd05bf7\" (UID: \"5bb40a14-5fb8-4860-aaa4-b4430bd05bf7\") " Dec 08 18:07:01 crc kubenswrapper[5116]: I1208 18:07:01.085157 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5bb40a14-5fb8-4860-aaa4-b4430bd05bf7-catalog-content\") pod \"5bb40a14-5fb8-4860-aaa4-b4430bd05bf7\" (UID: \"5bb40a14-5fb8-4860-aaa4-b4430bd05bf7\") " Dec 08 18:07:01 crc kubenswrapper[5116]: I1208 18:07:01.085364 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5bb40a14-5fb8-4860-aaa4-b4430bd05bf7-utilities\") pod \"5bb40a14-5fb8-4860-aaa4-b4430bd05bf7\" (UID: \"5bb40a14-5fb8-4860-aaa4-b4430bd05bf7\") " Dec 08 18:07:01 crc kubenswrapper[5116]: I1208 18:07:01.086918 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5bb40a14-5fb8-4860-aaa4-b4430bd05bf7-utilities" (OuterVolumeSpecName: "utilities") pod "5bb40a14-5fb8-4860-aaa4-b4430bd05bf7" (UID: 
"5bb40a14-5fb8-4860-aaa4-b4430bd05bf7"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 08 18:07:01 crc kubenswrapper[5116]: I1208 18:07:01.092669 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5bb40a14-5fb8-4860-aaa4-b4430bd05bf7-kube-api-access-xxq79" (OuterVolumeSpecName: "kube-api-access-xxq79") pod "5bb40a14-5fb8-4860-aaa4-b4430bd05bf7" (UID: "5bb40a14-5fb8-4860-aaa4-b4430bd05bf7"). InnerVolumeSpecName "kube-api-access-xxq79". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 08 18:07:01 crc kubenswrapper[5116]: I1208 18:07:01.118814 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5bb40a14-5fb8-4860-aaa4-b4430bd05bf7-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5bb40a14-5fb8-4860-aaa4-b4430bd05bf7" (UID: "5bb40a14-5fb8-4860-aaa4-b4430bd05bf7"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 08 18:07:01 crc kubenswrapper[5116]: I1208 18:07:01.181407 5116 generic.go:358] "Generic (PLEG): container finished" podID="5bb40a14-5fb8-4860-aaa4-b4430bd05bf7" containerID="55b274b5a9ff425bd0dc81f3ead86fa4368c0d7b32e4c546987534dda0695e6e" exitCode=0 Dec 08 18:07:01 crc kubenswrapper[5116]: I1208 18:07:01.181513 5116 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-hfjqp" Dec 08 18:07:01 crc kubenswrapper[5116]: I1208 18:07:01.181561 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hfjqp" event={"ID":"5bb40a14-5fb8-4860-aaa4-b4430bd05bf7","Type":"ContainerDied","Data":"55b274b5a9ff425bd0dc81f3ead86fa4368c0d7b32e4c546987534dda0695e6e"} Dec 08 18:07:01 crc kubenswrapper[5116]: I1208 18:07:01.181605 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hfjqp" event={"ID":"5bb40a14-5fb8-4860-aaa4-b4430bd05bf7","Type":"ContainerDied","Data":"43bc3c4783a651b9f5981c245ce2fb18f580bf789cc020c6df70a1c9613e5b07"} Dec 08 18:07:01 crc kubenswrapper[5116]: I1208 18:07:01.181628 5116 scope.go:117] "RemoveContainer" containerID="55b274b5a9ff425bd0dc81f3ead86fa4368c0d7b32e4c546987534dda0695e6e" Dec 08 18:07:01 crc kubenswrapper[5116]: I1208 18:07:01.186801 5116 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5bb40a14-5fb8-4860-aaa4-b4430bd05bf7-utilities\") on node \"crc\" DevicePath \"\"" Dec 08 18:07:01 crc kubenswrapper[5116]: I1208 18:07:01.186836 5116 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-xxq79\" (UniqueName: \"kubernetes.io/projected/5bb40a14-5fb8-4860-aaa4-b4430bd05bf7-kube-api-access-xxq79\") on node \"crc\" DevicePath \"\"" Dec 08 18:07:01 crc kubenswrapper[5116]: I1208 18:07:01.186848 5116 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5bb40a14-5fb8-4860-aaa4-b4430bd05bf7-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 08 18:07:01 crc kubenswrapper[5116]: I1208 18:07:01.202046 5116 scope.go:117] "RemoveContainer" containerID="9774f2fa5d839c2b70343cd3fe7008228931aebd76dd5efb264037ef260e0488" Dec 08 18:07:01 crc kubenswrapper[5116]: I1208 18:07:01.215681 5116 kubelet.go:2553] "SyncLoop DELETE" 
source="api" pods=["openshift-marketplace/certified-operators-hfjqp"] Dec 08 18:07:01 crc kubenswrapper[5116]: I1208 18:07:01.220806 5116 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-hfjqp"] Dec 08 18:07:01 crc kubenswrapper[5116]: I1208 18:07:01.231951 5116 scope.go:117] "RemoveContainer" containerID="bde82cf312848fc8f370d3cdec8f909d44d375a4cd36d7779104f6fb77f0cd7a" Dec 08 18:07:01 crc kubenswrapper[5116]: I1208 18:07:01.251184 5116 scope.go:117] "RemoveContainer" containerID="55b274b5a9ff425bd0dc81f3ead86fa4368c0d7b32e4c546987534dda0695e6e" Dec 08 18:07:01 crc kubenswrapper[5116]: E1208 18:07:01.251664 5116 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"55b274b5a9ff425bd0dc81f3ead86fa4368c0d7b32e4c546987534dda0695e6e\": container with ID starting with 55b274b5a9ff425bd0dc81f3ead86fa4368c0d7b32e4c546987534dda0695e6e not found: ID does not exist" containerID="55b274b5a9ff425bd0dc81f3ead86fa4368c0d7b32e4c546987534dda0695e6e" Dec 08 18:07:01 crc kubenswrapper[5116]: I1208 18:07:01.251710 5116 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"55b274b5a9ff425bd0dc81f3ead86fa4368c0d7b32e4c546987534dda0695e6e"} err="failed to get container status \"55b274b5a9ff425bd0dc81f3ead86fa4368c0d7b32e4c546987534dda0695e6e\": rpc error: code = NotFound desc = could not find container \"55b274b5a9ff425bd0dc81f3ead86fa4368c0d7b32e4c546987534dda0695e6e\": container with ID starting with 55b274b5a9ff425bd0dc81f3ead86fa4368c0d7b32e4c546987534dda0695e6e not found: ID does not exist" Dec 08 18:07:01 crc kubenswrapper[5116]: I1208 18:07:01.251736 5116 scope.go:117] "RemoveContainer" containerID="9774f2fa5d839c2b70343cd3fe7008228931aebd76dd5efb264037ef260e0488" Dec 08 18:07:01 crc kubenswrapper[5116]: E1208 18:07:01.252163 5116 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc 
= could not find container \"9774f2fa5d839c2b70343cd3fe7008228931aebd76dd5efb264037ef260e0488\": container with ID starting with 9774f2fa5d839c2b70343cd3fe7008228931aebd76dd5efb264037ef260e0488 not found: ID does not exist" containerID="9774f2fa5d839c2b70343cd3fe7008228931aebd76dd5efb264037ef260e0488" Dec 08 18:07:01 crc kubenswrapper[5116]: I1208 18:07:01.252185 5116 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9774f2fa5d839c2b70343cd3fe7008228931aebd76dd5efb264037ef260e0488"} err="failed to get container status \"9774f2fa5d839c2b70343cd3fe7008228931aebd76dd5efb264037ef260e0488\": rpc error: code = NotFound desc = could not find container \"9774f2fa5d839c2b70343cd3fe7008228931aebd76dd5efb264037ef260e0488\": container with ID starting with 9774f2fa5d839c2b70343cd3fe7008228931aebd76dd5efb264037ef260e0488 not found: ID does not exist" Dec 08 18:07:01 crc kubenswrapper[5116]: I1208 18:07:01.252224 5116 scope.go:117] "RemoveContainer" containerID="bde82cf312848fc8f370d3cdec8f909d44d375a4cd36d7779104f6fb77f0cd7a" Dec 08 18:07:01 crc kubenswrapper[5116]: E1208 18:07:01.252440 5116 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bde82cf312848fc8f370d3cdec8f909d44d375a4cd36d7779104f6fb77f0cd7a\": container with ID starting with bde82cf312848fc8f370d3cdec8f909d44d375a4cd36d7779104f6fb77f0cd7a not found: ID does not exist" containerID="bde82cf312848fc8f370d3cdec8f909d44d375a4cd36d7779104f6fb77f0cd7a" Dec 08 18:07:01 crc kubenswrapper[5116]: I1208 18:07:01.252462 5116 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bde82cf312848fc8f370d3cdec8f909d44d375a4cd36d7779104f6fb77f0cd7a"} err="failed to get container status \"bde82cf312848fc8f370d3cdec8f909d44d375a4cd36d7779104f6fb77f0cd7a\": rpc error: code = NotFound desc = could not find container 
\"bde82cf312848fc8f370d3cdec8f909d44d375a4cd36d7779104f6fb77f0cd7a\": container with ID starting with bde82cf312848fc8f370d3cdec8f909d44d375a4cd36d7779104f6fb77f0cd7a not found: ID does not exist" Dec 08 18:07:02 crc kubenswrapper[5116]: I1208 18:07:02.691395 5116 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5bb40a14-5fb8-4860-aaa4-b4430bd05bf7" path="/var/lib/kubelet/pods/5bb40a14-5fb8-4860-aaa4-b4430bd05bf7/volumes" Dec 08 18:07:05 crc kubenswrapper[5116]: E1208 18:07:05.680374 5116 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-fxj49" podUID="bb524cfa-b4aa-49e1-bd03-83dd9676a58c" Dec 08 18:07:09 crc kubenswrapper[5116]: E1208 18:07:09.680455 5116 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading 
manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-vjj7h" podUID="496bc08d-961a-4732-b289-095c721f23ca" Dec 08 18:07:11 crc kubenswrapper[5116]: I1208 18:07:11.185866 5116 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-controller-manager_controller-manager-766495d899-4wfjn_1e2dc1a4-c295-4f2b-b167-fdb40f1e6b0d/controller-manager/1.log" Dec 08 18:07:11 crc kubenswrapper[5116]: I1208 18:07:11.186166 5116 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-controller-manager_controller-manager-766495d899-4wfjn_1e2dc1a4-c295-4f2b-b167-fdb40f1e6b0d/controller-manager/1.log" Dec 08 18:07:11 crc kubenswrapper[5116]: I1208 18:07:11.209886 5116 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-8wqqf_84b46b92-c78c-44c8-a27b-4a20c47acd75/kube-multus/0.log" Dec 08 18:07:11 crc kubenswrapper[5116]: I1208 18:07:11.211773 5116 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-8wqqf_84b46b92-c78c-44c8-a27b-4a20c47acd75/kube-multus/0.log" Dec 08 18:07:11 crc kubenswrapper[5116]: I1208 18:07:11.220840 5116 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_9f0bc7fcb0822a2c13eb2d22cd8c0641/kube-controller-manager/0.log" Dec 08 18:07:11 crc kubenswrapper[5116]: I1208 18:07:11.221150 5116 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_9f0bc7fcb0822a2c13eb2d22cd8c0641/kube-controller-manager/0.log" Dec 08 18:07:17 crc kubenswrapper[5116]: E1208 18:07:17.681653 5116 pod_workers.go:1301] "Error syncing pod, skipping" 
err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-fxj49" podUID="bb524cfa-b4aa-49e1-bd03-83dd9676a58c" Dec 08 18:07:22 crc kubenswrapper[5116]: E1208 18:07:22.681502 5116 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-vjj7h" podUID="496bc08d-961a-4732-b289-095c721f23ca" Dec 08 18:07:31 crc kubenswrapper[5116]: E1208 18:07:31.680393 5116 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-fxj49" podUID="bb524cfa-b4aa-49e1-bd03-83dd9676a58c" Dec 08 18:07:33 crc kubenswrapper[5116]: I1208 18:07:33.335538 5116 patch_prober.go:28] interesting pod/machine-config-daemon-frh5r container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 08 18:07:33 crc kubenswrapper[5116]: I1208 18:07:33.335646 5116 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-frh5r" podUID="f2e88345-fa91-4bb3-bd9d-a89a8293bffe" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 08 18:07:35 crc kubenswrapper[5116]: E1208 18:07:35.680200 5116 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\": ErrImagePull: unable to pull image or OCI artifact: 
pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-vjj7h" podUID="496bc08d-961a-4732-b289-095c721f23ca" Dec 08 18:07:43 crc kubenswrapper[5116]: E1208 18:07:43.680552 5116 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-fxj49" podUID="bb524cfa-b4aa-49e1-bd03-83dd9676a58c" Dec 08 18:07:47 crc kubenswrapper[5116]: E1208 18:07:47.681044 5116 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: 
initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-vjj7h" podUID="496bc08d-961a-4732-b289-095c721f23ca" Dec 08 18:07:58 crc kubenswrapper[5116]: E1208 18:07:58.680744 5116 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-fxj49" podUID="bb524cfa-b4aa-49e1-bd03-83dd9676a58c" Dec 08 18:07:58 crc kubenswrapper[5116]: E1208 18:07:58.680788 5116 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source 
docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-vjj7h" podUID="496bc08d-961a-4732-b289-095c721f23ca" Dec 08 18:08:03 crc kubenswrapper[5116]: I1208 18:08:03.335561 5116 patch_prober.go:28] interesting pod/machine-config-daemon-frh5r container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 08 18:08:03 crc kubenswrapper[5116]: I1208 18:08:03.335924 5116 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-frh5r" podUID="f2e88345-fa91-4bb3-bd9d-a89a8293bffe" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 08 18:08:09 crc kubenswrapper[5116]: E1208 18:08:09.681084 5116 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest 
unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-vjj7h" podUID="496bc08d-961a-4732-b289-095c721f23ca" Dec 08 18:08:10 crc kubenswrapper[5116]: E1208 18:08:10.689305 5116 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-fxj49" podUID="bb524cfa-b4aa-49e1-bd03-83dd9676a58c" Dec 08 18:08:24 crc kubenswrapper[5116]: E1208 18:08:24.681475 5116 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact 
err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-fxj49" podUID="bb524cfa-b4aa-49e1-bd03-83dd9676a58c" Dec 08 18:08:24 crc kubenswrapper[5116]: E1208 18:08:24.682500 5116 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-vjj7h" podUID="496bc08d-961a-4732-b289-095c721f23ca" Dec 08 18:08:33 crc kubenswrapper[5116]: I1208 18:08:33.335737 5116 patch_prober.go:28] interesting pod/machine-config-daemon-frh5r container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 08 18:08:33 crc kubenswrapper[5116]: I1208 18:08:33.336564 5116 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-frh5r" podUID="f2e88345-fa91-4bb3-bd9d-a89a8293bffe" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: 
connect: connection refused" Dec 08 18:08:33 crc kubenswrapper[5116]: I1208 18:08:33.336645 5116 kubelet.go:2658] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-frh5r" Dec 08 18:08:33 crc kubenswrapper[5116]: I1208 18:08:33.337447 5116 kuberuntime_manager.go:1107] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"7359167b795a41ab380b781498dc80dbd062d704d3960c14f5b4c6518f6abe53"} pod="openshift-machine-config-operator/machine-config-daemon-frh5r" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 08 18:08:33 crc kubenswrapper[5116]: I1208 18:08:33.337619 5116 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-frh5r" podUID="f2e88345-fa91-4bb3-bd9d-a89a8293bffe" containerName="machine-config-daemon" containerID="cri-o://7359167b795a41ab380b781498dc80dbd062d704d3960c14f5b4c6518f6abe53" gracePeriod=600 Dec 08 18:08:33 crc kubenswrapper[5116]: E1208 18:08:33.489988 5116 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-frh5r_openshift-machine-config-operator(f2e88345-fa91-4bb3-bd9d-a89a8293bffe)\"" pod="openshift-machine-config-operator/machine-config-daemon-frh5r" podUID="f2e88345-fa91-4bb3-bd9d-a89a8293bffe" Dec 08 18:08:33 crc kubenswrapper[5116]: I1208 18:08:33.858067 5116 generic.go:358] "Generic (PLEG): container finished" podID="f2e88345-fa91-4bb3-bd9d-a89a8293bffe" containerID="7359167b795a41ab380b781498dc80dbd062d704d3960c14f5b4c6518f6abe53" exitCode=0 Dec 08 18:08:33 crc kubenswrapper[5116]: I1208 18:08:33.858189 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-frh5r" 
event={"ID":"f2e88345-fa91-4bb3-bd9d-a89a8293bffe","Type":"ContainerDied","Data":"7359167b795a41ab380b781498dc80dbd062d704d3960c14f5b4c6518f6abe53"} Dec 08 18:08:33 crc kubenswrapper[5116]: I1208 18:08:33.858291 5116 scope.go:117] "RemoveContainer" containerID="340d701cbfea2d8290edea08fe017592d17d2b3a6693505e44e77c36c8bb02a1" Dec 08 18:08:33 crc kubenswrapper[5116]: I1208 18:08:33.858839 5116 scope.go:117] "RemoveContainer" containerID="7359167b795a41ab380b781498dc80dbd062d704d3960c14f5b4c6518f6abe53" Dec 08 18:08:33 crc kubenswrapper[5116]: E1208 18:08:33.859394 5116 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-frh5r_openshift-machine-config-operator(f2e88345-fa91-4bb3-bd9d-a89a8293bffe)\"" pod="openshift-machine-config-operator/machine-config-daemon-frh5r" podUID="f2e88345-fa91-4bb3-bd9d-a89a8293bffe" Dec 08 18:08:38 crc kubenswrapper[5116]: E1208 18:08:38.767517 5116 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown" image="image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest" Dec 08 18:08:38 crc kubenswrapper[5116]: E1208 18:08:38.768559 5116 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:registry-server,Image:image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:grpc,HostPort:0,ContainerPort:50051,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-jws5h,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[grpc_health_probe -addr=:50051],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[grpc_health_probe -addr=:50051],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:*false,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[grpc_health_probe 
-addr=:50051],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:10,TerminationGracePeriodSeconds:nil,},ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod infrawatch-operators-fxj49_service-telemetry(bb524cfa-b4aa-49e1-bd03-83dd9676a58c): ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown" logger="UnhandledError" Dec 08 18:08:38 crc kubenswrapper[5116]: E1208 18:08:38.769893 5116 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ErrImagePull: \"unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-fxj49" podUID="bb524cfa-b4aa-49e1-bd03-83dd9676a58c" Dec 08 18:08:39 crc kubenswrapper[5116]: E1208 18:08:39.681129 5116 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image 
\\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-vjj7h" podUID="496bc08d-961a-4732-b289-095c721f23ca" Dec 08 18:08:45 crc kubenswrapper[5116]: I1208 18:08:45.680409 5116 scope.go:117] "RemoveContainer" containerID="7359167b795a41ab380b781498dc80dbd062d704d3960c14f5b4c6518f6abe53" Dec 08 18:08:45 crc kubenswrapper[5116]: E1208 18:08:45.681735 5116 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-frh5r_openshift-machine-config-operator(f2e88345-fa91-4bb3-bd9d-a89a8293bffe)\"" pod="openshift-machine-config-operator/machine-config-daemon-frh5r" podUID="f2e88345-fa91-4bb3-bd9d-a89a8293bffe" Dec 08 18:08:51 crc kubenswrapper[5116]: E1208 18:08:51.681364 5116 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading 
manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-vjj7h" podUID="496bc08d-961a-4732-b289-095c721f23ca" Dec 08 18:08:52 crc kubenswrapper[5116]: E1208 18:08:52.693050 5116 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-fxj49" podUID="bb524cfa-b4aa-49e1-bd03-83dd9676a58c" Dec 08 18:08:57 crc kubenswrapper[5116]: I1208 18:08:57.680271 5116 scope.go:117] "RemoveContainer" containerID="7359167b795a41ab380b781498dc80dbd062d704d3960c14f5b4c6518f6abe53" Dec 08 18:08:57 crc kubenswrapper[5116]: E1208 18:08:57.680857 5116 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-frh5r_openshift-machine-config-operator(f2e88345-fa91-4bb3-bd9d-a89a8293bffe)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-frh5r" podUID="f2e88345-fa91-4bb3-bd9d-a89a8293bffe" Dec 08 18:09:04 crc kubenswrapper[5116]: I1208 18:09:04.691283 5116 provider.go:93] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Dec 08 18:09:04 crc kubenswrapper[5116]: E1208 18:09:04.694488 5116 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-fxj49" podUID="bb524cfa-b4aa-49e1-bd03-83dd9676a58c" Dec 08 18:09:05 crc kubenswrapper[5116]: E1208 18:09:05.683554 5116 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build 
image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-vjj7h" podUID="496bc08d-961a-4732-b289-095c721f23ca" Dec 08 18:09:11 crc kubenswrapper[5116]: I1208 18:09:11.680285 5116 scope.go:117] "RemoveContainer" containerID="7359167b795a41ab380b781498dc80dbd062d704d3960c14f5b4c6518f6abe53" Dec 08 18:09:11 crc kubenswrapper[5116]: E1208 18:09:11.681226 5116 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-frh5r_openshift-machine-config-operator(f2e88345-fa91-4bb3-bd9d-a89a8293bffe)\"" pod="openshift-machine-config-operator/machine-config-daemon-frh5r" podUID="f2e88345-fa91-4bb3-bd9d-a89a8293bffe" Dec 08 18:09:13 crc kubenswrapper[5116]: I1208 18:09:13.071136 5116 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-must-gather-lj6rs/must-gather-922fc"] Dec 08 18:09:13 crc kubenswrapper[5116]: I1208 18:09:13.072347 5116 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="5bb40a14-5fb8-4860-aaa4-b4430bd05bf7" containerName="registry-server" Dec 08 18:09:13 crc kubenswrapper[5116]: I1208 18:09:13.072384 5116 state_mem.go:107] "Deleted CPUSet assignment" podUID="5bb40a14-5fb8-4860-aaa4-b4430bd05bf7" containerName="registry-server" Dec 08 18:09:13 crc kubenswrapper[5116]: I1208 18:09:13.072418 5116 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="5bb40a14-5fb8-4860-aaa4-b4430bd05bf7" containerName="extract-utilities" Dec 08 18:09:13 crc kubenswrapper[5116]: I1208 18:09:13.072428 5116 state_mem.go:107] "Deleted CPUSet assignment" podUID="5bb40a14-5fb8-4860-aaa4-b4430bd05bf7" containerName="extract-utilities" Dec 08 18:09:13 crc kubenswrapper[5116]: I1208 18:09:13.072444 5116 
cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="5bb40a14-5fb8-4860-aaa4-b4430bd05bf7" containerName="extract-content" Dec 08 18:09:13 crc kubenswrapper[5116]: I1208 18:09:13.072454 5116 state_mem.go:107] "Deleted CPUSet assignment" podUID="5bb40a14-5fb8-4860-aaa4-b4430bd05bf7" containerName="extract-content" Dec 08 18:09:13 crc kubenswrapper[5116]: I1208 18:09:13.072626 5116 memory_manager.go:356] "RemoveStaleState removing state" podUID="5bb40a14-5fb8-4860-aaa4-b4430bd05bf7" containerName="registry-server" Dec 08 18:09:13 crc kubenswrapper[5116]: I1208 18:09:13.080461 5116 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-lj6rs/must-gather-922fc"] Dec 08 18:09:13 crc kubenswrapper[5116]: I1208 18:09:13.080625 5116 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-lj6rs/must-gather-922fc" Dec 08 18:09:13 crc kubenswrapper[5116]: I1208 18:09:13.083204 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-must-gather-lj6rs\"/\"kube-root-ca.crt\"" Dec 08 18:09:13 crc kubenswrapper[5116]: I1208 18:09:13.083570 5116 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-must-gather-lj6rs\"/\"openshift-service-ca.crt\"" Dec 08 18:09:13 crc kubenswrapper[5116]: I1208 18:09:13.083753 5116 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-must-gather-lj6rs\"/\"default-dockercfg-z8crc\"" Dec 08 18:09:13 crc kubenswrapper[5116]: I1208 18:09:13.257452 5116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tm2dw\" (UniqueName: \"kubernetes.io/projected/a98c2218-70d0-471b-9dda-5c2d2175b9a8-kube-api-access-tm2dw\") pod \"must-gather-922fc\" (UID: \"a98c2218-70d0-471b-9dda-5c2d2175b9a8\") " pod="openshift-must-gather-lj6rs/must-gather-922fc" Dec 08 18:09:13 crc kubenswrapper[5116]: I1208 18:09:13.257651 5116 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/a98c2218-70d0-471b-9dda-5c2d2175b9a8-must-gather-output\") pod \"must-gather-922fc\" (UID: \"a98c2218-70d0-471b-9dda-5c2d2175b9a8\") " pod="openshift-must-gather-lj6rs/must-gather-922fc" Dec 08 18:09:13 crc kubenswrapper[5116]: I1208 18:09:13.359470 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/a98c2218-70d0-471b-9dda-5c2d2175b9a8-must-gather-output\") pod \"must-gather-922fc\" (UID: \"a98c2218-70d0-471b-9dda-5c2d2175b9a8\") " pod="openshift-must-gather-lj6rs/must-gather-922fc" Dec 08 18:09:13 crc kubenswrapper[5116]: I1208 18:09:13.359580 5116 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-tm2dw\" (UniqueName: \"kubernetes.io/projected/a98c2218-70d0-471b-9dda-5c2d2175b9a8-kube-api-access-tm2dw\") pod \"must-gather-922fc\" (UID: \"a98c2218-70d0-471b-9dda-5c2d2175b9a8\") " pod="openshift-must-gather-lj6rs/must-gather-922fc" Dec 08 18:09:13 crc kubenswrapper[5116]: I1208 18:09:13.359938 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/a98c2218-70d0-471b-9dda-5c2d2175b9a8-must-gather-output\") pod \"must-gather-922fc\" (UID: \"a98c2218-70d0-471b-9dda-5c2d2175b9a8\") " pod="openshift-must-gather-lj6rs/must-gather-922fc" Dec 08 18:09:13 crc kubenswrapper[5116]: I1208 18:09:13.384280 5116 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-tm2dw\" (UniqueName: \"kubernetes.io/projected/a98c2218-70d0-471b-9dda-5c2d2175b9a8-kube-api-access-tm2dw\") pod \"must-gather-922fc\" (UID: \"a98c2218-70d0-471b-9dda-5c2d2175b9a8\") " pod="openshift-must-gather-lj6rs/must-gather-922fc" Dec 08 18:09:13 crc kubenswrapper[5116]: I1208 18:09:13.396728 5116 util.go:30] "No 
sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-lj6rs/must-gather-922fc" Dec 08 18:09:13 crc kubenswrapper[5116]: I1208 18:09:13.648036 5116 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-lj6rs/must-gather-922fc"] Dec 08 18:09:14 crc kubenswrapper[5116]: I1208 18:09:14.162563 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-lj6rs/must-gather-922fc" event={"ID":"a98c2218-70d0-471b-9dda-5c2d2175b9a8","Type":"ContainerStarted","Data":"23741e2173c1733554faece9c1b3c1b22eafd994ad978b802017b15597ef8d83"} Dec 08 18:09:15 crc kubenswrapper[5116]: E1208 18:09:15.681008 5116 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-fxj49" podUID="bb524cfa-b4aa-49e1-bd03-83dd9676a58c" Dec 08 18:09:18 crc kubenswrapper[5116]: E1208 18:09:18.681497 5116 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: 
initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-vjj7h" podUID="496bc08d-961a-4732-b289-095c721f23ca" Dec 08 18:09:20 crc kubenswrapper[5116]: I1208 18:09:20.223538 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-lj6rs/must-gather-922fc" event={"ID":"a98c2218-70d0-471b-9dda-5c2d2175b9a8","Type":"ContainerStarted","Data":"90909cff8aa54751d9e2c50ca8865450f704de4a2f4ee828760da026054eea50"} Dec 08 18:09:21 crc kubenswrapper[5116]: I1208 18:09:21.231351 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-lj6rs/must-gather-922fc" event={"ID":"a98c2218-70d0-471b-9dda-5c2d2175b9a8","Type":"ContainerStarted","Data":"9593d56470373839933a426891f4ada885e6b3e8cac76e18bd70bbd6242934bc"} Dec 08 18:09:21 crc kubenswrapper[5116]: I1208 18:09:21.253387 5116 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-lj6rs/must-gather-922fc" podStartSLOduration=2.031786816 podStartE2EDuration="8.253366426s" podCreationTimestamp="2025-12-08 18:09:13 +0000 UTC" firstStartedPulling="2025-12-08 18:09:13.658473175 +0000 UTC m=+1623.455596409" lastFinishedPulling="2025-12-08 18:09:19.880052785 +0000 UTC m=+1629.677176019" observedRunningTime="2025-12-08 18:09:21.245639946 +0000 UTC m=+1631.042763190" watchObservedRunningTime="2025-12-08 18:09:21.253366426 +0000 UTC m=+1631.050489660" Dec 08 18:09:22 crc kubenswrapper[5116]: I1208 18:09:22.689330 5116 scope.go:117] "RemoveContainer" 
containerID="7359167b795a41ab380b781498dc80dbd062d704d3960c14f5b4c6518f6abe53" Dec 08 18:09:22 crc kubenswrapper[5116]: E1208 18:09:22.689631 5116 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-frh5r_openshift-machine-config-operator(f2e88345-fa91-4bb3-bd9d-a89a8293bffe)\"" pod="openshift-machine-config-operator/machine-config-daemon-frh5r" podUID="f2e88345-fa91-4bb3-bd9d-a89a8293bffe" Dec 08 18:09:23 crc kubenswrapper[5116]: I1208 18:09:23.062609 5116 ???:1] "http: TLS handshake error from 192.168.126.11:36780: no serving certificate available for the kubelet" Dec 08 18:09:27 crc kubenswrapper[5116]: E1208 18:09:27.680790 5116 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-fxj49" podUID="bb524cfa-b4aa-49e1-bd03-83dd9676a58c" Dec 08 18:09:29 crc kubenswrapper[5116]: E1208 18:09:29.737688 5116 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = unable to pull image or OCI artifact: pull image err: initializing source 
docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown" image="image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest" Dec 08 18:09:29 crc kubenswrapper[5116]: E1208 18:09:29.737912 5116 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:registry-server,Image:image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:grpc,HostPort:0,ContainerPort:50051,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-lvt55,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[grpc_health_probe -addr=:50051],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[grpc_health_probe 
-addr=:50051],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:*false,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[grpc_health_probe -addr=:50051],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:10,TerminationGracePeriodSeconds:nil,},ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod infrawatch-operators-vjj7h_service-telemetry(496bc08d-961a-4732-b289-095c721f23ca): ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown" logger="UnhandledError" Dec 08 18:09:29 crc kubenswrapper[5116]: E1208 18:09:29.739788 5116 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ErrImagePull: \"unable to pull image or OCI artifact: pull image err: initializing source 
docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-vjj7h" podUID="496bc08d-961a-4732-b289-095c721f23ca" Dec 08 18:09:37 crc kubenswrapper[5116]: I1208 18:09:37.639061 5116 ???:1] "http: TLS handshake error from 192.168.126.11:41442: no serving certificate available for the kubelet" Dec 08 18:09:37 crc kubenswrapper[5116]: I1208 18:09:37.680176 5116 scope.go:117] "RemoveContainer" containerID="7359167b795a41ab380b781498dc80dbd062d704d3960c14f5b4c6518f6abe53" Dec 08 18:09:37 crc kubenswrapper[5116]: E1208 18:09:37.680703 5116 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-frh5r_openshift-machine-config-operator(f2e88345-fa91-4bb3-bd9d-a89a8293bffe)\"" pod="openshift-machine-config-operator/machine-config-daemon-frh5r" podUID="f2e88345-fa91-4bb3-bd9d-a89a8293bffe" Dec 08 18:09:42 crc kubenswrapper[5116]: E1208 18:09:42.680211 5116 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in 
image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-fxj49" podUID="bb524cfa-b4aa-49e1-bd03-83dd9676a58c" Dec 08 18:09:43 crc kubenswrapper[5116]: E1208 18:09:43.681496 5116 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-vjj7h" podUID="496bc08d-961a-4732-b289-095c721f23ca" Dec 08 18:09:48 crc kubenswrapper[5116]: I1208 18:09:48.680218 5116 scope.go:117] "RemoveContainer" containerID="7359167b795a41ab380b781498dc80dbd062d704d3960c14f5b4c6518f6abe53" Dec 08 18:09:48 crc kubenswrapper[5116]: E1208 18:09:48.680907 5116 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-frh5r_openshift-machine-config-operator(f2e88345-fa91-4bb3-bd9d-a89a8293bffe)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-frh5r" podUID="f2e88345-fa91-4bb3-bd9d-a89a8293bffe" Dec 08 18:09:56 crc kubenswrapper[5116]: E1208 18:09:56.680361 5116 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-fxj49" podUID="bb524cfa-b4aa-49e1-bd03-83dd9676a58c" Dec 08 18:09:57 crc kubenswrapper[5116]: E1208 18:09:57.680779 5116 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" 
pod="service-telemetry/infrawatch-operators-vjj7h" podUID="496bc08d-961a-4732-b289-095c721f23ca" Dec 08 18:10:01 crc kubenswrapper[5116]: I1208 18:10:01.680061 5116 scope.go:117] "RemoveContainer" containerID="7359167b795a41ab380b781498dc80dbd062d704d3960c14f5b4c6518f6abe53" Dec 08 18:10:01 crc kubenswrapper[5116]: E1208 18:10:01.680821 5116 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-frh5r_openshift-machine-config-operator(f2e88345-fa91-4bb3-bd9d-a89a8293bffe)\"" pod="openshift-machine-config-operator/machine-config-daemon-frh5r" podUID="f2e88345-fa91-4bb3-bd9d-a89a8293bffe" Dec 08 18:10:01 crc kubenswrapper[5116]: I1208 18:10:01.752525 5116 ???:1] "http: TLS handshake error from 192.168.126.11:42746: no serving certificate available for the kubelet" Dec 08 18:10:01 crc kubenswrapper[5116]: I1208 18:10:01.937472 5116 ???:1] "http: TLS handshake error from 192.168.126.11:42752: no serving certificate available for the kubelet" Dec 08 18:10:01 crc kubenswrapper[5116]: I1208 18:10:01.954539 5116 ???:1] "http: TLS handshake error from 192.168.126.11:42756: no serving certificate available for the kubelet" Dec 08 18:10:09 crc kubenswrapper[5116]: E1208 18:10:09.680818 5116 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest 
unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-fxj49" podUID="bb524cfa-b4aa-49e1-bd03-83dd9676a58c" Dec 08 18:10:11 crc kubenswrapper[5116]: E1208 18:10:11.681305 5116 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-vjj7h" podUID="496bc08d-961a-4732-b289-095c721f23ca" Dec 08 18:10:13 crc kubenswrapper[5116]: I1208 18:10:13.680409 5116 scope.go:117] "RemoveContainer" containerID="7359167b795a41ab380b781498dc80dbd062d704d3960c14f5b4c6518f6abe53" Dec 08 18:10:13 crc kubenswrapper[5116]: E1208 18:10:13.681136 5116 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-frh5r_openshift-machine-config-operator(f2e88345-fa91-4bb3-bd9d-a89a8293bffe)\"" pod="openshift-machine-config-operator/machine-config-daemon-frh5r" podUID="f2e88345-fa91-4bb3-bd9d-a89a8293bffe" Dec 08 18:10:14 crc 
kubenswrapper[5116]: I1208 18:10:14.257126 5116 ???:1] "http: TLS handshake error from 192.168.126.11:50464: no serving certificate available for the kubelet" Dec 08 18:10:14 crc kubenswrapper[5116]: I1208 18:10:14.441622 5116 ???:1] "http: TLS handshake error from 192.168.126.11:50472: no serving certificate available for the kubelet" Dec 08 18:10:14 crc kubenswrapper[5116]: I1208 18:10:14.510873 5116 ???:1] "http: TLS handshake error from 192.168.126.11:50486: no serving certificate available for the kubelet" Dec 08 18:10:22 crc kubenswrapper[5116]: E1208 18:10:22.680740 5116 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-fxj49" podUID="bb524cfa-b4aa-49e1-bd03-83dd9676a58c" Dec 08 18:10:22 crc kubenswrapper[5116]: E1208 18:10:22.680963 5116 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source 
docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-vjj7h" podUID="496bc08d-961a-4732-b289-095c721f23ca" Dec 08 18:10:24 crc kubenswrapper[5116]: I1208 18:10:24.680650 5116 scope.go:117] "RemoveContainer" containerID="7359167b795a41ab380b781498dc80dbd062d704d3960c14f5b4c6518f6abe53" Dec 08 18:10:24 crc kubenswrapper[5116]: E1208 18:10:24.680962 5116 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-frh5r_openshift-machine-config-operator(f2e88345-fa91-4bb3-bd9d-a89a8293bffe)\"" pod="openshift-machine-config-operator/machine-config-daemon-frh5r" podUID="f2e88345-fa91-4bb3-bd9d-a89a8293bffe" Dec 08 18:10:29 crc kubenswrapper[5116]: I1208 18:10:29.575637 5116 ???:1] "http: TLS handshake error from 192.168.126.11:52414: no serving certificate available for the kubelet" Dec 08 18:10:29 crc kubenswrapper[5116]: I1208 18:10:29.820430 5116 ???:1] "http: TLS handshake error from 192.168.126.11:52426: no serving certificate available for the kubelet" Dec 08 18:10:29 crc kubenswrapper[5116]: I1208 18:10:29.825725 5116 ???:1] "http: TLS handshake error from 192.168.126.11:52434: no serving certificate available for the kubelet" Dec 08 18:10:29 crc kubenswrapper[5116]: I1208 18:10:29.842849 5116 ???:1] "http: TLS handshake error from 192.168.126.11:52446: no serving certificate available for the kubelet" Dec 08 18:10:29 crc kubenswrapper[5116]: I1208 
18:10:29.982226 5116 ???:1] "http: TLS handshake error from 192.168.126.11:52460: no serving certificate available for the kubelet" Dec 08 18:10:29 crc kubenswrapper[5116]: I1208 18:10:29.982785 5116 ???:1] "http: TLS handshake error from 192.168.126.11:52466: no serving certificate available for the kubelet" Dec 08 18:10:30 crc kubenswrapper[5116]: I1208 18:10:30.023684 5116 ???:1] "http: TLS handshake error from 192.168.126.11:52474: no serving certificate available for the kubelet" Dec 08 18:10:30 crc kubenswrapper[5116]: I1208 18:10:30.135813 5116 ???:1] "http: TLS handshake error from 192.168.126.11:52488: no serving certificate available for the kubelet" Dec 08 18:10:30 crc kubenswrapper[5116]: I1208 18:10:30.386914 5116 ???:1] "http: TLS handshake error from 192.168.126.11:52496: no serving certificate available for the kubelet" Dec 08 18:10:30 crc kubenswrapper[5116]: I1208 18:10:30.387602 5116 ???:1] "http: TLS handshake error from 192.168.126.11:52504: no serving certificate available for the kubelet" Dec 08 18:10:30 crc kubenswrapper[5116]: I1208 18:10:30.409673 5116 ???:1] "http: TLS handshake error from 192.168.126.11:52520: no serving certificate available for the kubelet" Dec 08 18:10:30 crc kubenswrapper[5116]: I1208 18:10:30.613694 5116 ???:1] "http: TLS handshake error from 192.168.126.11:52530: no serving certificate available for the kubelet" Dec 08 18:10:30 crc kubenswrapper[5116]: I1208 18:10:30.630744 5116 ???:1] "http: TLS handshake error from 192.168.126.11:52540: no serving certificate available for the kubelet" Dec 08 18:10:30 crc kubenswrapper[5116]: I1208 18:10:30.638028 5116 ???:1] "http: TLS handshake error from 192.168.126.11:52550: no serving certificate available for the kubelet" Dec 08 18:10:30 crc kubenswrapper[5116]: I1208 18:10:30.767061 5116 ???:1] "http: TLS handshake error from 192.168.126.11:52560: no serving certificate available for the kubelet" Dec 08 18:10:30 crc kubenswrapper[5116]: I1208 18:10:30.945057 5116 ???:1] 
"http: TLS handshake error from 192.168.126.11:52566: no serving certificate available for the kubelet" Dec 08 18:10:30 crc kubenswrapper[5116]: I1208 18:10:30.958839 5116 ???:1] "http: TLS handshake error from 192.168.126.11:52572: no serving certificate available for the kubelet" Dec 08 18:10:30 crc kubenswrapper[5116]: I1208 18:10:30.989884 5116 ???:1] "http: TLS handshake error from 192.168.126.11:52574: no serving certificate available for the kubelet" Dec 08 18:10:31 crc kubenswrapper[5116]: I1208 18:10:31.171769 5116 ???:1] "http: TLS handshake error from 192.168.126.11:52586: no serving certificate available for the kubelet" Dec 08 18:10:31 crc kubenswrapper[5116]: I1208 18:10:31.176282 5116 ???:1] "http: TLS handshake error from 192.168.126.11:52600: no serving certificate available for the kubelet" Dec 08 18:10:31 crc kubenswrapper[5116]: I1208 18:10:31.197564 5116 ???:1] "http: TLS handshake error from 192.168.126.11:52602: no serving certificate available for the kubelet" Dec 08 18:10:31 crc kubenswrapper[5116]: I1208 18:10:31.378658 5116 ???:1] "http: TLS handshake error from 192.168.126.11:52616: no serving certificate available for the kubelet" Dec 08 18:10:31 crc kubenswrapper[5116]: I1208 18:10:31.521119 5116 ???:1] "http: TLS handshake error from 192.168.126.11:52624: no serving certificate available for the kubelet" Dec 08 18:10:31 crc kubenswrapper[5116]: I1208 18:10:31.531944 5116 ???:1] "http: TLS handshake error from 192.168.126.11:52628: no serving certificate available for the kubelet" Dec 08 18:10:31 crc kubenswrapper[5116]: I1208 18:10:31.554939 5116 ???:1] "http: TLS handshake error from 192.168.126.11:52638: no serving certificate available for the kubelet" Dec 08 18:10:31 crc kubenswrapper[5116]: I1208 18:10:31.731724 5116 ???:1] "http: TLS handshake error from 192.168.126.11:52650: no serving certificate available for the kubelet" Dec 08 18:10:31 crc kubenswrapper[5116]: I1208 18:10:31.739155 5116 ???:1] "http: TLS handshake error 
from 192.168.126.11:52652: no serving certificate available for the kubelet" Dec 08 18:10:31 crc kubenswrapper[5116]: I1208 18:10:31.750215 5116 ???:1] "http: TLS handshake error from 192.168.126.11:52664: no serving certificate available for the kubelet" Dec 08 18:10:31 crc kubenswrapper[5116]: I1208 18:10:31.912999 5116 ???:1] "http: TLS handshake error from 192.168.126.11:52678: no serving certificate available for the kubelet" Dec 08 18:10:32 crc kubenswrapper[5116]: I1208 18:10:32.069908 5116 ???:1] "http: TLS handshake error from 192.168.126.11:52690: no serving certificate available for the kubelet" Dec 08 18:10:32 crc kubenswrapper[5116]: I1208 18:10:32.087350 5116 ???:1] "http: TLS handshake error from 192.168.126.11:52702: no serving certificate available for the kubelet" Dec 08 18:10:32 crc kubenswrapper[5116]: I1208 18:10:32.110235 5116 ???:1] "http: TLS handshake error from 192.168.126.11:52706: no serving certificate available for the kubelet" Dec 08 18:10:32 crc kubenswrapper[5116]: I1208 18:10:32.258936 5116 ???:1] "http: TLS handshake error from 192.168.126.11:52714: no serving certificate available for the kubelet" Dec 08 18:10:32 crc kubenswrapper[5116]: I1208 18:10:32.264627 5116 ???:1] "http: TLS handshake error from 192.168.126.11:52722: no serving certificate available for the kubelet" Dec 08 18:10:32 crc kubenswrapper[5116]: I1208 18:10:32.296027 5116 ???:1] "http: TLS handshake error from 192.168.126.11:52730: no serving certificate available for the kubelet" Dec 08 18:10:32 crc kubenswrapper[5116]: I1208 18:10:32.299061 5116 ???:1] "http: TLS handshake error from 192.168.126.11:52738: no serving certificate available for the kubelet" Dec 08 18:10:32 crc kubenswrapper[5116]: I1208 18:10:32.464353 5116 ???:1] "http: TLS handshake error from 192.168.126.11:52740: no serving certificate available for the kubelet" Dec 08 18:10:32 crc kubenswrapper[5116]: I1208 18:10:32.607894 5116 ???:1] "http: TLS handshake error from 192.168.126.11:59756: no 
serving certificate available for the kubelet" Dec 08 18:10:32 crc kubenswrapper[5116]: I1208 18:10:32.609456 5116 ???:1] "http: TLS handshake error from 192.168.126.11:59772: no serving certificate available for the kubelet" Dec 08 18:10:32 crc kubenswrapper[5116]: I1208 18:10:32.630909 5116 ???:1] "http: TLS handshake error from 192.168.126.11:59780: no serving certificate available for the kubelet" Dec 08 18:10:32 crc kubenswrapper[5116]: I1208 18:10:32.797438 5116 ???:1] "http: TLS handshake error from 192.168.126.11:59788: no serving certificate available for the kubelet" Dec 08 18:10:32 crc kubenswrapper[5116]: I1208 18:10:32.805867 5116 ???:1] "http: TLS handshake error from 192.168.126.11:59804: no serving certificate available for the kubelet" Dec 08 18:10:32 crc kubenswrapper[5116]: I1208 18:10:32.819788 5116 ???:1] "http: TLS handshake error from 192.168.126.11:59812: no serving certificate available for the kubelet" Dec 08 18:10:33 crc kubenswrapper[5116]: E1208 18:10:33.680410 5116 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-vjj7h" podUID="496bc08d-961a-4732-b289-095c721f23ca" Dec 08 18:10:37 crc kubenswrapper[5116]: E1208 18:10:37.681072 5116 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-fxj49" podUID="bb524cfa-b4aa-49e1-bd03-83dd9676a58c" Dec 08 18:10:39 crc kubenswrapper[5116]: I1208 18:10:39.680708 5116 scope.go:117] "RemoveContainer" containerID="7359167b795a41ab380b781498dc80dbd062d704d3960c14f5b4c6518f6abe53" Dec 08 18:10:39 crc kubenswrapper[5116]: E1208 18:10:39.681510 5116 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-frh5r_openshift-machine-config-operator(f2e88345-fa91-4bb3-bd9d-a89a8293bffe)\"" pod="openshift-machine-config-operator/machine-config-daemon-frh5r" podUID="f2e88345-fa91-4bb3-bd9d-a89a8293bffe" Dec 08 18:10:44 crc kubenswrapper[5116]: I1208 18:10:44.374859 5116 ???:1] "http: TLS handshake error from 192.168.126.11:57572: no serving certificate available for the kubelet" Dec 08 18:10:44 crc kubenswrapper[5116]: I1208 18:10:44.530897 5116 ???:1] "http: TLS handshake error from 192.168.126.11:57580: no serving certificate available for the kubelet" Dec 08 18:10:44 crc kubenswrapper[5116]: I1208 
18:10:44.600471 5116 ???:1] "http: TLS handshake error from 192.168.126.11:57592: no serving certificate available for the kubelet" Dec 08 18:10:44 crc kubenswrapper[5116]: I1208 18:10:44.741129 5116 ???:1] "http: TLS handshake error from 192.168.126.11:57600: no serving certificate available for the kubelet" Dec 08 18:10:44 crc kubenswrapper[5116]: I1208 18:10:44.815174 5116 ???:1] "http: TLS handshake error from 192.168.126.11:57602: no serving certificate available for the kubelet" Dec 08 18:10:46 crc kubenswrapper[5116]: E1208 18:10:46.680570 5116 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-vjj7h" podUID="496bc08d-961a-4732-b289-095c721f23ca" Dec 08 18:10:51 crc kubenswrapper[5116]: E1208 18:10:51.680507 5116 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source 
docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-fxj49" podUID="bb524cfa-b4aa-49e1-bd03-83dd9676a58c" Dec 08 18:10:54 crc kubenswrapper[5116]: I1208 18:10:54.684882 5116 scope.go:117] "RemoveContainer" containerID="7359167b795a41ab380b781498dc80dbd062d704d3960c14f5b4c6518f6abe53" Dec 08 18:10:54 crc kubenswrapper[5116]: E1208 18:10:54.685378 5116 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-frh5r_openshift-machine-config-operator(f2e88345-fa91-4bb3-bd9d-a89a8293bffe)\"" pod="openshift-machine-config-operator/machine-config-daemon-frh5r" podUID="f2e88345-fa91-4bb3-bd9d-a89a8293bffe" Dec 08 18:11:01 crc kubenswrapper[5116]: E1208 18:11:01.680681 5116 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest 
in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-vjj7h" podUID="496bc08d-961a-4732-b289-095c721f23ca"
Dec 08 18:11:05 crc kubenswrapper[5116]: E1208 18:11:05.680643 5116 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-fxj49" podUID="bb524cfa-b4aa-49e1-bd03-83dd9676a58c"
Dec 08 18:11:08 crc kubenswrapper[5116]: I1208 18:11:08.681196 5116 scope.go:117] "RemoveContainer" containerID="7359167b795a41ab380b781498dc80dbd062d704d3960c14f5b4c6518f6abe53"
Dec 08 18:11:08 crc kubenswrapper[5116]: E1208 18:11:08.682276 5116 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-frh5r_openshift-machine-config-operator(f2e88345-fa91-4bb3-bd9d-a89a8293bffe)\"" pod="openshift-machine-config-operator/machine-config-daemon-frh5r" podUID="f2e88345-fa91-4bb3-bd9d-a89a8293bffe"
Dec 08 18:11:12 crc kubenswrapper[5116]: E1208 18:11:12.682279 5116 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-vjj7h" podUID="496bc08d-961a-4732-b289-095c721f23ca"
Dec 08 18:11:16 crc kubenswrapper[5116]: E1208 18:11:16.682787 5116 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-fxj49" podUID="bb524cfa-b4aa-49e1-bd03-83dd9676a58c"
Dec 08 18:11:19 crc kubenswrapper[5116]: I1208 18:11:19.681290 5116 scope.go:117] "RemoveContainer" containerID="7359167b795a41ab380b781498dc80dbd062d704d3960c14f5b4c6518f6abe53"
Dec 08 18:11:19 crc kubenswrapper[5116]: E1208 18:11:19.681910 5116 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-frh5r_openshift-machine-config-operator(f2e88345-fa91-4bb3-bd9d-a89a8293bffe)\"" pod="openshift-machine-config-operator/machine-config-daemon-frh5r" podUID="f2e88345-fa91-4bb3-bd9d-a89a8293bffe"
Dec 08 18:11:25 crc kubenswrapper[5116]: E1208 18:11:25.680638 5116 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-vjj7h" podUID="496bc08d-961a-4732-b289-095c721f23ca"
Dec 08 18:11:26 crc kubenswrapper[5116]: I1208 18:11:26.135809 5116 generic.go:358] "Generic (PLEG): container finished" podID="a98c2218-70d0-471b-9dda-5c2d2175b9a8" containerID="90909cff8aa54751d9e2c50ca8865450f704de4a2f4ee828760da026054eea50" exitCode=0
Dec 08 18:11:26 crc kubenswrapper[5116]: I1208 18:11:26.135882 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-lj6rs/must-gather-922fc" event={"ID":"a98c2218-70d0-471b-9dda-5c2d2175b9a8","Type":"ContainerDied","Data":"90909cff8aa54751d9e2c50ca8865450f704de4a2f4ee828760da026054eea50"}
Dec 08 18:11:26 crc kubenswrapper[5116]: I1208 18:11:26.137295 5116 scope.go:117] "RemoveContainer" containerID="90909cff8aa54751d9e2c50ca8865450f704de4a2f4ee828760da026054eea50"
Dec 08 18:11:28 crc kubenswrapper[5116]: E1208 18:11:28.681412 5116 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-fxj49" podUID="bb524cfa-b4aa-49e1-bd03-83dd9676a58c"
Dec 08 18:11:32 crc kubenswrapper[5116]: I1208 18:11:32.681520 5116 scope.go:117] "RemoveContainer" containerID="7359167b795a41ab380b781498dc80dbd062d704d3960c14f5b4c6518f6abe53"
Dec 08 18:11:32 crc kubenswrapper[5116]: E1208 18:11:32.682000 5116 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-frh5r_openshift-machine-config-operator(f2e88345-fa91-4bb3-bd9d-a89a8293bffe)\"" pod="openshift-machine-config-operator/machine-config-daemon-frh5r" podUID="f2e88345-fa91-4bb3-bd9d-a89a8293bffe"
Dec 08 18:11:33 crc kubenswrapper[5116]: I1208 18:11:33.081444 5116 ???:1] "http: TLS handshake error from 192.168.126.11:39128: no serving certificate available for the kubelet"
Dec 08 18:11:33 crc kubenswrapper[5116]: I1208 18:11:33.269664 5116 ???:1] "http: TLS handshake error from 192.168.126.11:39132: no serving certificate available for the kubelet"
Dec 08 18:11:33 crc kubenswrapper[5116]: I1208 18:11:33.280580 5116 ???:1] "http: TLS handshake error from 192.168.126.11:39138: no serving certificate available for the kubelet"
Dec 08 18:11:33 crc kubenswrapper[5116]: I1208 18:11:33.307086 5116 ???:1] "http: TLS handshake error from 192.168.126.11:39154: no serving certificate available for the kubelet"
Dec 08 18:11:33 crc kubenswrapper[5116]: I1208 18:11:33.316787 5116 ???:1] "http: TLS handshake error from 192.168.126.11:39158: no serving certificate available for the kubelet"
Dec 08 18:11:33 crc kubenswrapper[5116]: I1208 18:11:33.330483 5116 ???:1] "http: TLS handshake error from 192.168.126.11:39174: no serving certificate available for the kubelet"
Dec 08 18:11:33 crc kubenswrapper[5116]: I1208 18:11:33.340720 5116 ???:1] "http: TLS handshake error from 192.168.126.11:39188: no serving certificate available for the kubelet"
Dec 08 18:11:33 crc kubenswrapper[5116]: I1208 18:11:33.353887 5116 ???:1] "http: TLS handshake error from 192.168.126.11:39194: no serving certificate available for the kubelet"
Dec 08 18:11:33 crc kubenswrapper[5116]: I1208 18:11:33.363526 5116 ???:1] "http: TLS handshake error from 192.168.126.11:39196: no serving certificate available for the kubelet"
Dec 08 18:11:33 crc kubenswrapper[5116]: I1208 18:11:33.497674 5116 ???:1] "http: TLS handshake error from 192.168.126.11:39198: no serving certificate available for the kubelet"
Dec 08 18:11:33 crc kubenswrapper[5116]: I1208 18:11:33.508764 5116 ???:1] "http: TLS handshake error from 192.168.126.11:39200: no serving certificate available for the kubelet"
Dec 08 18:11:33 crc kubenswrapper[5116]: I1208 18:11:33.529666 5116 ???:1] "http: TLS handshake error from 192.168.126.11:39202: no serving certificate available for the kubelet"
Dec 08 18:11:33 crc kubenswrapper[5116]: I1208 18:11:33.540810 5116 ???:1] "http: TLS handshake error from 192.168.126.11:39218: no serving certificate available for the kubelet"
Dec 08 18:11:33 crc kubenswrapper[5116]: I1208 18:11:33.553787 5116 ???:1] "http: TLS handshake error from 192.168.126.11:39230: no serving certificate available for the kubelet"
Dec 08 18:11:33 crc kubenswrapper[5116]: I1208 18:11:33.563340 5116 ???:1] "http: TLS handshake error from 192.168.126.11:39242: no serving certificate available for the kubelet"
Dec 08 18:11:33 crc kubenswrapper[5116]: I1208 18:11:33.575363 5116 ???:1] "http: TLS handshake error from 192.168.126.11:39256: no serving certificate available for the kubelet"
Dec 08 18:11:33 crc kubenswrapper[5116]: I1208 18:11:33.586428 5116 ???:1] "http: TLS handshake error from 192.168.126.11:39260: no serving certificate available for the kubelet"
Dec 08 18:11:38 crc kubenswrapper[5116]: I1208 18:11:38.621416 5116 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-lj6rs/must-gather-922fc"]
Dec 08 18:11:38 crc kubenswrapper[5116]: I1208 18:11:38.622986 5116 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-must-gather-lj6rs/must-gather-922fc" podUID="a98c2218-70d0-471b-9dda-5c2d2175b9a8" containerName="copy" containerID="cri-o://9593d56470373839933a426891f4ada885e6b3e8cac76e18bd70bbd6242934bc" gracePeriod=2
Dec 08 18:11:38 crc kubenswrapper[5116]: I1208 18:11:38.624711 5116 status_manager.go:895] "Failed to get status for pod" podUID="a98c2218-70d0-471b-9dda-5c2d2175b9a8" pod="openshift-must-gather-lj6rs/must-gather-922fc" err="pods \"must-gather-922fc\" is forbidden: User \"system:node:crc\" cannot get resource \"pods\" in API group \"\" in the namespace \"openshift-must-gather-lj6rs\": no relationship found between node 'crc' and this object"
Dec 08 18:11:38 crc kubenswrapper[5116]: I1208 18:11:38.628572 5116 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-lj6rs/must-gather-922fc"]
Dec 08 18:11:38 crc kubenswrapper[5116]: E1208 18:11:38.680725 5116 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-vjj7h" podUID="496bc08d-961a-4732-b289-095c721f23ca"
Dec 08 18:11:39 crc kubenswrapper[5116]: I1208 18:11:39.043764 5116 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-lj6rs_must-gather-922fc_a98c2218-70d0-471b-9dda-5c2d2175b9a8/copy/0.log"
Dec 08 18:11:39 crc kubenswrapper[5116]: I1208 18:11:39.044790 5116 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-lj6rs/must-gather-922fc"
Dec 08 18:11:39 crc kubenswrapper[5116]: I1208 18:11:39.100805 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/a98c2218-70d0-471b-9dda-5c2d2175b9a8-must-gather-output\") pod \"a98c2218-70d0-471b-9dda-5c2d2175b9a8\" (UID: \"a98c2218-70d0-471b-9dda-5c2d2175b9a8\") "
Dec 08 18:11:39 crc kubenswrapper[5116]: I1208 18:11:39.100857 5116 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tm2dw\" (UniqueName: \"kubernetes.io/projected/a98c2218-70d0-471b-9dda-5c2d2175b9a8-kube-api-access-tm2dw\") pod \"a98c2218-70d0-471b-9dda-5c2d2175b9a8\" (UID: \"a98c2218-70d0-471b-9dda-5c2d2175b9a8\") "
Dec 08 18:11:39 crc kubenswrapper[5116]: I1208 18:11:39.108375 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a98c2218-70d0-471b-9dda-5c2d2175b9a8-kube-api-access-tm2dw" (OuterVolumeSpecName: "kube-api-access-tm2dw") pod "a98c2218-70d0-471b-9dda-5c2d2175b9a8" (UID: "a98c2218-70d0-471b-9dda-5c2d2175b9a8"). InnerVolumeSpecName "kube-api-access-tm2dw". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Dec 08 18:11:39 crc kubenswrapper[5116]: I1208 18:11:39.145656 5116 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a98c2218-70d0-471b-9dda-5c2d2175b9a8-must-gather-output" (OuterVolumeSpecName: "must-gather-output") pod "a98c2218-70d0-471b-9dda-5c2d2175b9a8" (UID: "a98c2218-70d0-471b-9dda-5c2d2175b9a8"). InnerVolumeSpecName "must-gather-output". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Dec 08 18:11:39 crc kubenswrapper[5116]: I1208 18:11:39.202542 5116 reconciler_common.go:299] "Volume detached for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/a98c2218-70d0-471b-9dda-5c2d2175b9a8-must-gather-output\") on node \"crc\" DevicePath \"\""
Dec 08 18:11:39 crc kubenswrapper[5116]: I1208 18:11:39.202579 5116 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-tm2dw\" (UniqueName: \"kubernetes.io/projected/a98c2218-70d0-471b-9dda-5c2d2175b9a8-kube-api-access-tm2dw\") on node \"crc\" DevicePath \"\""
Dec 08 18:11:39 crc kubenswrapper[5116]: I1208 18:11:39.240563 5116 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-lj6rs_must-gather-922fc_a98c2218-70d0-471b-9dda-5c2d2175b9a8/copy/0.log"
Dec 08 18:11:39 crc kubenswrapper[5116]: I1208 18:11:39.241049 5116 generic.go:358] "Generic (PLEG): container finished" podID="a98c2218-70d0-471b-9dda-5c2d2175b9a8" containerID="9593d56470373839933a426891f4ada885e6b3e8cac76e18bd70bbd6242934bc" exitCode=143
Dec 08 18:11:39 crc kubenswrapper[5116]: I1208 18:11:39.241117 5116 scope.go:117] "RemoveContainer" containerID="9593d56470373839933a426891f4ada885e6b3e8cac76e18bd70bbd6242934bc"
Dec 08 18:11:39 crc kubenswrapper[5116]: I1208 18:11:39.241140 5116 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-lj6rs/must-gather-922fc"
Dec 08 18:11:39 crc kubenswrapper[5116]: I1208 18:11:39.263380 5116 scope.go:117] "RemoveContainer" containerID="90909cff8aa54751d9e2c50ca8865450f704de4a2f4ee828760da026054eea50"
Dec 08 18:11:39 crc kubenswrapper[5116]: I1208 18:11:39.377171 5116 scope.go:117] "RemoveContainer" containerID="9593d56470373839933a426891f4ada885e6b3e8cac76e18bd70bbd6242934bc"
Dec 08 18:11:39 crc kubenswrapper[5116]: E1208 18:11:39.383534 5116 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9593d56470373839933a426891f4ada885e6b3e8cac76e18bd70bbd6242934bc\": container with ID starting with 9593d56470373839933a426891f4ada885e6b3e8cac76e18bd70bbd6242934bc not found: ID does not exist" containerID="9593d56470373839933a426891f4ada885e6b3e8cac76e18bd70bbd6242934bc"
Dec 08 18:11:39 crc kubenswrapper[5116]: I1208 18:11:39.383582 5116 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9593d56470373839933a426891f4ada885e6b3e8cac76e18bd70bbd6242934bc"} err="failed to get container status \"9593d56470373839933a426891f4ada885e6b3e8cac76e18bd70bbd6242934bc\": rpc error: code = NotFound desc = could not find container \"9593d56470373839933a426891f4ada885e6b3e8cac76e18bd70bbd6242934bc\": container with ID starting with 9593d56470373839933a426891f4ada885e6b3e8cac76e18bd70bbd6242934bc not found: ID does not exist"
Dec 08 18:11:39 crc kubenswrapper[5116]: I1208 18:11:39.383617 5116 scope.go:117] "RemoveContainer" containerID="90909cff8aa54751d9e2c50ca8865450f704de4a2f4ee828760da026054eea50"
Dec 08 18:11:39 crc kubenswrapper[5116]: E1208 18:11:39.392378 5116 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"90909cff8aa54751d9e2c50ca8865450f704de4a2f4ee828760da026054eea50\": container with ID starting with 90909cff8aa54751d9e2c50ca8865450f704de4a2f4ee828760da026054eea50 not found: ID does not exist" containerID="90909cff8aa54751d9e2c50ca8865450f704de4a2f4ee828760da026054eea50"
Dec 08 18:11:39 crc kubenswrapper[5116]: I1208 18:11:39.392429 5116 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"90909cff8aa54751d9e2c50ca8865450f704de4a2f4ee828760da026054eea50"} err="failed to get container status \"90909cff8aa54751d9e2c50ca8865450f704de4a2f4ee828760da026054eea50\": rpc error: code = NotFound desc = could not find container \"90909cff8aa54751d9e2c50ca8865450f704de4a2f4ee828760da026054eea50\": container with ID starting with 90909cff8aa54751d9e2c50ca8865450f704de4a2f4ee828760da026054eea50 not found: ID does not exist"
Dec 08 18:11:40 crc kubenswrapper[5116]: I1208 18:11:40.689773 5116 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a98c2218-70d0-471b-9dda-5c2d2175b9a8" path="/var/lib/kubelet/pods/a98c2218-70d0-471b-9dda-5c2d2175b9a8/volumes"
Dec 08 18:11:43 crc kubenswrapper[5116]: E1208 18:11:43.680111 5116 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-fxj49" podUID="bb524cfa-b4aa-49e1-bd03-83dd9676a58c"
Dec 08 18:11:44 crc kubenswrapper[5116]: I1208 18:11:44.680422 5116 scope.go:117] "RemoveContainer" containerID="7359167b795a41ab380b781498dc80dbd062d704d3960c14f5b4c6518f6abe53"
Dec 08 18:11:44 crc kubenswrapper[5116]: E1208 18:11:44.681061 5116 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-frh5r_openshift-machine-config-operator(f2e88345-fa91-4bb3-bd9d-a89a8293bffe)\"" pod="openshift-machine-config-operator/machine-config-daemon-frh5r" podUID="f2e88345-fa91-4bb3-bd9d-a89a8293bffe"
Dec 08 18:11:50 crc kubenswrapper[5116]: E1208 18:11:50.680820 5116 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-vjj7h" podUID="496bc08d-961a-4732-b289-095c721f23ca"
Dec 08 18:11:55 crc kubenswrapper[5116]: I1208 18:11:55.680366 5116 scope.go:117] "RemoveContainer" containerID="7359167b795a41ab380b781498dc80dbd062d704d3960c14f5b4c6518f6abe53"
Dec 08 18:11:55 crc kubenswrapper[5116]: E1208 18:11:55.680974 5116 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-fxj49" podUID="bb524cfa-b4aa-49e1-bd03-83dd9676a58c"
Dec 08 18:11:55 crc kubenswrapper[5116]: E1208 18:11:55.681200 5116 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-frh5r_openshift-machine-config-operator(f2e88345-fa91-4bb3-bd9d-a89a8293bffe)\"" pod="openshift-machine-config-operator/machine-config-daemon-frh5r" podUID="f2e88345-fa91-4bb3-bd9d-a89a8293bffe"
Dec 08 18:12:05 crc kubenswrapper[5116]: E1208 18:12:05.681072 5116 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-vjj7h" podUID="496bc08d-961a-4732-b289-095c721f23ca"
Dec 08 18:12:09 crc kubenswrapper[5116]: E1208 18:12:09.681955 5116 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-fxj49" podUID="bb524cfa-b4aa-49e1-bd03-83dd9676a58c"
Dec 08 18:12:10 crc kubenswrapper[5116]: I1208 18:12:10.690289 5116 scope.go:117] "RemoveContainer" containerID="7359167b795a41ab380b781498dc80dbd062d704d3960c14f5b4c6518f6abe53"
Dec 08 18:12:10 crc kubenswrapper[5116]: E1208 18:12:10.690717 5116 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-frh5r_openshift-machine-config-operator(f2e88345-fa91-4bb3-bd9d-a89a8293bffe)\"" pod="openshift-machine-config-operator/machine-config-daemon-frh5r" podUID="f2e88345-fa91-4bb3-bd9d-a89a8293bffe"
Dec 08 18:12:11 crc kubenswrapper[5116]: I1208 18:12:11.260344 5116 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-controller-manager_controller-manager-766495d899-4wfjn_1e2dc1a4-c295-4f2b-b167-fdb40f1e6b0d/controller-manager/1.log"
Dec 08 18:12:11 crc kubenswrapper[5116]: I1208 18:12:11.262045 5116 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-controller-manager_controller-manager-766495d899-4wfjn_1e2dc1a4-c295-4f2b-b167-fdb40f1e6b0d/controller-manager/1.log"
Dec 08 18:12:11 crc kubenswrapper[5116]: I1208 18:12:11.279091 5116 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-8wqqf_84b46b92-c78c-44c8-a27b-4a20c47acd75/kube-multus/0.log"
Dec 08 18:12:11 crc kubenswrapper[5116]: I1208 18:12:11.279706 5116 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-8wqqf_84b46b92-c78c-44c8-a27b-4a20c47acd75/kube-multus/0.log"
Dec 08 18:12:11 crc kubenswrapper[5116]: I1208 18:12:11.287658 5116 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_9f0bc7fcb0822a2c13eb2d22cd8c0641/kube-controller-manager/0.log"
Dec 08 18:12:11 crc kubenswrapper[5116]: I1208 18:12:11.289093 5116 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_9f0bc7fcb0822a2c13eb2d22cd8c0641/kube-controller-manager/0.log"
Dec 08 18:12:20 crc kubenswrapper[5116]: E1208 18:12:20.686654 5116 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-vjj7h" podUID="496bc08d-961a-4732-b289-095c721f23ca"
Dec 08 18:12:21 crc kubenswrapper[5116]: I1208 18:12:21.680867 5116 scope.go:117] "RemoveContainer" containerID="7359167b795a41ab380b781498dc80dbd062d704d3960c14f5b4c6518f6abe53"
Dec 08 18:12:21 crc kubenswrapper[5116]: E1208 18:12:21.681090 5116 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-frh5r_openshift-machine-config-operator(f2e88345-fa91-4bb3-bd9d-a89a8293bffe)\"" pod="openshift-machine-config-operator/machine-config-daemon-frh5r" podUID="f2e88345-fa91-4bb3-bd9d-a89a8293bffe"
Dec 08 18:12:22 crc kubenswrapper[5116]: E1208 18:12:22.690760 5116 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-fxj49" podUID="bb524cfa-b4aa-49e1-bd03-83dd9676a58c"
Dec 08 18:12:32 crc kubenswrapper[5116]: E1208 18:12:32.680842 5116 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-vjj7h" podUID="496bc08d-961a-4732-b289-095c721f23ca"
Dec 08 18:12:33 crc kubenswrapper[5116]: E1208 18:12:33.681300 5116 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-fxj49" podUID="bb524cfa-b4aa-49e1-bd03-83dd9676a58c"
Dec 08 18:12:36 crc kubenswrapper[5116]: I1208 18:12:36.680227 5116 scope.go:117] "RemoveContainer" containerID="7359167b795a41ab380b781498dc80dbd062d704d3960c14f5b4c6518f6abe53"
Dec 08 18:12:36 crc kubenswrapper[5116]: E1208 18:12:36.680998 5116 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-frh5r_openshift-machine-config-operator(f2e88345-fa91-4bb3-bd9d-a89a8293bffe)\"" pod="openshift-machine-config-operator/machine-config-daemon-frh5r" podUID="f2e88345-fa91-4bb3-bd9d-a89a8293bffe"
Dec 08 18:12:46 crc kubenswrapper[5116]: E1208 18:12:46.680915 5116 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-vjj7h" podUID="496bc08d-961a-4732-b289-095c721f23ca"
Dec 08 18:12:46 crc kubenswrapper[5116]: E1208 18:12:46.681191 5116 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-fxj49" podUID="bb524cfa-b4aa-49e1-bd03-83dd9676a58c"
Dec 08 18:12:48 crc kubenswrapper[5116]: I1208 18:12:48.681172 5116 scope.go:117] "RemoveContainer" containerID="7359167b795a41ab380b781498dc80dbd062d704d3960c14f5b4c6518f6abe53"
Dec 08 18:12:48 crc kubenswrapper[5116]: E1208 18:12:48.681714 5116 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-frh5r_openshift-machine-config-operator(f2e88345-fa91-4bb3-bd9d-a89a8293bffe)\"" pod="openshift-machine-config-operator/machine-config-daemon-frh5r" podUID="f2e88345-fa91-4bb3-bd9d-a89a8293bffe"
Dec 08 18:12:57 crc kubenswrapper[5116]: E1208 18:12:57.680422 5116 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-vjj7h" podUID="496bc08d-961a-4732-b289-095c721f23ca"
Dec 08 18:12:58 crc kubenswrapper[5116]: E1208 18:12:58.681352 5116 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-fxj49" podUID="bb524cfa-b4aa-49e1-bd03-83dd9676a58c"
Dec 08 18:13:02 crc kubenswrapper[5116]: I1208 18:13:02.680683 5116 scope.go:117] "RemoveContainer" containerID="7359167b795a41ab380b781498dc80dbd062d704d3960c14f5b4c6518f6abe53"
Dec 08 18:13:02 crc kubenswrapper[5116]: E1208 18:13:02.681292 5116 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon
pod=machine-config-daemon-frh5r_openshift-machine-config-operator(f2e88345-fa91-4bb3-bd9d-a89a8293bffe)\"" pod="openshift-machine-config-operator/machine-config-daemon-frh5r" podUID="f2e88345-fa91-4bb3-bd9d-a89a8293bffe" Dec 08 18:13:10 crc kubenswrapper[5116]: E1208 18:13:10.692399 5116 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-vjj7h" podUID="496bc08d-961a-4732-b289-095c721f23ca" Dec 08 18:13:12 crc kubenswrapper[5116]: E1208 18:13:12.682849 5116 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in 
image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-fxj49" podUID="bb524cfa-b4aa-49e1-bd03-83dd9676a58c" Dec 08 18:13:16 crc kubenswrapper[5116]: I1208 18:13:16.681313 5116 scope.go:117] "RemoveContainer" containerID="7359167b795a41ab380b781498dc80dbd062d704d3960c14f5b4c6518f6abe53" Dec 08 18:13:16 crc kubenswrapper[5116]: E1208 18:13:16.681796 5116 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-frh5r_openshift-machine-config-operator(f2e88345-fa91-4bb3-bd9d-a89a8293bffe)\"" pod="openshift-machine-config-operator/machine-config-daemon-frh5r" podUID="f2e88345-fa91-4bb3-bd9d-a89a8293bffe" Dec 08 18:13:22 crc kubenswrapper[5116]: E1208 18:13:22.685127 5116 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-vjj7h" podUID="496bc08d-961a-4732-b289-095c721f23ca" Dec 08 18:13:27 crc kubenswrapper[5116]: E1208 18:13:27.681340 5116 pod_workers.go:1301] "Error syncing pod, skipping" 
err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-fxj49" podUID="bb524cfa-b4aa-49e1-bd03-83dd9676a58c" Dec 08 18:13:29 crc kubenswrapper[5116]: I1208 18:13:29.687625 5116 scope.go:117] "RemoveContainer" containerID="7359167b795a41ab380b781498dc80dbd062d704d3960c14f5b4c6518f6abe53" Dec 08 18:13:29 crc kubenswrapper[5116]: E1208 18:13:29.689158 5116 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-frh5r_openshift-machine-config-operator(f2e88345-fa91-4bb3-bd9d-a89a8293bffe)\"" pod="openshift-machine-config-operator/machine-config-daemon-frh5r" podUID="f2e88345-fa91-4bb3-bd9d-a89a8293bffe" Dec 08 18:13:37 crc kubenswrapper[5116]: E1208 18:13:37.681809 5116 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source 
docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-vjj7h" podUID="496bc08d-961a-4732-b289-095c721f23ca" Dec 08 18:13:40 crc kubenswrapper[5116]: I1208 18:13:40.688755 5116 scope.go:117] "RemoveContainer" containerID="7359167b795a41ab380b781498dc80dbd062d704d3960c14f5b4c6518f6abe53" Dec 08 18:13:41 crc kubenswrapper[5116]: I1208 18:13:41.103957 5116 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-frh5r" event={"ID":"f2e88345-fa91-4bb3-bd9d-a89a8293bffe","Type":"ContainerStarted","Data":"b2471f3f62d6bcde7566db2b236477c821660bdfa85a29f26844ca4b3fbc1845"} Dec 08 18:13:41 crc kubenswrapper[5116]: E1208 18:13:41.733232 5116 certificate_manager.go:613] "Certificate request was not signed" err="timed out waiting for the condition" logger="kubernetes.io/kubelet-serving.UnhandledError" Dec 08 18:13:41 crc kubenswrapper[5116]: E1208 18:13:41.752629 5116 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown" 
image="image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest" Dec 08 18:13:41 crc kubenswrapper[5116]: E1208 18:13:41.752871 5116 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:registry-server,Image:image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:grpc,HostPort:0,ContainerPort:50051,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-jws5h,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[grpc_health_probe -addr=:50051],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[grpc_health_probe 
-addr=:50051],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:*false,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[grpc_health_probe -addr=:50051],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:10,TerminationGracePeriodSeconds:nil,},ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod infrawatch-operators-fxj49_service-telemetry(bb524cfa-b4aa-49e1-bd03-83dd9676a58c): ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown" logger="UnhandledError" Dec 08 18:13:41 crc kubenswrapper[5116]: E1208 18:13:41.754078 5116 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ErrImagePull: \"unable to pull image or OCI artifact: pull image err: initializing source 
docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-fxj49" podUID="bb524cfa-b4aa-49e1-bd03-83dd9676a58c"