Dec 12 15:19:31 crc systemd[1]: Starting Kubernetes Kubelet...
Dec 12 15:19:31 crc kubenswrapper[5123]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Dec 12 15:19:31 crc kubenswrapper[5123]: Flag --minimum-container-ttl-duration has been deprecated, Use --eviction-hard or --eviction-soft instead. Will be removed in a future version.
Dec 12 15:19:31 crc kubenswrapper[5123]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Dec 12 15:19:31 crc kubenswrapper[5123]: Flag --register-with-taints has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Dec 12 15:19:31 crc kubenswrapper[5123]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
Dec 12 15:19:31 crc kubenswrapper[5123]: Flag --system-reserved has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
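Every deprecation warning above points at the same fix: move the flag into the KubeletConfiguration file passed via --config (here /etc/kubernetes/kubelet.conf, per the FLAG dump further down). A minimal sketch of the equivalents; the field names follow the upstream KubeletConfiguration v1beta1 API, but the concrete values below are illustrative, not taken from this node:

```yaml
# Hypothetical KubeletConfiguration equivalents of the deprecated flags.
kind: KubeletConfiguration
apiVersion: kubelet.config.k8s.io/v1beta1
containerRuntimeEndpoint: "unix:///var/run/crio/crio.sock"   # replaces --container-runtime-endpoint
volumePluginDir: "/etc/kubernetes/kubelet-plugins/volume/exec" # replaces --volume-plugin-dir (example path)
registerWithTaints:                                            # replaces --register-with-taints
  - key: "node-role.kubernetes.io/master"
    effect: "NoSchedule"
systemReserved:                                                # replaces --system-reserved
  cpu: "500m"
  memory: "1Gi"
```

--minimum-container-ttl-duration has no config-file equivalent; the warning says to use the evictionHard/evictionSoft settings instead.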
Dec 12 15:19:31 crc kubenswrapper[5123]: I1212 15:19:31.421119 5123 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Dec 12 15:19:31 crc kubenswrapper[5123]: W1212 15:19:31.424678 5123 feature_gate.go:328] unrecognized feature gate: OpenShiftPodSecurityAdmission
Dec 12 15:19:31 crc kubenswrapper[5123]: W1212 15:19:31.424710 5123 feature_gate.go:328] unrecognized feature gate: VSphereHostVMGroupZonal
Dec 12 15:19:31 crc kubenswrapper[5123]: W1212 15:19:31.424718 5123 feature_gate.go:328] unrecognized feature gate: BuildCSIVolumes
Dec 12 15:19:31 crc kubenswrapper[5123]: W1212 15:19:31.424724 5123 feature_gate.go:328] unrecognized feature gate: AWSDedicatedHosts
Dec 12 15:19:31 crc kubenswrapper[5123]: W1212 15:19:31.424729 5123 feature_gate.go:328] unrecognized feature gate: GatewayAPIController
Dec 12 15:19:31 crc kubenswrapper[5123]: W1212 15:19:31.424733 5123 feature_gate.go:328] unrecognized feature gate: NetworkLiveMigration
Dec 12 15:19:31 crc kubenswrapper[5123]: W1212 15:19:31.424737 5123 feature_gate.go:328] unrecognized feature gate: AzureDedicatedHosts
Dec 12 15:19:31 crc kubenswrapper[5123]: W1212 15:19:31.424741 5123 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesAzure
Dec 12 15:19:31 crc kubenswrapper[5123]: W1212 15:19:31.424746 5123 feature_gate.go:328] unrecognized feature gate: MixedCPUsAllocation
Dec 12 15:19:31 crc kubenswrapper[5123]: W1212 15:19:31.424751 5123 feature_gate.go:328] unrecognized feature gate: AWSClusterHostedDNSInstall
Dec 12 15:19:31 crc kubenswrapper[5123]: W1212 15:19:31.424755 5123 feature_gate.go:328] unrecognized feature gate: NewOLMPreflightPermissionChecks
Dec 12 15:19:31 crc kubenswrapper[5123]: W1212 15:19:31.424760 5123 feature_gate.go:328] unrecognized feature gate: AzureWorkloadIdentity
Dec 12 15:19:31 crc kubenswrapper[5123]: W1212 15:19:31.424764 5123 feature_gate.go:328] unrecognized feature gate: ClusterAPIInstall
Dec 12 15:19:31 crc kubenswrapper[5123]: W1212 15:19:31.424769 5123 feature_gate.go:328] unrecognized feature gate: ClusterVersionOperatorConfiguration
Dec 12 15:19:31 crc kubenswrapper[5123]: W1212 15:19:31.424773 5123 feature_gate.go:328] unrecognized feature gate: EtcdBackendQuota
Dec 12 15:19:31 crc kubenswrapper[5123]: W1212 15:19:31.424777 5123 feature_gate.go:328] unrecognized feature gate: AdminNetworkPolicy
Dec 12 15:19:31 crc kubenswrapper[5123]: W1212 15:19:31.424782 5123 feature_gate.go:328] unrecognized feature gate: NewOLM
Dec 12 15:19:31 crc kubenswrapper[5123]: W1212 15:19:31.424786 5123 feature_gate.go:328] unrecognized feature gate: InsightsConfigAPI
Dec 12 15:19:31 crc kubenswrapper[5123]: W1212 15:19:31.424791 5123 feature_gate.go:328] unrecognized feature gate: ExternalSnapshotMetadata
Dec 12 15:19:31 crc kubenswrapper[5123]: W1212 15:19:31.424795 5123 feature_gate.go:328] unrecognized feature gate: HighlyAvailableArbiter
Dec 12 15:19:31 crc kubenswrapper[5123]: W1212 15:19:31.424799 5123 feature_gate.go:328] unrecognized feature gate: InsightsConfig
Dec 12 15:19:31 crc kubenswrapper[5123]: W1212 15:19:31.424803 5123 feature_gate.go:328] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Dec 12 15:19:31 crc kubenswrapper[5123]: W1212 15:19:31.424808 5123 feature_gate.go:328] unrecognized feature gate: RouteAdvertisements
Dec 12 15:19:31 crc kubenswrapper[5123]: W1212 15:19:31.424812 5123 feature_gate.go:328] unrecognized feature gate: AzureClusterHostedDNSInstall
Dec 12 15:19:31 crc kubenswrapper[5123]: W1212 15:19:31.424816 5123 feature_gate.go:328] unrecognized feature gate: DyanmicServiceEndpointIBMCloud
Dec 12 15:19:31 crc kubenswrapper[5123]: W1212 15:19:31.424820 5123 feature_gate.go:328] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Dec 12 15:19:31 crc kubenswrapper[5123]: W1212 15:19:31.424824 5123 feature_gate.go:328] unrecognized feature gate: IrreconcilableMachineConfig
Dec 12 15:19:31 crc kubenswrapper[5123]: W1212 15:19:31.424828 5123 feature_gate.go:328] unrecognized feature gate: GCPCustomAPIEndpoints
Dec 12 15:19:31 crc kubenswrapper[5123]: W1212 15:19:31.424832 5123 feature_gate.go:328] unrecognized feature gate: MetricsCollectionProfiles
Dec 12 15:19:31 crc kubenswrapper[5123]: W1212 15:19:31.424848 5123 feature_gate.go:328] unrecognized feature gate: NetworkSegmentation
Dec 12 15:19:31 crc kubenswrapper[5123]: W1212 15:19:31.424861 5123 feature_gate.go:328] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Dec 12 15:19:31 crc kubenswrapper[5123]: W1212 15:19:31.424867 5123 feature_gate.go:349] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Dec 12 15:19:31 crc kubenswrapper[5123]: W1212 15:19:31.424880 5123 feature_gate.go:328] unrecognized feature gate: ManagedBootImages
Dec 12 15:19:31 crc kubenswrapper[5123]: W1212 15:19:31.424885 5123 feature_gate.go:328] unrecognized feature gate: VSphereConfigurableMaxAllowedBlockVolumesPerNode
Dec 12 15:19:31 crc kubenswrapper[5123]: W1212 15:19:31.424898 5123 feature_gate.go:328] unrecognized feature gate: ClusterMonitoringConfig
Dec 12 15:19:31 crc kubenswrapper[5123]: W1212 15:19:31.424902 5123 feature_gate.go:328] unrecognized feature gate: AzureMultiDisk
Dec 12 15:19:31 crc kubenswrapper[5123]: W1212 15:19:31.424906 5123 feature_gate.go:328] unrecognized feature gate: SigstoreImageVerification
Dec 12 15:19:31 crc kubenswrapper[5123]: W1212 15:19:31.424911 5123 feature_gate.go:328] unrecognized feature gate: VSphereMultiDisk
Dec 12 15:19:31 crc kubenswrapper[5123]: W1212 15:19:31.424915 5123 feature_gate.go:328] unrecognized feature gate: MachineConfigNodes
Dec 12 15:19:31 crc kubenswrapper[5123]: W1212 15:19:31.424919 5123 feature_gate.go:328] unrecognized feature gate: VSphereMultiNetworks
Dec 12 15:19:31 crc kubenswrapper[5123]: W1212 15:19:31.424923 5123 feature_gate.go:328] unrecognized feature gate: MachineAPIMigration
Dec 12 15:19:31 crc kubenswrapper[5123]: W1212 15:19:31.424927 5123 feature_gate.go:328] unrecognized feature gate: InsightsOnDemandDataGather
Dec 12 15:19:31 crc kubenswrapper[5123]: W1212 15:19:31.424931 5123 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesAWS
Dec 12 15:19:31 crc kubenswrapper[5123]: W1212 15:19:31.424937 5123 feature_gate.go:328] unrecognized feature gate: NetworkDiagnosticsConfig
Dec 12 15:19:31 crc kubenswrapper[5123]: W1212 15:19:31.424941 5123 feature_gate.go:328] unrecognized feature gate: SigstoreImageVerificationPKI
Dec 12 15:19:31 crc kubenswrapper[5123]: W1212 15:19:31.424946 5123 feature_gate.go:328] unrecognized feature gate: NoRegistryClusterOperations
Dec 12 15:19:31 crc kubenswrapper[5123]: W1212 15:19:31.424952 5123 feature_gate.go:328] unrecognized feature gate: NewOLMOwnSingleNamespace
Dec 12 15:19:31 crc kubenswrapper[5123]: W1212 15:19:31.424956 5123 feature_gate.go:328] unrecognized feature gate: PreconfiguredUDNAddresses
Dec 12 15:19:31 crc kubenswrapper[5123]: W1212 15:19:31.424960 5123 feature_gate.go:328] unrecognized feature gate: BootcNodeManagement
Dec 12 15:19:31 crc kubenswrapper[5123]: W1212 15:19:31.424965 5123 feature_gate.go:328] unrecognized feature gate: VSphereMixedNodeEnv
Dec 12 15:19:31 crc kubenswrapper[5123]: W1212 15:19:31.424970 5123 feature_gate.go:328] unrecognized feature gate: Example
Dec 12 15:19:31 crc kubenswrapper[5123]: W1212 15:19:31.424974 5123 feature_gate.go:328] unrecognized feature gate: BootImageSkewEnforcement
Dec 12 15:19:31 crc kubenswrapper[5123]: W1212 15:19:31.424979 5123 feature_gate.go:328] unrecognized feature gate: NewOLMCatalogdAPIV1Metas
Dec 12 15:19:31 crc kubenswrapper[5123]: W1212 15:19:31.424983 5123 feature_gate.go:328] unrecognized feature gate: ImageModeStatusReporting
Dec 12 15:19:31 crc kubenswrapper[5123]: W1212 15:19:31.424987 5123 feature_gate.go:328] unrecognized feature gate: PinnedImages
Dec 12 15:19:31 crc kubenswrapper[5123]: W1212 15:19:31.424991 5123 feature_gate.go:328] unrecognized feature gate: Example2
Dec 12 15:19:31 crc kubenswrapper[5123]: W1212 15:19:31.424995 5123 feature_gate.go:328] unrecognized feature gate: AlibabaPlatform
Dec 12 15:19:31 crc kubenswrapper[5123]: W1212 15:19:31.425000 5123 feature_gate.go:328] unrecognized feature gate: SignatureStores
Dec 12 15:19:31 crc kubenswrapper[5123]: W1212 15:19:31.425005 5123 feature_gate.go:328] unrecognized feature gate: GCPClusterHostedDNSInstall
Dec 12 15:19:31 crc kubenswrapper[5123]: W1212 15:19:31.425010 5123 feature_gate.go:328] unrecognized feature gate: CPMSMachineNamePrefix
Dec 12 15:19:31 crc kubenswrapper[5123]: W1212 15:19:31.425014 5123 feature_gate.go:328] unrecognized feature gate: UpgradeStatus
Dec 12 15:19:31 crc kubenswrapper[5123]: W1212 15:19:31.425018 5123 feature_gate.go:328] unrecognized feature gate: MultiArchInstallAzure
Dec 12 15:19:31 crc kubenswrapper[5123]: W1212 15:19:31.425023 5123 feature_gate.go:328] unrecognized feature gate: ShortCertRotation
Dec 12 15:19:31 crc kubenswrapper[5123]: W1212 15:19:31.425027 5123 feature_gate.go:328] unrecognized feature gate: GatewayAPI
Dec 12 15:19:31 crc kubenswrapper[5123]: W1212 15:19:31.425031 5123 feature_gate.go:328] unrecognized feature gate: KMSEncryptionProvider
Dec 12 15:19:31 crc kubenswrapper[5123]: W1212 15:19:31.425035 5123 feature_gate.go:328] unrecognized feature gate: NewOLMWebhookProviderOpenshiftServiceCA
Dec 12 15:19:31 crc kubenswrapper[5123]: W1212 15:19:31.425040 5123 feature_gate.go:328] unrecognized feature gate: AWSClusterHostedDNS
Dec 12 15:19:31 crc kubenswrapper[5123]: W1212 15:19:31.425044 5123 feature_gate.go:328] unrecognized feature gate: NutanixMultiSubnets
Dec 12 15:19:31 crc kubenswrapper[5123]: W1212 15:19:31.425048 5123 feature_gate.go:328] unrecognized feature gate: DualReplica
Dec 12 15:19:31 crc kubenswrapper[5123]: W1212 15:19:31.425053 5123 feature_gate.go:328] unrecognized feature gate: ExternalOIDC
Dec 12 15:19:31 crc kubenswrapper[5123]: W1212 15:19:31.425057 5123 feature_gate.go:328] unrecognized feature gate: GCPClusterHostedDNS
Dec 12 15:19:31 crc kubenswrapper[5123]: W1212 15:19:31.425061 5123 feature_gate.go:328] unrecognized feature gate: SetEIPForNLBIngressController
Dec 12 15:19:31 crc kubenswrapper[5123]: W1212 15:19:31.425066 5123 feature_gate.go:328] unrecognized feature gate: VolumeGroupSnapshot
Dec 12 15:19:31 crc kubenswrapper[5123]: W1212 15:19:31.425070 5123 feature_gate.go:328] unrecognized feature gate: AWSServiceLBNetworkSecurityGroup
Dec 12 15:19:31 crc kubenswrapper[5123]: W1212 15:19:31.425089 5123 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesvSphere
Dec 12 15:19:31 crc kubenswrapper[5123]: W1212 15:19:31.425093 5123 feature_gate.go:328] unrecognized feature gate: AutomatedEtcdBackup
Dec 12 15:19:31 crc kubenswrapper[5123]: W1212 15:19:31.425098 5123 feature_gate.go:328] unrecognized feature gate: DNSNameResolver
Dec 12 15:19:31 crc kubenswrapper[5123]: W1212 15:19:31.425116 5123 feature_gate.go:328] unrecognized feature gate: ClusterAPIInstallIBMCloud
Dec 12 15:19:31 crc kubenswrapper[5123]: W1212 15:19:31.425121 5123 feature_gate.go:328] unrecognized feature gate: MultiDiskSetup
Dec 12 15:19:31 crc kubenswrapper[5123]: W1212 15:19:31.425125 5123 feature_gate.go:328] unrecognized feature gate: GCPCustomAPIEndpointsInstall
Dec 12 15:19:31 crc kubenswrapper[5123]: W1212 15:19:31.425132 5123 feature_gate.go:351] Setting GA feature gate ServiceAccountTokenNodeBinding=true. It will be removed in a future release.
Dec 12 15:19:31 crc kubenswrapper[5123]: W1212 15:19:31.425137 5123 feature_gate.go:328] unrecognized feature gate: IngressControllerLBSubnetsAWS
Dec 12 15:19:31 crc kubenswrapper[5123]: W1212 15:19:31.425142 5123 feature_gate.go:328] unrecognized feature gate: AdditionalRoutingCapabilities
Dec 12 15:19:31 crc kubenswrapper[5123]: W1212 15:19:31.425147 5123 feature_gate.go:328] unrecognized feature gate: ImageStreamImportMode
Dec 12 15:19:31 crc kubenswrapper[5123]: W1212 15:19:31.425151 5123 feature_gate.go:328] unrecognized feature gate: OVNObservability
Dec 12 15:19:31 crc kubenswrapper[5123]: W1212 15:19:31.425155 5123 feature_gate.go:328] unrecognized feature gate: ExternalOIDCWithUIDAndExtraClaimMappings
Dec 12 15:19:31 crc kubenswrapper[5123]: W1212 15:19:31.425937 5123 feature_gate.go:328] unrecognized feature gate: NetworkLiveMigration
Dec 12 15:19:31 crc kubenswrapper[5123]: W1212 15:19:31.425950 5123 feature_gate.go:328] unrecognized feature gate: AzureMultiDisk
Dec 12 15:19:31 crc kubenswrapper[5123]: W1212 15:19:31.425955 5123 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesAzure
Dec 12 15:19:31 crc kubenswrapper[5123]: W1212 15:19:31.425961 5123 feature_gate.go:328] unrecognized feature gate: VSphereMultiNetworks
Dec 12 15:19:31 crc kubenswrapper[5123]: W1212 15:19:31.425967 5123 feature_gate.go:328] unrecognized feature gate: UpgradeStatus
Dec 12 15:19:31 crc kubenswrapper[5123]: W1212 15:19:31.425971 5123 feature_gate.go:328] unrecognized feature gate: ExternalSnapshotMetadata
Dec 12 15:19:31 crc kubenswrapper[5123]: W1212 15:19:31.425975 5123 feature_gate.go:328] unrecognized feature gate: NetworkDiagnosticsConfig
Dec 12 15:19:31 crc kubenswrapper[5123]: W1212 15:19:31.425980 5123 feature_gate.go:328] unrecognized feature gate: ClusterAPIInstallIBMCloud
Dec 12 15:19:31 crc kubenswrapper[5123]: W1212 15:19:31.425985 5123 feature_gate.go:328] unrecognized feature gate: VSphereMixedNodeEnv
Dec 12 15:19:31 crc kubenswrapper[5123]: W1212 15:19:31.425989 5123 feature_gate.go:328] unrecognized feature gate: GatewayAPI
Dec 12 15:19:31 crc kubenswrapper[5123]: W1212 15:19:31.425993 5123 feature_gate.go:328] unrecognized feature gate: NewOLMWebhookProviderOpenshiftServiceCA
Dec 12 15:19:31 crc kubenswrapper[5123]: W1212 15:19:31.425998 5123 feature_gate.go:328] unrecognized feature gate: MachineConfigNodes
Dec 12 15:19:31 crc kubenswrapper[5123]: W1212 15:19:31.426002 5123 feature_gate.go:328] unrecognized feature gate: RouteAdvertisements
Dec 12 15:19:31 crc kubenswrapper[5123]: W1212 15:19:31.426006 5123 feature_gate.go:328] unrecognized feature gate: InsightsOnDemandDataGather
Dec 12 15:19:31 crc kubenswrapper[5123]: W1212 15:19:31.426010 5123 feature_gate.go:328] unrecognized feature gate: DualReplica
Dec 12 15:19:31 crc kubenswrapper[5123]: W1212 15:19:31.426014 5123 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesvSphere
Dec 12 15:19:31 crc kubenswrapper[5123]: W1212 15:19:31.426019 5123 feature_gate.go:328] unrecognized feature gate: AWSDedicatedHosts
Dec 12 15:19:31 crc kubenswrapper[5123]: W1212 15:19:31.426023 5123 feature_gate.go:328] unrecognized feature gate: InsightsConfig
Dec 12 15:19:31 crc kubenswrapper[5123]: W1212 15:19:31.426029 5123 feature_gate.go:349] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Dec 12 15:19:31 crc kubenswrapper[5123]: W1212 15:19:31.426034 5123 feature_gate.go:328] unrecognized feature gate: ShortCertRotation
Dec 12 15:19:31 crc kubenswrapper[5123]: W1212 15:19:31.426053 5123 feature_gate.go:328] unrecognized feature gate: ExternalOIDC
Dec 12 15:19:31 crc kubenswrapper[5123]: W1212 15:19:31.426057 5123 feature_gate.go:328] unrecognized feature gate: VSphereMultiDisk
Dec 12 15:19:31 crc kubenswrapper[5123]: W1212 15:19:31.426062 5123 feature_gate.go:328] unrecognized feature gate: MultiArchInstallAzure
Dec 12 15:19:31 crc kubenswrapper[5123]: W1212 15:19:31.426066 5123 feature_gate.go:328] unrecognized feature gate: BootcNodeManagement
Dec 12 15:19:31 crc kubenswrapper[5123]: W1212 15:19:31.426071 5123 feature_gate.go:328] unrecognized feature gate: NewOLM
Dec 12 15:19:31 crc kubenswrapper[5123]: W1212 15:19:31.426135 5123 feature_gate.go:328] unrecognized feature gate: AdditionalRoutingCapabilities
Dec 12 15:19:31 crc kubenswrapper[5123]: W1212 15:19:31.426140 5123 feature_gate.go:328] unrecognized feature gate: KMSEncryptionProvider
Dec 12 15:19:31 crc kubenswrapper[5123]: W1212 15:19:31.426145 5123 feature_gate.go:328] unrecognized feature gate: HighlyAvailableArbiter
Dec 12 15:19:31 crc kubenswrapper[5123]: W1212 15:19:31.426149 5123 feature_gate.go:328] unrecognized feature gate: GCPClusterHostedDNS
Dec 12 15:19:31 crc kubenswrapper[5123]: W1212 15:19:31.426154 5123 feature_gate.go:328] unrecognized feature gate: SigstoreImageVerification
Dec 12 15:19:31 crc kubenswrapper[5123]: W1212 15:19:31.426158 5123 feature_gate.go:328] unrecognized feature gate: DNSNameResolver
Dec 12 15:19:31 crc kubenswrapper[5123]: W1212 15:19:31.426162 5123 feature_gate.go:328] unrecognized feature gate: CPMSMachineNamePrefix
Dec 12 15:19:31 crc kubenswrapper[5123]: W1212 15:19:31.426166 5123 feature_gate.go:328] unrecognized feature gate: PinnedImages
Dec 12 15:19:31 crc kubenswrapper[5123]: W1212 15:19:31.426171 5123 feature_gate.go:328] unrecognized feature gate: VSphereConfigurableMaxAllowedBlockVolumesPerNode
Dec 12 15:19:31 crc kubenswrapper[5123]: W1212 15:19:31.426177 5123 feature_gate.go:351] Setting GA feature gate ServiceAccountTokenNodeBinding=true. It will be removed in a future release.
Dec 12 15:19:31 crc kubenswrapper[5123]: W1212 15:19:31.426182 5123 feature_gate.go:328] unrecognized feature gate: GCPCustomAPIEndpoints
Dec 12 15:19:31 crc kubenswrapper[5123]: W1212 15:19:31.426188 5123 feature_gate.go:328] unrecognized feature gate: MixedCPUsAllocation
Dec 12 15:19:31 crc kubenswrapper[5123]: W1212 15:19:31.426193 5123 feature_gate.go:328] unrecognized feature gate: IngressControllerLBSubnetsAWS
Dec 12 15:19:31 crc kubenswrapper[5123]: W1212 15:19:31.426199 5123 feature_gate.go:328] unrecognized feature gate: NewOLMCatalogdAPIV1Metas
Dec 12 15:19:31 crc kubenswrapper[5123]: W1212 15:19:31.426205 5123 feature_gate.go:328] unrecognized feature gate: PreconfiguredUDNAddresses
Dec 12 15:19:31 crc kubenswrapper[5123]: W1212 15:19:31.426209 5123 feature_gate.go:328] unrecognized feature gate: BuildCSIVolumes
Dec 12 15:19:31 crc kubenswrapper[5123]: W1212 15:19:31.426214 5123 feature_gate.go:328] unrecognized feature gate: AWSClusterHostedDNSInstall
Dec 12 15:19:31 crc kubenswrapper[5123]: W1212 15:19:31.426235 5123 feature_gate.go:328] unrecognized feature gate: DyanmicServiceEndpointIBMCloud
Dec 12 15:19:31 crc kubenswrapper[5123]: W1212 15:19:31.426240 5123 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesAWS
Dec 12 15:19:31 crc kubenswrapper[5123]: W1212 15:19:31.426245 5123 feature_gate.go:328] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Dec 12 15:19:31 crc kubenswrapper[5123]: W1212 15:19:31.426249 5123 feature_gate.go:328] unrecognized feature gate: SignatureStores
Dec 12 15:19:31 crc kubenswrapper[5123]: W1212 15:19:31.426253 5123 feature_gate.go:328] unrecognized feature gate: OVNObservability
Dec 12 15:19:31 crc kubenswrapper[5123]: W1212 15:19:31.426258 5123 feature_gate.go:328] unrecognized feature gate: ClusterMonitoringConfig
Dec 12 15:19:31 crc kubenswrapper[5123]: W1212 15:19:31.426262 5123 feature_gate.go:328] unrecognized feature gate: VSphereHostVMGroupZonal
Dec 12 15:19:31 crc kubenswrapper[5123]: W1212 15:19:31.426266 5123 feature_gate.go:328] unrecognized feature gate: NutanixMultiSubnets
Dec 12 15:19:31 crc kubenswrapper[5123]: W1212 15:19:31.426270 5123 feature_gate.go:328] unrecognized feature gate: ClusterAPIInstall
Dec 12 15:19:31 crc kubenswrapper[5123]: W1212 15:19:31.426275 5123 feature_gate.go:328] unrecognized feature gate: ImageStreamImportMode
Dec 12 15:19:31 crc kubenswrapper[5123]: W1212 15:19:31.426289 5123 feature_gate.go:328] unrecognized feature gate: NoRegistryClusterOperations
Dec 12 15:19:31 crc kubenswrapper[5123]: W1212 15:19:31.426299 5123 feature_gate.go:328] unrecognized feature gate: AutomatedEtcdBackup
Dec 12 15:19:31 crc kubenswrapper[5123]: W1212 15:19:31.426304 5123 feature_gate.go:328] unrecognized feature gate: ClusterVersionOperatorConfiguration
Dec 12 15:19:31 crc kubenswrapper[5123]: W1212 15:19:31.426308 5123 feature_gate.go:328] unrecognized feature gate: ManagedBootImages
Dec 12 15:19:31 crc kubenswrapper[5123]: W1212 15:19:31.426313 5123 feature_gate.go:328] unrecognized feature gate: AzureWorkloadIdentity
Dec 12 15:19:31 crc kubenswrapper[5123]: W1212 15:19:31.426317 5123 feature_gate.go:328] unrecognized feature gate: OpenShiftPodSecurityAdmission
Dec 12 15:19:31 crc kubenswrapper[5123]: W1212 15:19:31.426321 5123 feature_gate.go:328] unrecognized feature gate: AzureClusterHostedDNSInstall
Dec 12 15:19:31 crc kubenswrapper[5123]: W1212 15:19:31.426326 5123 feature_gate.go:328] unrecognized feature gate: SetEIPForNLBIngressController
Dec 12 15:19:31 crc kubenswrapper[5123]: W1212 15:19:31.426330 5123 feature_gate.go:328] unrecognized feature gate: MetricsCollectionProfiles
Dec 12 15:19:31 crc kubenswrapper[5123]: W1212 15:19:31.426334 5123 feature_gate.go:328] unrecognized feature gate: GatewayAPIController
Dec 12 15:19:31 crc kubenswrapper[5123]: W1212 15:19:31.426338 5123 feature_gate.go:328] unrecognized feature gate: SigstoreImageVerificationPKI
Dec 12 15:19:31 crc kubenswrapper[5123]: W1212 15:19:31.426343 5123 feature_gate.go:328] unrecognized feature gate: AzureDedicatedHosts
Dec 12 15:19:31 crc kubenswrapper[5123]: W1212 15:19:31.426347 5123 feature_gate.go:328] unrecognized feature gate: AWSServiceLBNetworkSecurityGroup
Dec 12 15:19:31 crc kubenswrapper[5123]: W1212 15:19:31.426352 5123 feature_gate.go:328] unrecognized feature gate: ExternalOIDCWithUIDAndExtraClaimMappings
Dec 12 15:19:31 crc kubenswrapper[5123]: W1212 15:19:31.426356 5123 feature_gate.go:328] unrecognized feature gate: GCPClusterHostedDNSInstall
Dec 12 15:19:31 crc kubenswrapper[5123]: W1212 15:19:31.426360 5123 feature_gate.go:328] unrecognized feature gate: MultiDiskSetup
Dec 12 15:19:31 crc kubenswrapper[5123]: W1212 15:19:31.426364 5123 feature_gate.go:328] unrecognized feature gate: BootImageSkewEnforcement
Dec 12 15:19:31 crc kubenswrapper[5123]: W1212 15:19:31.426370 5123 feature_gate.go:328] unrecognized feature gate: GCPCustomAPIEndpointsInstall
Dec 12 15:19:31 crc kubenswrapper[5123]: W1212 15:19:31.426376 5123 feature_gate.go:328] unrecognized feature gate: NewOLMOwnSingleNamespace
Dec 12 15:19:31 crc kubenswrapper[5123]: W1212 15:19:31.426388 5123 feature_gate.go:328] unrecognized feature gate: Example2
Dec 12 15:19:31 crc kubenswrapper[5123]: W1212 15:19:31.426392 5123 feature_gate.go:328] unrecognized feature gate: NewOLMPreflightPermissionChecks
Dec 12 15:19:31 crc kubenswrapper[5123]: W1212 15:19:31.426396 5123 feature_gate.go:328] unrecognized feature gate: AlibabaPlatform
Dec 12 15:19:31 crc kubenswrapper[5123]: W1212 15:19:31.426401 5123 feature_gate.go:328] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Dec 12 15:19:31 crc kubenswrapper[5123]: W1212 15:19:31.426405 5123 feature_gate.go:328] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Dec 12 15:19:31 crc kubenswrapper[5123]: W1212 15:19:31.426409 5123 feature_gate.go:328] unrecognized feature gate: AWSClusterHostedDNS
Dec 12 15:19:31 crc kubenswrapper[5123]: W1212 15:19:31.426413 5123 feature_gate.go:328] unrecognized feature gate: ImageModeStatusReporting
Dec 12 15:19:31 crc kubenswrapper[5123]: W1212 15:19:31.426417 5123 feature_gate.go:328] unrecognized feature gate: MachineAPIMigration
Dec 12 15:19:31 crc kubenswrapper[5123]: W1212 15:19:31.426422 5123 feature_gate.go:328] unrecognized feature gate: InsightsConfigAPI
Dec 12 15:19:31 crc kubenswrapper[5123]: W1212 15:19:31.426427 5123 feature_gate.go:328] unrecognized feature gate: Example
Dec 12 15:19:31 crc kubenswrapper[5123]: W1212 15:19:31.426431 5123 feature_gate.go:328] unrecognized feature gate: IrreconcilableMachineConfig
Dec 12 15:19:31 crc kubenswrapper[5123]: W1212 15:19:31.426435 5123 feature_gate.go:328] unrecognized feature gate: VolumeGroupSnapshot
Dec 12 15:19:31 crc kubenswrapper[5123]: W1212 15:19:31.426439 5123 feature_gate.go:328] unrecognized feature gate: NetworkSegmentation
Dec 12 15:19:31 crc kubenswrapper[5123]: W1212 15:19:31.426445 5123 feature_gate.go:328] unrecognized feature gate: EtcdBackendQuota
Dec 12 15:19:31 crc kubenswrapper[5123]: W1212 15:19:31.426449 5123 feature_gate.go:328] unrecognized feature gate: AdminNetworkPolicy
Dec 12 15:19:31 crc kubenswrapper[5123]: I1212 15:19:31.428404 5123 flags.go:64] FLAG: --address="0.0.0.0"
Dec 12 15:19:31 crc kubenswrapper[5123]: I1212 15:19:31.428427 5123 flags.go:64] FLAG: --allowed-unsafe-sysctls="[]"
Dec 12 15:19:31 crc kubenswrapper[5123]: I1212 15:19:31.428439 5123 flags.go:64] FLAG: --anonymous-auth="true"
Dec 12 15:19:31 crc kubenswrapper[5123]: I1212 15:19:31.428447 5123 flags.go:64] FLAG: --application-metrics-count-limit="100"
Dec 12 15:19:31 crc kubenswrapper[5123]: I1212 15:19:31.428457 5123 flags.go:64] FLAG: --authentication-token-webhook="false"
Dec 12 15:19:31 crc kubenswrapper[5123]: I1212 15:19:31.428463 5123 flags.go:64] FLAG: --authentication-token-webhook-cache-ttl="2m0s"
Dec 12 15:19:31 crc kubenswrapper[5123]: I1212 15:19:31.428471 5123 flags.go:64] FLAG: --authorization-mode="AlwaysAllow"
Dec 12 15:19:31 crc kubenswrapper[5123]: I1212 15:19:31.428480 5123 flags.go:64] FLAG: --authorization-webhook-cache-authorized-ttl="5m0s"
Dec 12 15:19:31 crc kubenswrapper[5123]: I1212 15:19:31.428486 5123 flags.go:64] FLAG: --authorization-webhook-cache-unauthorized-ttl="30s"
Dec 12 15:19:31 crc kubenswrapper[5123]: I1212 15:19:31.428491 5123 flags.go:64] FLAG: --boot-id-file="/proc/sys/kernel/random/boot_id"
Dec 12 15:19:31 crc kubenswrapper[5123]: I1212 15:19:31.428499 5123 flags.go:64] FLAG: --bootstrap-kubeconfig="/etc/kubernetes/kubeconfig"
Dec 12 15:19:31 crc kubenswrapper[5123]: I1212 15:19:31.428505 5123 flags.go:64] FLAG: --cert-dir="/var/lib/kubelet/pki"
Dec 12 15:19:31 crc kubenswrapper[5123]: I1212 15:19:31.428511 5123 flags.go:64] FLAG: --cgroup-driver="cgroupfs"
Dec 12 15:19:31 crc kubenswrapper[5123]: I1212 15:19:31.428516 5123 flags.go:64] FLAG: --cgroup-root=""
Dec 12 15:19:31 crc kubenswrapper[5123]: I1212 15:19:31.428522 5123 flags.go:64] FLAG: --cgroups-per-qos="true"
Dec 12 15:19:31 crc kubenswrapper[5123]: I1212 15:19:31.428527 5123 flags.go:64] FLAG: --client-ca-file=""
Dec 12 15:19:31 crc kubenswrapper[5123]: I1212 15:19:31.428535 5123 flags.go:64] FLAG: --cloud-config=""
Dec 12 15:19:31 crc kubenswrapper[5123]: I1212 15:19:31.428541 5123 flags.go:64] FLAG: --cloud-provider=""
Dec 12 15:19:31 crc kubenswrapper[5123]: I1212 15:19:31.428547 5123 flags.go:64] FLAG: --cluster-dns="[]"
Dec 12 15:19:31 crc kubenswrapper[5123]: I1212 15:19:31.428556 5123 flags.go:64] FLAG: --cluster-domain=""
Dec 12 15:19:31 crc kubenswrapper[5123]: I1212 15:19:31.428561 5123 flags.go:64] FLAG: --config="/etc/kubernetes/kubelet.conf"
Dec 12 15:19:31 crc kubenswrapper[5123]: I1212 15:19:31.428567 5123 flags.go:64] FLAG: --config-dir=""
Dec 12 15:19:31 crc kubenswrapper[5123]: I1212 15:19:31.428573 5123 flags.go:64] FLAG: --container-hints="/etc/cadvisor/container_hints.json"
Dec 12 15:19:31 crc kubenswrapper[5123]: I1212 15:19:31.428579 5123 flags.go:64] FLAG: --container-log-max-files="5"
Dec 12 15:19:31 crc kubenswrapper[5123]: I1212 15:19:31.428586 5123 flags.go:64] FLAG: --container-log-max-size="10Mi"
Dec 12 15:19:31 crc kubenswrapper[5123]: I1212 15:19:31.428592 5123 flags.go:64] FLAG: --container-runtime-endpoint="/var/run/crio/crio.sock"
Dec 12 15:19:31 crc kubenswrapper[5123]: I1212 15:19:31.428597 5123 flags.go:64] FLAG: --containerd="/run/containerd/containerd.sock"
Dec 12 15:19:31 crc kubenswrapper[5123]: I1212 15:19:31.428602 5123 flags.go:64] FLAG: --containerd-namespace="k8s.io"
Dec 12 15:19:31 crc kubenswrapper[5123]: I1212 15:19:31.428607 5123 flags.go:64] FLAG: --contention-profiling="false"
Dec 12 15:19:31 crc kubenswrapper[5123]: I1212 15:19:31.428612 5123 flags.go:64] FLAG: --cpu-cfs-quota="true"
Dec 12 15:19:31 crc kubenswrapper[5123]: I1212 15:19:31.428618 5123 flags.go:64] FLAG: --cpu-cfs-quota-period="100ms"
Dec 12 15:19:31 crc kubenswrapper[5123]: I1212 15:19:31.428623 5123 flags.go:64] FLAG: --cpu-manager-policy="none"
Dec 12 15:19:31 crc kubenswrapper[5123]: I1212 15:19:31.428628 5123 flags.go:64] FLAG: --cpu-manager-policy-options=""
Dec 12 15:19:31 crc kubenswrapper[5123]: I1212 15:19:31.428635 5123 flags.go:64] FLAG: --cpu-manager-reconcile-period="10s"
Dec 12 15:19:31 crc kubenswrapper[5123]: I1212 15:19:31.428641 5123 flags.go:64] FLAG: --enable-controller-attach-detach="true"
Dec 12 15:19:31 crc kubenswrapper[5123]: I1212 15:19:31.428646 5123 flags.go:64] FLAG: --enable-debugging-handlers="true"
Dec 12 15:19:31 crc kubenswrapper[5123]: I1212 15:19:31.428650 5123 flags.go:64] FLAG: --enable-load-reader="false"
Dec 12 15:19:31 crc kubenswrapper[5123]: I1212 15:19:31.428655 5123 flags.go:64] FLAG: --enable-server="true"
Dec 12 15:19:31 crc kubenswrapper[5123]: I1212 15:19:31.428660 5123 flags.go:64] FLAG: --enforce-node-allocatable="[pods]"
Dec 12 15:19:31 crc kubenswrapper[5123]: I1212 15:19:31.428667 5123 flags.go:64] FLAG: --event-burst="100"
Dec 12 15:19:31 crc kubenswrapper[5123]: I1212 15:19:31.428672 5123 flags.go:64] FLAG: --event-qps="50"
Dec 12 15:19:31 crc kubenswrapper[5123]: I1212 15:19:31.428677 5123 flags.go:64] FLAG: --event-storage-age-limit="default=0"
Dec 12 15:19:31 crc kubenswrapper[5123]: I1212 15:19:31.428682 5123 flags.go:64] FLAG: --event-storage-event-limit="default=0"
Dec 12 15:19:31 crc kubenswrapper[5123]: I1212 15:19:31.428687 5123 flags.go:64] FLAG: --eviction-hard=""
Dec 12 15:19:31 crc kubenswrapper[5123]: I1212 15:19:31.428694 5123 flags.go:64] FLAG: --eviction-max-pod-grace-period="0"
Dec 12 15:19:31 crc kubenswrapper[5123]: I1212 15:19:31.428699 5123 flags.go:64] FLAG: --eviction-minimum-reclaim=""
Dec 12 15:19:31 crc kubenswrapper[5123]: I1212 15:19:31.428704 5123 flags.go:64] FLAG: --eviction-pressure-transition-period="5m0s"
Dec 12 15:19:31 crc kubenswrapper[5123]: I1212 15:19:31.428709 5123 flags.go:64] FLAG: --eviction-soft=""
Dec 12 15:19:31 crc kubenswrapper[5123]: I1212 15:19:31.428714 5123 flags.go:64] FLAG: --eviction-soft-grace-period=""
Dec 12 15:19:31 crc kubenswrapper[5123]: I1212 15:19:31.428721 5123 flags.go:64] FLAG: --exit-on-lock-contention="false"
Dec 12 15:19:31 crc kubenswrapper[5123]: I1212 15:19:31.428727 5123 flags.go:64] FLAG: --experimental-allocatable-ignore-eviction="false"
Dec 12 15:19:31 crc kubenswrapper[5123]: I1212 15:19:31.428732 5123 flags.go:64] FLAG: --experimental-mounter-path=""
Dec 12 15:19:31 crc kubenswrapper[5123]: I1212 15:19:31.428736 5123 flags.go:64] FLAG: --fail-cgroupv1="false"
Dec 12 15:19:31 crc kubenswrapper[5123]: I1212 15:19:31.428741 5123 flags.go:64] FLAG: --fail-swap-on="true"
Dec 12 15:19:31 crc kubenswrapper[5123]: I1212 15:19:31.428745 5123 flags.go:64] FLAG: --feature-gates=""
Dec 12 15:19:31 crc kubenswrapper[5123]: I1212 15:19:31.428752 5123 flags.go:64] FLAG: --file-check-frequency="20s"
Dec 12 15:19:31 crc kubenswrapper[5123]: I1212 15:19:31.428759 5123 flags.go:64] FLAG: --global-housekeeping-interval="1m0s"
Dec 12 15:19:31 crc kubenswrapper[5123]: I1212 15:19:31.428765 5123 flags.go:64] FLAG: --hairpin-mode="promiscuous-bridge"
Dec 12 15:19:31 crc kubenswrapper[5123]: I1212 15:19:31.428770 5123 flags.go:64] FLAG: --healthz-bind-address="127.0.0.1"
Dec 12 15:19:31 crc kubenswrapper[5123]: I1212 15:19:31.428775 5123 flags.go:64] FLAG: --healthz-port="10248"
Dec 12 15:19:31 crc kubenswrapper[5123]: I1212 15:19:31.428780 5123 flags.go:64] FLAG: --help="false"
Dec 12 15:19:31 crc kubenswrapper[5123]: I1212 15:19:31.428785 5123 flags.go:64] FLAG: --hostname-override=""
Dec 12 15:19:31 crc kubenswrapper[5123]: I1212 15:19:31.428804 5123 flags.go:64] FLAG: --housekeeping-interval="10s"
Dec 12 15:19:31 crc kubenswrapper[5123]: I1212 15:19:31.428814 5123 flags.go:64] FLAG: --http-check-frequency="20s"
Dec 12 15:19:31 crc kubenswrapper[5123]: I1212 15:19:31.428819 5123 flags.go:64] FLAG: --image-credential-provider-bin-dir=""
Dec 12 15:19:31 crc kubenswrapper[5123]: I1212 15:19:31.428824 5123 flags.go:64] FLAG: --image-credential-provider-config=""
Dec 12 15:19:31 crc kubenswrapper[5123]: I1212 15:19:31.428829 5123 flags.go:64] FLAG: --image-gc-high-threshold="85"
Dec 12 15:19:31 crc kubenswrapper[5123]: I1212 15:19:31.428834 5123 flags.go:64] FLAG: --image-gc-low-threshold="80"
Dec 12 15:19:31 crc kubenswrapper[5123]: I1212 15:19:31.428839 5123 flags.go:64] FLAG: --image-service-endpoint=""
Dec 12 15:19:31 crc kubenswrapper[5123]: I1212 15:19:31.428844 5123 flags.go:64] FLAG: --kernel-memcg-notification="false"
Dec 12 15:19:31 crc kubenswrapper[5123]: I1212 15:19:31.428849 5123 flags.go:64] FLAG: --kube-api-burst="100"
Dec 12 15:19:31 crc kubenswrapper[5123]: I1212 15:19:31.428855 5123 flags.go:64] FLAG:
--kube-api-content-type="application/vnd.kubernetes.protobuf" Dec 12 15:19:31 crc kubenswrapper[5123]: I1212 15:19:31.428860 5123 flags.go:64] FLAG: --kube-api-qps="50" Dec 12 15:19:31 crc kubenswrapper[5123]: I1212 15:19:31.428865 5123 flags.go:64] FLAG: --kube-reserved="" Dec 12 15:19:31 crc kubenswrapper[5123]: I1212 15:19:31.428870 5123 flags.go:64] FLAG: --kube-reserved-cgroup="" Dec 12 15:19:31 crc kubenswrapper[5123]: I1212 15:19:31.428875 5123 flags.go:64] FLAG: --kubeconfig="/var/lib/kubelet/kubeconfig" Dec 12 15:19:31 crc kubenswrapper[5123]: I1212 15:19:31.428880 5123 flags.go:64] FLAG: --kubelet-cgroups="" Dec 12 15:19:31 crc kubenswrapper[5123]: I1212 15:19:31.428886 5123 flags.go:64] FLAG: --local-storage-capacity-isolation="true" Dec 12 15:19:31 crc kubenswrapper[5123]: I1212 15:19:31.428891 5123 flags.go:64] FLAG: --lock-file="" Dec 12 15:19:31 crc kubenswrapper[5123]: I1212 15:19:31.428896 5123 flags.go:64] FLAG: --log-cadvisor-usage="false" Dec 12 15:19:31 crc kubenswrapper[5123]: I1212 15:19:31.428901 5123 flags.go:64] FLAG: --log-flush-frequency="5s" Dec 12 15:19:31 crc kubenswrapper[5123]: I1212 15:19:31.428906 5123 flags.go:64] FLAG: --log-json-info-buffer-size="0" Dec 12 15:19:31 crc kubenswrapper[5123]: I1212 15:19:31.428922 5123 flags.go:64] FLAG: --log-json-split-stream="false" Dec 12 15:19:31 crc kubenswrapper[5123]: I1212 15:19:31.428927 5123 flags.go:64] FLAG: --log-text-info-buffer-size="0" Dec 12 15:19:31 crc kubenswrapper[5123]: I1212 15:19:31.428932 5123 flags.go:64] FLAG: --log-text-split-stream="false" Dec 12 15:19:31 crc kubenswrapper[5123]: I1212 15:19:31.428937 5123 flags.go:64] FLAG: --logging-format="text" Dec 12 15:19:31 crc kubenswrapper[5123]: I1212 15:19:31.428942 5123 flags.go:64] FLAG: --machine-id-file="/etc/machine-id,/var/lib/dbus/machine-id" Dec 12 15:19:31 crc kubenswrapper[5123]: I1212 15:19:31.428949 5123 flags.go:64] FLAG: --make-iptables-util-chains="true" Dec 12 15:19:31 crc kubenswrapper[5123]: I1212 
15:19:31.428953 5123 flags.go:64] FLAG: --manifest-url="" Dec 12 15:19:31 crc kubenswrapper[5123]: I1212 15:19:31.428958 5123 flags.go:64] FLAG: --manifest-url-header="" Dec 12 15:19:31 crc kubenswrapper[5123]: I1212 15:19:31.428967 5123 flags.go:64] FLAG: --max-housekeeping-interval="15s" Dec 12 15:19:31 crc kubenswrapper[5123]: I1212 15:19:31.428972 5123 flags.go:64] FLAG: --max-open-files="1000000" Dec 12 15:19:31 crc kubenswrapper[5123]: I1212 15:19:31.428980 5123 flags.go:64] FLAG: --max-pods="110" Dec 12 15:19:31 crc kubenswrapper[5123]: I1212 15:19:31.428986 5123 flags.go:64] FLAG: --maximum-dead-containers="-1" Dec 12 15:19:31 crc kubenswrapper[5123]: I1212 15:19:31.428991 5123 flags.go:64] FLAG: --maximum-dead-containers-per-container="1" Dec 12 15:19:31 crc kubenswrapper[5123]: I1212 15:19:31.428996 5123 flags.go:64] FLAG: --memory-manager-policy="None" Dec 12 15:19:31 crc kubenswrapper[5123]: I1212 15:19:31.429001 5123 flags.go:64] FLAG: --minimum-container-ttl-duration="6m0s" Dec 12 15:19:31 crc kubenswrapper[5123]: I1212 15:19:31.429007 5123 flags.go:64] FLAG: --minimum-image-ttl-duration="2m0s" Dec 12 15:19:31 crc kubenswrapper[5123]: I1212 15:19:31.429012 5123 flags.go:64] FLAG: --node-ip="192.168.126.11" Dec 12 15:19:31 crc kubenswrapper[5123]: I1212 15:19:31.429018 5123 flags.go:64] FLAG: --node-labels="node-role.kubernetes.io/control-plane=,node-role.kubernetes.io/master=,node.openshift.io/os_id=rhel" Dec 12 15:19:31 crc kubenswrapper[5123]: I1212 15:19:31.429033 5123 flags.go:64] FLAG: --node-status-max-images="50" Dec 12 15:19:31 crc kubenswrapper[5123]: I1212 15:19:31.429038 5123 flags.go:64] FLAG: --node-status-update-frequency="10s" Dec 12 15:19:31 crc kubenswrapper[5123]: I1212 15:19:31.429043 5123 flags.go:64] FLAG: --oom-score-adj="-999" Dec 12 15:19:31 crc kubenswrapper[5123]: I1212 15:19:31.429049 5123 flags.go:64] FLAG: --pod-cidr="" Dec 12 15:19:31 crc kubenswrapper[5123]: I1212 15:19:31.429054 5123 flags.go:64] FLAG: 
--pod-infra-container-image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cc2b30e70040205c2536d01ae5c850be1ed2d775cf13249e50328e5085777977" Dec 12 15:19:31 crc kubenswrapper[5123]: I1212 15:19:31.429065 5123 flags.go:64] FLAG: --pod-manifest-path="" Dec 12 15:19:31 crc kubenswrapper[5123]: I1212 15:19:31.429070 5123 flags.go:64] FLAG: --pod-max-pids="-1" Dec 12 15:19:31 crc kubenswrapper[5123]: I1212 15:19:31.429075 5123 flags.go:64] FLAG: --pods-per-core="0" Dec 12 15:19:31 crc kubenswrapper[5123]: I1212 15:19:31.429080 5123 flags.go:64] FLAG: --port="10250" Dec 12 15:19:31 crc kubenswrapper[5123]: I1212 15:19:31.429085 5123 flags.go:64] FLAG: --protect-kernel-defaults="false" Dec 12 15:19:31 crc kubenswrapper[5123]: I1212 15:19:31.429090 5123 flags.go:64] FLAG: --provider-id="" Dec 12 15:19:31 crc kubenswrapper[5123]: I1212 15:19:31.429096 5123 flags.go:64] FLAG: --qos-reserved="" Dec 12 15:19:31 crc kubenswrapper[5123]: I1212 15:19:31.429101 5123 flags.go:64] FLAG: --read-only-port="10255" Dec 12 15:19:31 crc kubenswrapper[5123]: I1212 15:19:31.429106 5123 flags.go:64] FLAG: --register-node="true" Dec 12 15:19:31 crc kubenswrapper[5123]: I1212 15:19:31.429111 5123 flags.go:64] FLAG: --register-schedulable="true" Dec 12 15:19:31 crc kubenswrapper[5123]: I1212 15:19:31.429117 5123 flags.go:64] FLAG: --register-with-taints="node-role.kubernetes.io/master=:NoSchedule" Dec 12 15:19:31 crc kubenswrapper[5123]: I1212 15:19:31.429128 5123 flags.go:64] FLAG: --registry-burst="10" Dec 12 15:19:31 crc kubenswrapper[5123]: I1212 15:19:31.429133 5123 flags.go:64] FLAG: --registry-qps="5" Dec 12 15:19:31 crc kubenswrapper[5123]: I1212 15:19:31.429138 5123 flags.go:64] FLAG: --reserved-cpus="" Dec 12 15:19:31 crc kubenswrapper[5123]: I1212 15:19:31.429143 5123 flags.go:64] FLAG: --reserved-memory="" Dec 12 15:19:31 crc kubenswrapper[5123]: I1212 15:19:31.429149 5123 flags.go:64] FLAG: --resolv-conf="/etc/resolv.conf" Dec 12 15:19:31 crc kubenswrapper[5123]: I1212 
15:19:31.429154 5123 flags.go:64] FLAG: --root-dir="/var/lib/kubelet" Dec 12 15:19:31 crc kubenswrapper[5123]: I1212 15:19:31.429159 5123 flags.go:64] FLAG: --rotate-certificates="false" Dec 12 15:19:31 crc kubenswrapper[5123]: I1212 15:19:31.429165 5123 flags.go:64] FLAG: --rotate-server-certificates="false" Dec 12 15:19:31 crc kubenswrapper[5123]: I1212 15:19:31.429170 5123 flags.go:64] FLAG: --runonce="false" Dec 12 15:19:31 crc kubenswrapper[5123]: I1212 15:19:31.429175 5123 flags.go:64] FLAG: --runtime-cgroups="/system.slice/crio.service" Dec 12 15:19:31 crc kubenswrapper[5123]: I1212 15:19:31.429180 5123 flags.go:64] FLAG: --runtime-request-timeout="2m0s" Dec 12 15:19:31 crc kubenswrapper[5123]: I1212 15:19:31.429186 5123 flags.go:64] FLAG: --seccomp-default="false" Dec 12 15:19:31 crc kubenswrapper[5123]: I1212 15:19:31.429191 5123 flags.go:64] FLAG: --serialize-image-pulls="true" Dec 12 15:19:31 crc kubenswrapper[5123]: I1212 15:19:31.429196 5123 flags.go:64] FLAG: --storage-driver-buffer-duration="1m0s" Dec 12 15:19:31 crc kubenswrapper[5123]: I1212 15:19:31.429201 5123 flags.go:64] FLAG: --storage-driver-db="cadvisor" Dec 12 15:19:31 crc kubenswrapper[5123]: I1212 15:19:31.429207 5123 flags.go:64] FLAG: --storage-driver-host="localhost:8086" Dec 12 15:19:31 crc kubenswrapper[5123]: I1212 15:19:31.429237 5123 flags.go:64] FLAG: --storage-driver-password="root" Dec 12 15:19:31 crc kubenswrapper[5123]: I1212 15:19:31.429244 5123 flags.go:64] FLAG: --storage-driver-secure="false" Dec 12 15:19:31 crc kubenswrapper[5123]: I1212 15:19:31.429249 5123 flags.go:64] FLAG: --storage-driver-table="stats" Dec 12 15:19:31 crc kubenswrapper[5123]: I1212 15:19:31.429254 5123 flags.go:64] FLAG: --storage-driver-user="root" Dec 12 15:19:31 crc kubenswrapper[5123]: I1212 15:19:31.429259 5123 flags.go:64] FLAG: --streaming-connection-idle-timeout="4h0m0s" Dec 12 15:19:31 crc kubenswrapper[5123]: I1212 15:19:31.429265 5123 flags.go:64] FLAG: --sync-frequency="1m0s" Dec 12 
15:19:31 crc kubenswrapper[5123]: I1212 15:19:31.429270 5123 flags.go:64] FLAG: --system-cgroups="" Dec 12 15:19:31 crc kubenswrapper[5123]: I1212 15:19:31.429275 5123 flags.go:64] FLAG: --system-reserved="cpu=200m,ephemeral-storage=350Mi,memory=350Mi" Dec 12 15:19:31 crc kubenswrapper[5123]: I1212 15:19:31.429285 5123 flags.go:64] FLAG: --system-reserved-cgroup="" Dec 12 15:19:31 crc kubenswrapper[5123]: I1212 15:19:31.429290 5123 flags.go:64] FLAG: --tls-cert-file="" Dec 12 15:19:31 crc kubenswrapper[5123]: I1212 15:19:31.429295 5123 flags.go:64] FLAG: --tls-cipher-suites="[]" Dec 12 15:19:31 crc kubenswrapper[5123]: I1212 15:19:31.429326 5123 flags.go:64] FLAG: --tls-min-version="" Dec 12 15:19:31 crc kubenswrapper[5123]: I1212 15:19:31.429332 5123 flags.go:64] FLAG: --tls-private-key-file="" Dec 12 15:19:31 crc kubenswrapper[5123]: I1212 15:19:31.429337 5123 flags.go:64] FLAG: --topology-manager-policy="none" Dec 12 15:19:31 crc kubenswrapper[5123]: I1212 15:19:31.429342 5123 flags.go:64] FLAG: --topology-manager-policy-options="" Dec 12 15:19:31 crc kubenswrapper[5123]: I1212 15:19:31.429348 5123 flags.go:64] FLAG: --topology-manager-scope="container" Dec 12 15:19:31 crc kubenswrapper[5123]: I1212 15:19:31.429354 5123 flags.go:64] FLAG: --v="2" Dec 12 15:19:31 crc kubenswrapper[5123]: I1212 15:19:31.429363 5123 flags.go:64] FLAG: --version="false" Dec 12 15:19:31 crc kubenswrapper[5123]: I1212 15:19:31.429370 5123 flags.go:64] FLAG: --vmodule="" Dec 12 15:19:31 crc kubenswrapper[5123]: I1212 15:19:31.429379 5123 flags.go:64] FLAG: --volume-plugin-dir="/etc/kubernetes/kubelet-plugins/volume/exec" Dec 12 15:19:31 crc kubenswrapper[5123]: I1212 15:19:31.429385 5123 flags.go:64] FLAG: --volume-stats-agg-period="1m0s" Dec 12 15:19:31 crc kubenswrapper[5123]: W1212 15:19:31.429555 5123 feature_gate.go:328] unrecognized feature gate: DNSNameResolver Dec 12 15:19:31 crc kubenswrapper[5123]: W1212 15:19:31.429565 5123 feature_gate.go:328] unrecognized feature gate: 
InsightsConfigAPI Dec 12 15:19:31 crc kubenswrapper[5123]: W1212 15:19:31.429571 5123 feature_gate.go:328] unrecognized feature gate: MetricsCollectionProfiles Dec 12 15:19:31 crc kubenswrapper[5123]: W1212 15:19:31.429577 5123 feature_gate.go:328] unrecognized feature gate: IngressControllerDynamicConfigurationManager Dec 12 15:19:31 crc kubenswrapper[5123]: W1212 15:19:31.429589 5123 feature_gate.go:328] unrecognized feature gate: UpgradeStatus Dec 12 15:19:31 crc kubenswrapper[5123]: W1212 15:19:31.429594 5123 feature_gate.go:328] unrecognized feature gate: InsightsOnDemandDataGather Dec 12 15:19:31 crc kubenswrapper[5123]: W1212 15:19:31.429598 5123 feature_gate.go:328] unrecognized feature gate: NoRegistryClusterOperations Dec 12 15:19:31 crc kubenswrapper[5123]: W1212 15:19:31.429603 5123 feature_gate.go:328] unrecognized feature gate: NetworkSegmentation Dec 12 15:19:31 crc kubenswrapper[5123]: W1212 15:19:31.429608 5123 feature_gate.go:328] unrecognized feature gate: AzureMultiDisk Dec 12 15:19:31 crc kubenswrapper[5123]: W1212 15:19:31.429613 5123 feature_gate.go:328] unrecognized feature gate: GatewayAPIController Dec 12 15:19:31 crc kubenswrapper[5123]: W1212 15:19:31.429617 5123 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesAWS Dec 12 15:19:31 crc kubenswrapper[5123]: W1212 15:19:31.429624 5123 feature_gate.go:328] unrecognized feature gate: GCPCustomAPIEndpointsInstall Dec 12 15:19:31 crc kubenswrapper[5123]: W1212 15:19:31.429629 5123 feature_gate.go:328] unrecognized feature gate: MultiDiskSetup Dec 12 15:19:31 crc kubenswrapper[5123]: W1212 15:19:31.429634 5123 feature_gate.go:328] unrecognized feature gate: VSphereHostVMGroupZonal Dec 12 15:19:31 crc kubenswrapper[5123]: W1212 15:19:31.429638 5123 feature_gate.go:328] unrecognized feature gate: OpenShiftPodSecurityAdmission Dec 12 15:19:31 crc kubenswrapper[5123]: W1212 15:19:31.429643 5123 feature_gate.go:328] unrecognized feature gate: ClusterAPIInstallIBMCloud Dec 12 15:19:31 
crc kubenswrapper[5123]: W1212 15:19:31.429647 5123 feature_gate.go:328] unrecognized feature gate: Example2 Dec 12 15:19:31 crc kubenswrapper[5123]: W1212 15:19:31.429652 5123 feature_gate.go:328] unrecognized feature gate: AlibabaPlatform Dec 12 15:19:31 crc kubenswrapper[5123]: W1212 15:19:31.429656 5123 feature_gate.go:328] unrecognized feature gate: BootcNodeManagement Dec 12 15:19:31 crc kubenswrapper[5123]: W1212 15:19:31.429661 5123 feature_gate.go:328] unrecognized feature gate: SigstoreImageVerification Dec 12 15:19:31 crc kubenswrapper[5123]: W1212 15:19:31.429665 5123 feature_gate.go:328] unrecognized feature gate: IngressControllerLBSubnetsAWS Dec 12 15:19:31 crc kubenswrapper[5123]: W1212 15:19:31.429669 5123 feature_gate.go:328] unrecognized feature gate: AutomatedEtcdBackup Dec 12 15:19:31 crc kubenswrapper[5123]: W1212 15:19:31.429674 5123 feature_gate.go:328] unrecognized feature gate: EtcdBackendQuota Dec 12 15:19:31 crc kubenswrapper[5123]: W1212 15:19:31.429678 5123 feature_gate.go:328] unrecognized feature gate: DualReplica Dec 12 15:19:31 crc kubenswrapper[5123]: W1212 15:19:31.429682 5123 feature_gate.go:328] unrecognized feature gate: VSphereMixedNodeEnv Dec 12 15:19:31 crc kubenswrapper[5123]: W1212 15:19:31.429687 5123 feature_gate.go:328] unrecognized feature gate: MachineAPIMigration Dec 12 15:19:31 crc kubenswrapper[5123]: W1212 15:19:31.429691 5123 feature_gate.go:328] unrecognized feature gate: VSphereMultiNetworks Dec 12 15:19:31 crc kubenswrapper[5123]: W1212 15:19:31.429696 5123 feature_gate.go:328] unrecognized feature gate: ExternalOIDC Dec 12 15:19:31 crc kubenswrapper[5123]: W1212 15:19:31.429701 5123 feature_gate.go:328] unrecognized feature gate: ImageStreamImportMode Dec 12 15:19:31 crc kubenswrapper[5123]: W1212 15:19:31.429706 5123 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesAzure Dec 12 15:19:31 crc kubenswrapper[5123]: W1212 15:19:31.429711 5123 feature_gate.go:328] unrecognized feature gate: 
ConsolePluginContentSecurityPolicy Dec 12 15:19:31 crc kubenswrapper[5123]: W1212 15:19:31.429715 5123 feature_gate.go:328] unrecognized feature gate: GatewayAPI Dec 12 15:19:31 crc kubenswrapper[5123]: W1212 15:19:31.429719 5123 feature_gate.go:328] unrecognized feature gate: GCPCustomAPIEndpoints Dec 12 15:19:31 crc kubenswrapper[5123]: W1212 15:19:31.429724 5123 feature_gate.go:328] unrecognized feature gate: GCPClusterHostedDNS Dec 12 15:19:31 crc kubenswrapper[5123]: W1212 15:19:31.429729 5123 feature_gate.go:328] unrecognized feature gate: AdditionalRoutingCapabilities Dec 12 15:19:31 crc kubenswrapper[5123]: W1212 15:19:31.429733 5123 feature_gate.go:328] unrecognized feature gate: AWSClusterHostedDNS Dec 12 15:19:31 crc kubenswrapper[5123]: W1212 15:19:31.429741 5123 feature_gate.go:328] unrecognized feature gate: IrreconcilableMachineConfig Dec 12 15:19:31 crc kubenswrapper[5123]: W1212 15:19:31.429746 5123 feature_gate.go:328] unrecognized feature gate: BuildCSIVolumes Dec 12 15:19:31 crc kubenswrapper[5123]: W1212 15:19:31.429751 5123 feature_gate.go:328] unrecognized feature gate: RouteAdvertisements Dec 12 15:19:31 crc kubenswrapper[5123]: W1212 15:19:31.429756 5123 feature_gate.go:328] unrecognized feature gate: ShortCertRotation Dec 12 15:19:31 crc kubenswrapper[5123]: W1212 15:19:31.429760 5123 feature_gate.go:328] unrecognized feature gate: AWSClusterHostedDNSInstall Dec 12 15:19:31 crc kubenswrapper[5123]: W1212 15:19:31.429764 5123 feature_gate.go:328] unrecognized feature gate: MultiArchInstallAzure Dec 12 15:19:31 crc kubenswrapper[5123]: W1212 15:19:31.429769 5123 feature_gate.go:328] unrecognized feature gate: SignatureStores Dec 12 15:19:31 crc kubenswrapper[5123]: W1212 15:19:31.429788 5123 feature_gate.go:328] unrecognized feature gate: AWSServiceLBNetworkSecurityGroup Dec 12 15:19:31 crc kubenswrapper[5123]: W1212 15:19:31.429792 5123 feature_gate.go:328] unrecognized feature gate: VSphereConfigurableMaxAllowedBlockVolumesPerNode Dec 12 
15:19:31 crc kubenswrapper[5123]: W1212 15:19:31.429797 5123 feature_gate.go:328] unrecognized feature gate: NewOLMOwnSingleNamespace Dec 12 15:19:31 crc kubenswrapper[5123]: W1212 15:19:31.429801 5123 feature_gate.go:328] unrecognized feature gate: AzureClusterHostedDNSInstall Dec 12 15:19:31 crc kubenswrapper[5123]: W1212 15:19:31.429805 5123 feature_gate.go:328] unrecognized feature gate: AzureWorkloadIdentity Dec 12 15:19:31 crc kubenswrapper[5123]: W1212 15:19:31.429809 5123 feature_gate.go:328] unrecognized feature gate: AdminNetworkPolicy Dec 12 15:19:31 crc kubenswrapper[5123]: W1212 15:19:31.429813 5123 feature_gate.go:328] unrecognized feature gate: NetworkLiveMigration Dec 12 15:19:31 crc kubenswrapper[5123]: W1212 15:19:31.429817 5123 feature_gate.go:328] unrecognized feature gate: VolumeGroupSnapshot Dec 12 15:19:31 crc kubenswrapper[5123]: W1212 15:19:31.429822 5123 feature_gate.go:328] unrecognized feature gate: HighlyAvailableArbiter Dec 12 15:19:31 crc kubenswrapper[5123]: W1212 15:19:31.429826 5123 feature_gate.go:328] unrecognized feature gate: NetworkDiagnosticsConfig Dec 12 15:19:31 crc kubenswrapper[5123]: W1212 15:19:31.429831 5123 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesvSphere Dec 12 15:19:31 crc kubenswrapper[5123]: W1212 15:19:31.429835 5123 feature_gate.go:328] unrecognized feature gate: InsightsConfig Dec 12 15:19:31 crc kubenswrapper[5123]: W1212 15:19:31.429839 5123 feature_gate.go:328] unrecognized feature gate: AzureDedicatedHosts Dec 12 15:19:31 crc kubenswrapper[5123]: W1212 15:19:31.429843 5123 feature_gate.go:328] unrecognized feature gate: SetEIPForNLBIngressController Dec 12 15:19:31 crc kubenswrapper[5123]: W1212 15:19:31.429847 5123 feature_gate.go:328] unrecognized feature gate: KMSEncryptionProvider Dec 12 15:19:31 crc kubenswrapper[5123]: W1212 15:19:31.429851 5123 feature_gate.go:328] unrecognized feature gate: GCPClusterHostedDNSInstall Dec 12 15:19:31 crc kubenswrapper[5123]: W1212 
15:19:31.429856 5123 feature_gate.go:328] unrecognized feature gate: Example Dec 12 15:19:31 crc kubenswrapper[5123]: W1212 15:19:31.429860 5123 feature_gate.go:328] unrecognized feature gate: NewOLM Dec 12 15:19:31 crc kubenswrapper[5123]: W1212 15:19:31.429867 5123 feature_gate.go:328] unrecognized feature gate: PreconfiguredUDNAddresses Dec 12 15:19:31 crc kubenswrapper[5123]: W1212 15:19:31.429871 5123 feature_gate.go:328] unrecognized feature gate: ClusterAPIInstall Dec 12 15:19:31 crc kubenswrapper[5123]: W1212 15:19:31.429876 5123 feature_gate.go:328] unrecognized feature gate: AWSDedicatedHosts Dec 12 15:19:31 crc kubenswrapper[5123]: W1212 15:19:31.429881 5123 feature_gate.go:328] unrecognized feature gate: NewOLMCatalogdAPIV1Metas Dec 12 15:19:31 crc kubenswrapper[5123]: W1212 15:19:31.429885 5123 feature_gate.go:328] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Dec 12 15:19:31 crc kubenswrapper[5123]: W1212 15:19:31.429889 5123 feature_gate.go:328] unrecognized feature gate: ExternalOIDCWithUIDAndExtraClaimMappings Dec 12 15:19:31 crc kubenswrapper[5123]: W1212 15:19:31.429893 5123 feature_gate.go:328] unrecognized feature gate: ExternalSnapshotMetadata Dec 12 15:19:31 crc kubenswrapper[5123]: W1212 15:19:31.429903 5123 feature_gate.go:349] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. 
Dec 12 15:19:31 crc kubenswrapper[5123]: W1212 15:19:31.429911 5123 feature_gate.go:328] unrecognized feature gate: CPMSMachineNamePrefix Dec 12 15:19:31 crc kubenswrapper[5123]: W1212 15:19:31.429921 5123 feature_gate.go:328] unrecognized feature gate: VSphereMultiDisk Dec 12 15:19:31 crc kubenswrapper[5123]: W1212 15:19:31.429929 5123 feature_gate.go:328] unrecognized feature gate: SigstoreImageVerificationPKI Dec 12 15:19:31 crc kubenswrapper[5123]: W1212 15:19:31.429933 5123 feature_gate.go:328] unrecognized feature gate: NewOLMWebhookProviderOpenshiftServiceCA Dec 12 15:19:31 crc kubenswrapper[5123]: W1212 15:19:31.429937 5123 feature_gate.go:328] unrecognized feature gate: MachineConfigNodes Dec 12 15:19:31 crc kubenswrapper[5123]: W1212 15:19:31.429943 5123 feature_gate.go:351] Setting GA feature gate ServiceAccountTokenNodeBinding=true. It will be removed in a future release. Dec 12 15:19:31 crc kubenswrapper[5123]: W1212 15:19:31.429952 5123 feature_gate.go:328] unrecognized feature gate: PinnedImages Dec 12 15:19:31 crc kubenswrapper[5123]: W1212 15:19:31.429957 5123 feature_gate.go:328] unrecognized feature gate: ClusterMonitoringConfig Dec 12 15:19:31 crc kubenswrapper[5123]: W1212 15:19:31.429962 5123 feature_gate.go:328] unrecognized feature gate: ManagedBootImages Dec 12 15:19:31 crc kubenswrapper[5123]: W1212 15:19:31.429966 5123 feature_gate.go:328] unrecognized feature gate: BootImageSkewEnforcement Dec 12 15:19:31 crc kubenswrapper[5123]: W1212 15:19:31.429970 5123 feature_gate.go:328] unrecognized feature gate: NewOLMPreflightPermissionChecks Dec 12 15:19:31 crc kubenswrapper[5123]: W1212 15:19:31.429975 5123 feature_gate.go:328] unrecognized feature gate: OVNObservability Dec 12 15:19:31 crc kubenswrapper[5123]: W1212 15:19:31.429979 5123 feature_gate.go:328] unrecognized feature gate: ImageModeStatusReporting Dec 12 15:19:31 crc kubenswrapper[5123]: W1212 15:19:31.429983 5123 feature_gate.go:328] unrecognized feature gate: MixedCPUsAllocation 
Dec 12 15:19:31 crc kubenswrapper[5123]: W1212 15:19:31.429987 5123 feature_gate.go:328] unrecognized feature gate: DyanmicServiceEndpointIBMCloud Dec 12 15:19:31 crc kubenswrapper[5123]: W1212 15:19:31.429991 5123 feature_gate.go:328] unrecognized feature gate: ClusterVersionOperatorConfiguration Dec 12 15:19:31 crc kubenswrapper[5123]: W1212 15:19:31.429996 5123 feature_gate.go:328] unrecognized feature gate: NutanixMultiSubnets Dec 12 15:19:31 crc kubenswrapper[5123]: I1212 15:19:31.430013 5123 feature_gate.go:384] feature gates: {map[DynamicResourceAllocation:false EventedPLEG:false ImageVolume:true KMSv1:true MaxUnavailableStatefulSet:false MinimumKubeletVersion:false MutatingAdmissionPolicy:false NodeSwap:false ProcMountType:true RouteExternalCertificate:true SELinuxMount:false ServiceAccountTokenNodeBinding:true StoragePerformantSecurityPolicy:true TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:true UserNamespacesSupport:true VolumeAttributesClass:false]} Dec 12 15:19:31 crc kubenswrapper[5123]: I1212 15:19:31.443807 5123 server.go:530] "Kubelet version" kubeletVersion="v1.33.5" Dec 12 15:19:31 crc kubenswrapper[5123]: I1212 15:19:31.443877 5123 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Dec 12 15:19:31 crc kubenswrapper[5123]: W1212 15:19:31.443980 5123 feature_gate.go:328] unrecognized feature gate: BootImageSkewEnforcement Dec 12 15:19:31 crc kubenswrapper[5123]: W1212 15:19:31.443991 5123 feature_gate.go:328] unrecognized feature gate: VolumeGroupSnapshot Dec 12 15:19:31 crc kubenswrapper[5123]: W1212 15:19:31.443996 5123 feature_gate.go:328] unrecognized feature gate: ClusterAPIInstallIBMCloud Dec 12 15:19:31 crc kubenswrapper[5123]: W1212 15:19:31.443999 5123 feature_gate.go:328] unrecognized feature gate: NewOLM Dec 12 15:19:31 crc kubenswrapper[5123]: W1212 15:19:31.444005 5123 feature_gate.go:351] Setting GA feature gate ServiceAccountTokenNodeBinding=true. 
It will be removed in a future release. Dec 12 15:19:31 crc kubenswrapper[5123]: W1212 15:19:31.444013 5123 feature_gate.go:328] unrecognized feature gate: ImageStreamImportMode Dec 12 15:19:31 crc kubenswrapper[5123]: W1212 15:19:31.444017 5123 feature_gate.go:328] unrecognized feature gate: MultiDiskSetup Dec 12 15:19:31 crc kubenswrapper[5123]: W1212 15:19:31.444021 5123 feature_gate.go:328] unrecognized feature gate: NewOLMOwnSingleNamespace Dec 12 15:19:31 crc kubenswrapper[5123]: W1212 15:19:31.444025 5123 feature_gate.go:328] unrecognized feature gate: NewOLMPreflightPermissionChecks Dec 12 15:19:31 crc kubenswrapper[5123]: W1212 15:19:31.444029 5123 feature_gate.go:328] unrecognized feature gate: AlibabaPlatform Dec 12 15:19:31 crc kubenswrapper[5123]: W1212 15:19:31.444033 5123 feature_gate.go:328] unrecognized feature gate: InsightsConfigAPI Dec 12 15:19:31 crc kubenswrapper[5123]: W1212 15:19:31.444037 5123 feature_gate.go:328] unrecognized feature gate: IngressControllerLBSubnetsAWS Dec 12 15:19:31 crc kubenswrapper[5123]: W1212 15:19:31.444042 5123 feature_gate.go:328] unrecognized feature gate: MetricsCollectionProfiles Dec 12 15:19:31 crc kubenswrapper[5123]: W1212 15:19:31.444046 5123 feature_gate.go:328] unrecognized feature gate: NetworkSegmentation Dec 12 15:19:31 crc kubenswrapper[5123]: W1212 15:19:31.444049 5123 feature_gate.go:328] unrecognized feature gate: ShortCertRotation Dec 12 15:19:31 crc kubenswrapper[5123]: W1212 15:19:31.444053 5123 feature_gate.go:328] unrecognized feature gate: VSphereMultiDisk Dec 12 15:19:31 crc kubenswrapper[5123]: W1212 15:19:31.444057 5123 feature_gate.go:328] unrecognized feature gate: ClusterMonitoringConfig Dec 12 15:19:31 crc kubenswrapper[5123]: W1212 15:19:31.444061 5123 feature_gate.go:328] unrecognized feature gate: VSphereMultiNetworks Dec 12 15:19:31 crc kubenswrapper[5123]: W1212 15:19:31.444066 5123 feature_gate.go:349] Setting deprecated feature gate KMSv1=true. 
It will be removed in a future release. Dec 12 15:19:31 crc kubenswrapper[5123]: W1212 15:19:31.444071 5123 feature_gate.go:328] unrecognized feature gate: MultiArchInstallAzure Dec 12 15:19:31 crc kubenswrapper[5123]: W1212 15:19:31.444075 5123 feature_gate.go:328] unrecognized feature gate: MachineAPIMigration Dec 12 15:19:31 crc kubenswrapper[5123]: W1212 15:19:31.444079 5123 feature_gate.go:328] unrecognized feature gate: EtcdBackendQuota Dec 12 15:19:31 crc kubenswrapper[5123]: W1212 15:19:31.444085 5123 feature_gate.go:328] unrecognized feature gate: AzureClusterHostedDNSInstall Dec 12 15:19:31 crc kubenswrapper[5123]: W1212 15:19:31.444089 5123 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesAWS Dec 12 15:19:31 crc kubenswrapper[5123]: W1212 15:19:31.444092 5123 feature_gate.go:328] unrecognized feature gate: DNSNameResolver Dec 12 15:19:31 crc kubenswrapper[5123]: W1212 15:19:31.444096 5123 feature_gate.go:328] unrecognized feature gate: PinnedImages Dec 12 15:19:31 crc kubenswrapper[5123]: W1212 15:19:31.444100 5123 feature_gate.go:328] unrecognized feature gate: GCPClusterHostedDNS Dec 12 15:19:31 crc kubenswrapper[5123]: W1212 15:19:31.444108 5123 feature_gate.go:328] unrecognized feature gate: NoRegistryClusterOperations Dec 12 15:19:31 crc kubenswrapper[5123]: W1212 15:19:31.444112 5123 feature_gate.go:328] unrecognized feature gate: GCPClusterHostedDNSInstall Dec 12 15:19:31 crc kubenswrapper[5123]: W1212 15:19:31.444140 5123 feature_gate.go:328] unrecognized feature gate: PreconfiguredUDNAddresses Dec 12 15:19:31 crc kubenswrapper[5123]: W1212 15:19:31.444144 5123 feature_gate.go:328] unrecognized feature gate: ManagedBootImages Dec 12 15:19:31 crc kubenswrapper[5123]: W1212 15:19:31.444154 5123 feature_gate.go:328] unrecognized feature gate: GCPCustomAPIEndpointsInstall Dec 12 15:19:31 crc kubenswrapper[5123]: W1212 15:19:31.444158 5123 feature_gate.go:328] unrecognized feature gate: IngressControllerDynamicConfigurationManager Dec 
12 15:19:31 crc kubenswrapper[5123]: W1212 15:19:31.444162 5123 feature_gate.go:328] unrecognized feature gate: Example Dec 12 15:19:31 crc kubenswrapper[5123]: W1212 15:19:31.444166 5123 feature_gate.go:328] unrecognized feature gate: IrreconcilableMachineConfig Dec 12 15:19:31 crc kubenswrapper[5123]: W1212 15:19:31.444171 5123 feature_gate.go:328] unrecognized feature gate: NetworkDiagnosticsConfig Dec 12 15:19:31 crc kubenswrapper[5123]: W1212 15:19:31.444174 5123 feature_gate.go:328] unrecognized feature gate: NewOLMWebhookProviderOpenshiftServiceCA Dec 12 15:19:31 crc kubenswrapper[5123]: W1212 15:19:31.444179 5123 feature_gate.go:328] unrecognized feature gate: ClusterAPIInstall Dec 12 15:19:31 crc kubenswrapper[5123]: W1212 15:19:31.444183 5123 feature_gate.go:328] unrecognized feature gate: ClusterVersionOperatorConfiguration Dec 12 15:19:31 crc kubenswrapper[5123]: W1212 15:19:31.444187 5123 feature_gate.go:328] unrecognized feature gate: MachineConfigNodes Dec 12 15:19:31 crc kubenswrapper[5123]: W1212 15:19:31.444191 5123 feature_gate.go:328] unrecognized feature gate: DualReplica Dec 12 15:19:31 crc kubenswrapper[5123]: W1212 15:19:31.444194 5123 feature_gate.go:328] unrecognized feature gate: AWSDedicatedHosts Dec 12 15:19:31 crc kubenswrapper[5123]: W1212 15:19:31.444198 5123 feature_gate.go:328] unrecognized feature gate: ExternalOIDC Dec 12 15:19:31 crc kubenswrapper[5123]: W1212 15:19:31.444202 5123 feature_gate.go:328] unrecognized feature gate: MixedCPUsAllocation Dec 12 15:19:31 crc kubenswrapper[5123]: W1212 15:19:31.444217 5123 feature_gate.go:328] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Dec 12 15:19:31 crc kubenswrapper[5123]: W1212 15:19:31.444240 5123 feature_gate.go:328] unrecognized feature gate: SignatureStores Dec 12 15:19:31 crc kubenswrapper[5123]: W1212 15:19:31.444244 5123 feature_gate.go:328] unrecognized feature gate: VSphereHostVMGroupZonal Dec 12 15:19:31 crc kubenswrapper[5123]: W1212 
15:19:31.444248 5123 feature_gate.go:328] unrecognized feature gate: GatewayAPIController Dec 12 15:19:31 crc kubenswrapper[5123]: W1212 15:19:31.444252 5123 feature_gate.go:328] unrecognized feature gate: ImageModeStatusReporting Dec 12 15:19:31 crc kubenswrapper[5123]: W1212 15:19:31.444256 5123 feature_gate.go:328] unrecognized feature gate: AdditionalRoutingCapabilities Dec 12 15:19:31 crc kubenswrapper[5123]: W1212 15:19:31.444261 5123 feature_gate.go:328] unrecognized feature gate: SigstoreImageVerification Dec 12 15:19:31 crc kubenswrapper[5123]: W1212 15:19:31.444265 5123 feature_gate.go:328] unrecognized feature gate: NetworkLiveMigration Dec 12 15:19:31 crc kubenswrapper[5123]: W1212 15:19:31.444269 5123 feature_gate.go:328] unrecognized feature gate: InsightsOnDemandDataGather Dec 12 15:19:31 crc kubenswrapper[5123]: W1212 15:19:31.444274 5123 feature_gate.go:328] unrecognized feature gate: KMSEncryptionProvider Dec 12 15:19:31 crc kubenswrapper[5123]: W1212 15:19:31.444278 5123 feature_gate.go:328] unrecognized feature gate: CPMSMachineNamePrefix Dec 12 15:19:31 crc kubenswrapper[5123]: W1212 15:19:31.444282 5123 feature_gate.go:328] unrecognized feature gate: NutanixMultiSubnets Dec 12 15:19:31 crc kubenswrapper[5123]: W1212 15:19:31.444286 5123 feature_gate.go:328] unrecognized feature gate: SigstoreImageVerificationPKI Dec 12 15:19:31 crc kubenswrapper[5123]: W1212 15:19:31.444290 5123 feature_gate.go:328] unrecognized feature gate: AWSServiceLBNetworkSecurityGroup Dec 12 15:19:31 crc kubenswrapper[5123]: W1212 15:19:31.444295 5123 feature_gate.go:328] unrecognized feature gate: AutomatedEtcdBackup Dec 12 15:19:31 crc kubenswrapper[5123]: W1212 15:19:31.444299 5123 feature_gate.go:328] unrecognized feature gate: SetEIPForNLBIngressController Dec 12 15:19:31 crc kubenswrapper[5123]: W1212 15:19:31.444303 5123 feature_gate.go:328] unrecognized feature gate: AdminNetworkPolicy Dec 12 15:19:31 crc kubenswrapper[5123]: W1212 15:19:31.444306 5123 
feature_gate.go:328] unrecognized feature gate: GatewayAPI Dec 12 15:19:31 crc kubenswrapper[5123]: W1212 15:19:31.444310 5123 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesvSphere Dec 12 15:19:31 crc kubenswrapper[5123]: W1212 15:19:31.444324 5123 feature_gate.go:328] unrecognized feature gate: GCPCustomAPIEndpoints Dec 12 15:19:31 crc kubenswrapper[5123]: W1212 15:19:31.444333 5123 feature_gate.go:328] unrecognized feature gate: OpenShiftPodSecurityAdmission Dec 12 15:19:31 crc kubenswrapper[5123]: W1212 15:19:31.444338 5123 feature_gate.go:328] unrecognized feature gate: ConsolePluginContentSecurityPolicy Dec 12 15:19:31 crc kubenswrapper[5123]: W1212 15:19:31.444342 5123 feature_gate.go:328] unrecognized feature gate: AWSClusterHostedDNS Dec 12 15:19:31 crc kubenswrapper[5123]: W1212 15:19:31.444347 5123 feature_gate.go:328] unrecognized feature gate: AzureMultiDisk Dec 12 15:19:31 crc kubenswrapper[5123]: W1212 15:19:31.444351 5123 feature_gate.go:328] unrecognized feature gate: UpgradeStatus Dec 12 15:19:31 crc kubenswrapper[5123]: W1212 15:19:31.444355 5123 feature_gate.go:328] unrecognized feature gate: InsightsConfig Dec 12 15:19:31 crc kubenswrapper[5123]: W1212 15:19:31.444359 5123 feature_gate.go:328] unrecognized feature gate: AzureDedicatedHosts Dec 12 15:19:31 crc kubenswrapper[5123]: W1212 15:19:31.444363 5123 feature_gate.go:328] unrecognized feature gate: BuildCSIVolumes Dec 12 15:19:31 crc kubenswrapper[5123]: W1212 15:19:31.444368 5123 feature_gate.go:328] unrecognized feature gate: DyanmicServiceEndpointIBMCloud Dec 12 15:19:31 crc kubenswrapper[5123]: W1212 15:19:31.444373 5123 feature_gate.go:328] unrecognized feature gate: HighlyAvailableArbiter Dec 12 15:19:31 crc kubenswrapper[5123]: W1212 15:19:31.444377 5123 feature_gate.go:328] unrecognized feature gate: VSphereConfigurableMaxAllowedBlockVolumesPerNode Dec 12 15:19:31 crc kubenswrapper[5123]: W1212 15:19:31.444381 5123 feature_gate.go:328] unrecognized feature gate: 
NewOLMCatalogdAPIV1Metas Dec 12 15:19:31 crc kubenswrapper[5123]: W1212 15:19:31.444387 5123 feature_gate.go:328] unrecognized feature gate: ExternalSnapshotMetadata Dec 12 15:19:31 crc kubenswrapper[5123]: W1212 15:19:31.444392 5123 feature_gate.go:328] unrecognized feature gate: OVNObservability Dec 12 15:19:31 crc kubenswrapper[5123]: W1212 15:19:31.444396 5123 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesAzure Dec 12 15:19:31 crc kubenswrapper[5123]: W1212 15:19:31.444400 5123 feature_gate.go:328] unrecognized feature gate: VSphereMixedNodeEnv Dec 12 15:19:31 crc kubenswrapper[5123]: W1212 15:19:31.444407 5123 feature_gate.go:328] unrecognized feature gate: RouteAdvertisements Dec 12 15:19:31 crc kubenswrapper[5123]: W1212 15:19:31.444411 5123 feature_gate.go:328] unrecognized feature gate: BootcNodeManagement Dec 12 15:19:31 crc kubenswrapper[5123]: W1212 15:19:31.444415 5123 feature_gate.go:328] unrecognized feature gate: ExternalOIDCWithUIDAndExtraClaimMappings Dec 12 15:19:31 crc kubenswrapper[5123]: W1212 15:19:31.444418 5123 feature_gate.go:328] unrecognized feature gate: Example2 Dec 12 15:19:31 crc kubenswrapper[5123]: W1212 15:19:31.444422 5123 feature_gate.go:328] unrecognized feature gate: AWSClusterHostedDNSInstall Dec 12 15:19:31 crc kubenswrapper[5123]: W1212 15:19:31.444426 5123 feature_gate.go:328] unrecognized feature gate: AzureWorkloadIdentity Dec 12 15:19:31 crc kubenswrapper[5123]: I1212 15:19:31.444434 5123 feature_gate.go:384] feature gates: {map[DynamicResourceAllocation:false EventedPLEG:false ImageVolume:true KMSv1:true MaxUnavailableStatefulSet:false MinimumKubeletVersion:false MutatingAdmissionPolicy:false NodeSwap:false ProcMountType:true RouteExternalCertificate:true SELinuxMount:false ServiceAccountTokenNodeBinding:true StoragePerformantSecurityPolicy:true TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:true UserNamespacesSupport:true VolumeAttributesClass:false]} Dec 12 15:19:31 
crc kubenswrapper[5123]: W1212 15:19:31.444583 5123 feature_gate.go:328] unrecognized feature gate: IrreconcilableMachineConfig Dec 12 15:19:31 crc kubenswrapper[5123]: W1212 15:19:31.444591 5123 feature_gate.go:328] unrecognized feature gate: IngressControllerDynamicConfigurationManager Dec 12 15:19:31 crc kubenswrapper[5123]: W1212 15:19:31.444596 5123 feature_gate.go:328] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Dec 12 15:19:31 crc kubenswrapper[5123]: W1212 15:19:31.444602 5123 feature_gate.go:328] unrecognized feature gate: NewOLMPreflightPermissionChecks Dec 12 15:19:31 crc kubenswrapper[5123]: W1212 15:19:31.444606 5123 feature_gate.go:328] unrecognized feature gate: MixedCPUsAllocation Dec 12 15:19:31 crc kubenswrapper[5123]: W1212 15:19:31.444610 5123 feature_gate.go:328] unrecognized feature gate: AlibabaPlatform Dec 12 15:19:31 crc kubenswrapper[5123]: W1212 15:19:31.444615 5123 feature_gate.go:328] unrecognized feature gate: PreconfiguredUDNAddresses Dec 12 15:19:31 crc kubenswrapper[5123]: W1212 15:19:31.444619 5123 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesAWS Dec 12 15:19:31 crc kubenswrapper[5123]: W1212 15:19:31.444623 5123 feature_gate.go:328] unrecognized feature gate: HighlyAvailableArbiter Dec 12 15:19:31 crc kubenswrapper[5123]: W1212 15:19:31.444628 5123 feature_gate.go:328] unrecognized feature gate: GCPCustomAPIEndpointsInstall Dec 12 15:19:31 crc kubenswrapper[5123]: W1212 15:19:31.444632 5123 feature_gate.go:328] unrecognized feature gate: OpenShiftPodSecurityAdmission Dec 12 15:19:31 crc kubenswrapper[5123]: W1212 15:19:31.444636 5123 feature_gate.go:328] unrecognized feature gate: InsightsConfigAPI Dec 12 15:19:31 crc kubenswrapper[5123]: W1212 15:19:31.444641 5123 feature_gate.go:328] unrecognized feature gate: IngressControllerLBSubnetsAWS Dec 12 15:19:31 crc kubenswrapper[5123]: W1212 15:19:31.444645 5123 feature_gate.go:328] unrecognized feature gate: 
NewOLMWebhookProviderOpenshiftServiceCA Dec 12 15:19:31 crc kubenswrapper[5123]: W1212 15:19:31.444649 5123 feature_gate.go:328] unrecognized feature gate: ShortCertRotation Dec 12 15:19:31 crc kubenswrapper[5123]: W1212 15:19:31.444653 5123 feature_gate.go:328] unrecognized feature gate: BootcNodeManagement Dec 12 15:19:31 crc kubenswrapper[5123]: W1212 15:19:31.444657 5123 feature_gate.go:328] unrecognized feature gate: AWSDedicatedHosts Dec 12 15:19:31 crc kubenswrapper[5123]: W1212 15:19:31.444661 5123 feature_gate.go:328] unrecognized feature gate: AzureClusterHostedDNSInstall Dec 12 15:19:31 crc kubenswrapper[5123]: W1212 15:19:31.444665 5123 feature_gate.go:328] unrecognized feature gate: PinnedImages Dec 12 15:19:31 crc kubenswrapper[5123]: W1212 15:19:31.444669 5123 feature_gate.go:328] unrecognized feature gate: SigstoreImageVerification Dec 12 15:19:31 crc kubenswrapper[5123]: W1212 15:19:31.444673 5123 feature_gate.go:328] unrecognized feature gate: ConsolePluginContentSecurityPolicy Dec 12 15:19:31 crc kubenswrapper[5123]: W1212 15:19:31.444677 5123 feature_gate.go:328] unrecognized feature gate: ExternalSnapshotMetadata Dec 12 15:19:31 crc kubenswrapper[5123]: W1212 15:19:31.444682 5123 feature_gate.go:328] unrecognized feature gate: AdditionalRoutingCapabilities Dec 12 15:19:31 crc kubenswrapper[5123]: W1212 15:19:31.444687 5123 feature_gate.go:328] unrecognized feature gate: GCPClusterHostedDNSInstall Dec 12 15:19:31 crc kubenswrapper[5123]: W1212 15:19:31.444691 5123 feature_gate.go:328] unrecognized feature gate: AutomatedEtcdBackup Dec 12 15:19:31 crc kubenswrapper[5123]: W1212 15:19:31.444696 5123 feature_gate.go:328] unrecognized feature gate: AdminNetworkPolicy Dec 12 15:19:31 crc kubenswrapper[5123]: W1212 15:19:31.444700 5123 feature_gate.go:328] unrecognized feature gate: VSphereMixedNodeEnv Dec 12 15:19:31 crc kubenswrapper[5123]: W1212 15:19:31.444704 5123 feature_gate.go:328] unrecognized feature gate: ClusterAPIInstallIBMCloud Dec 12 
15:19:31 crc kubenswrapper[5123]: W1212 15:19:31.444708 5123 feature_gate.go:328] unrecognized feature gate: Example Dec 12 15:19:31 crc kubenswrapper[5123]: W1212 15:19:31.444712 5123 feature_gate.go:328] unrecognized feature gate: NewOLM Dec 12 15:19:31 crc kubenswrapper[5123]: W1212 15:19:31.444715 5123 feature_gate.go:328] unrecognized feature gate: GatewayAPIController Dec 12 15:19:31 crc kubenswrapper[5123]: W1212 15:19:31.444719 5123 feature_gate.go:328] unrecognized feature gate: GCPClusterHostedDNS Dec 12 15:19:31 crc kubenswrapper[5123]: W1212 15:19:31.444724 5123 feature_gate.go:328] unrecognized feature gate: GatewayAPI Dec 12 15:19:31 crc kubenswrapper[5123]: W1212 15:19:31.444727 5123 feature_gate.go:328] unrecognized feature gate: ClusterMonitoringConfig Dec 12 15:19:31 crc kubenswrapper[5123]: W1212 15:19:31.444731 5123 feature_gate.go:328] unrecognized feature gate: VolumeGroupSnapshot Dec 12 15:19:31 crc kubenswrapper[5123]: W1212 15:19:31.444735 5123 feature_gate.go:328] unrecognized feature gate: NetworkLiveMigration Dec 12 15:19:31 crc kubenswrapper[5123]: W1212 15:19:31.444739 5123 feature_gate.go:328] unrecognized feature gate: ClusterVersionOperatorConfiguration Dec 12 15:19:31 crc kubenswrapper[5123]: W1212 15:19:31.444743 5123 feature_gate.go:328] unrecognized feature gate: ImageModeStatusReporting Dec 12 15:19:31 crc kubenswrapper[5123]: W1212 15:19:31.444746 5123 feature_gate.go:328] unrecognized feature gate: AzureDedicatedHosts Dec 12 15:19:31 crc kubenswrapper[5123]: W1212 15:19:31.444751 5123 feature_gate.go:328] unrecognized feature gate: NewOLMOwnSingleNamespace Dec 12 15:19:31 crc kubenswrapper[5123]: W1212 15:19:31.444755 5123 feature_gate.go:328] unrecognized feature gate: AWSServiceLBNetworkSecurityGroup Dec 12 15:19:31 crc kubenswrapper[5123]: W1212 15:19:31.444759 5123 feature_gate.go:328] unrecognized feature gate: MachineAPIMigration Dec 12 15:19:31 crc kubenswrapper[5123]: W1212 15:19:31.444762 5123 feature_gate.go:328] 
unrecognized feature gate: VSphereConfigurableMaxAllowedBlockVolumesPerNode Dec 12 15:19:31 crc kubenswrapper[5123]: W1212 15:19:31.444767 5123 feature_gate.go:328] unrecognized feature gate: KMSEncryptionProvider Dec 12 15:19:31 crc kubenswrapper[5123]: W1212 15:19:31.444771 5123 feature_gate.go:328] unrecognized feature gate: MetricsCollectionProfiles Dec 12 15:19:31 crc kubenswrapper[5123]: W1212 15:19:31.444774 5123 feature_gate.go:328] unrecognized feature gate: VSphereMultiDisk Dec 12 15:19:31 crc kubenswrapper[5123]: W1212 15:19:31.444778 5123 feature_gate.go:328] unrecognized feature gate: NoRegistryClusterOperations Dec 12 15:19:31 crc kubenswrapper[5123]: W1212 15:19:31.444782 5123 feature_gate.go:328] unrecognized feature gate: AzureMultiDisk Dec 12 15:19:31 crc kubenswrapper[5123]: W1212 15:19:31.444786 5123 feature_gate.go:328] unrecognized feature gate: NetworkSegmentation Dec 12 15:19:31 crc kubenswrapper[5123]: W1212 15:19:31.444790 5123 feature_gate.go:328] unrecognized feature gate: MachineConfigNodes Dec 12 15:19:31 crc kubenswrapper[5123]: W1212 15:19:31.444793 5123 feature_gate.go:328] unrecognized feature gate: AWSClusterHostedDNS Dec 12 15:19:31 crc kubenswrapper[5123]: W1212 15:19:31.444797 5123 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesAzure Dec 12 15:19:31 crc kubenswrapper[5123]: W1212 15:19:31.444801 5123 feature_gate.go:328] unrecognized feature gate: AzureWorkloadIdentity Dec 12 15:19:31 crc kubenswrapper[5123]: W1212 15:19:31.444805 5123 feature_gate.go:328] unrecognized feature gate: ExternalOIDC Dec 12 15:19:31 crc kubenswrapper[5123]: W1212 15:19:31.444809 5123 feature_gate.go:349] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. 
Dec 12 15:19:31 crc kubenswrapper[5123]: W1212 15:19:31.444817 5123 feature_gate.go:328] unrecognized feature gate: NutanixMultiSubnets Dec 12 15:19:31 crc kubenswrapper[5123]: W1212 15:19:31.444821 5123 feature_gate.go:328] unrecognized feature gate: CPMSMachineNamePrefix Dec 12 15:19:31 crc kubenswrapper[5123]: W1212 15:19:31.444827 5123 feature_gate.go:328] unrecognized feature gate: UpgradeStatus Dec 12 15:19:31 crc kubenswrapper[5123]: W1212 15:19:31.444831 5123 feature_gate.go:328] unrecognized feature gate: InsightsConfig Dec 12 15:19:31 crc kubenswrapper[5123]: W1212 15:19:31.444835 5123 feature_gate.go:328] unrecognized feature gate: NetworkDiagnosticsConfig Dec 12 15:19:31 crc kubenswrapper[5123]: W1212 15:19:31.444839 5123 feature_gate.go:328] unrecognized feature gate: DualReplica Dec 12 15:19:31 crc kubenswrapper[5123]: W1212 15:19:31.444843 5123 feature_gate.go:328] unrecognized feature gate: AWSClusterHostedDNSInstall Dec 12 15:19:31 crc kubenswrapper[5123]: W1212 15:19:31.444847 5123 feature_gate.go:328] unrecognized feature gate: VSphereMultiNetworks Dec 12 15:19:31 crc kubenswrapper[5123]: W1212 15:19:31.444851 5123 feature_gate.go:328] unrecognized feature gate: SignatureStores Dec 12 15:19:31 crc kubenswrapper[5123]: W1212 15:19:31.444854 5123 feature_gate.go:328] unrecognized feature gate: EtcdBackendQuota Dec 12 15:19:31 crc kubenswrapper[5123]: W1212 15:19:31.444858 5123 feature_gate.go:328] unrecognized feature gate: MultiDiskSetup Dec 12 15:19:31 crc kubenswrapper[5123]: W1212 15:19:31.444862 5123 feature_gate.go:328] unrecognized feature gate: SetEIPForNLBIngressController Dec 12 15:19:31 crc kubenswrapper[5123]: W1212 15:19:31.444866 5123 feature_gate.go:328] unrecognized feature gate: BootImageSkewEnforcement Dec 12 15:19:31 crc kubenswrapper[5123]: W1212 15:19:31.444870 5123 feature_gate.go:328] unrecognized feature gate: OVNObservability Dec 12 15:19:31 crc kubenswrapper[5123]: W1212 15:19:31.444873 5123 feature_gate.go:328] 
unrecognized feature gate: InsightsOnDemandDataGather Dec 12 15:19:31 crc kubenswrapper[5123]: W1212 15:19:31.444877 5123 feature_gate.go:328] unrecognized feature gate: MultiArchInstallAzure Dec 12 15:19:31 crc kubenswrapper[5123]: W1212 15:19:31.444883 5123 feature_gate.go:351] Setting GA feature gate ServiceAccountTokenNodeBinding=true. It will be removed in a future release. Dec 12 15:19:31 crc kubenswrapper[5123]: W1212 15:19:31.444888 5123 feature_gate.go:328] unrecognized feature gate: GCPCustomAPIEndpoints Dec 12 15:19:31 crc kubenswrapper[5123]: W1212 15:19:31.444892 5123 feature_gate.go:328] unrecognized feature gate: ExternalOIDCWithUIDAndExtraClaimMappings Dec 12 15:19:31 crc kubenswrapper[5123]: W1212 15:19:31.444896 5123 feature_gate.go:328] unrecognized feature gate: DNSNameResolver Dec 12 15:19:31 crc kubenswrapper[5123]: W1212 15:19:31.444900 5123 feature_gate.go:328] unrecognized feature gate: NewOLMCatalogdAPIV1Metas Dec 12 15:19:31 crc kubenswrapper[5123]: W1212 15:19:31.444904 5123 feature_gate.go:328] unrecognized feature gate: ManagedBootImages Dec 12 15:19:31 crc kubenswrapper[5123]: W1212 15:19:31.444908 5123 feature_gate.go:328] unrecognized feature gate: RouteAdvertisements Dec 12 15:19:31 crc kubenswrapper[5123]: W1212 15:19:31.444912 5123 feature_gate.go:328] unrecognized feature gate: ClusterAPIInstall Dec 12 15:19:31 crc kubenswrapper[5123]: W1212 15:19:31.444917 5123 feature_gate.go:328] unrecognized feature gate: SigstoreImageVerificationPKI Dec 12 15:19:31 crc kubenswrapper[5123]: W1212 15:19:31.444921 5123 feature_gate.go:328] unrecognized feature gate: Example2 Dec 12 15:19:31 crc kubenswrapper[5123]: W1212 15:19:31.444925 5123 feature_gate.go:328] unrecognized feature gate: BuildCSIVolumes Dec 12 15:19:31 crc kubenswrapper[5123]: W1212 15:19:31.444929 5123 feature_gate.go:328] unrecognized feature gate: DyanmicServiceEndpointIBMCloud Dec 12 15:19:31 crc kubenswrapper[5123]: W1212 15:19:31.444932 5123 feature_gate.go:328] 
unrecognized feature gate: VSphereHostVMGroupZonal Dec 12 15:19:31 crc kubenswrapper[5123]: W1212 15:19:31.444936 5123 feature_gate.go:328] unrecognized feature gate: ImageStreamImportMode Dec 12 15:19:31 crc kubenswrapper[5123]: W1212 15:19:31.444940 5123 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesvSphere Dec 12 15:19:31 crc kubenswrapper[5123]: I1212 15:19:31.444948 5123 feature_gate.go:384] feature gates: {map[DynamicResourceAllocation:false EventedPLEG:false ImageVolume:true KMSv1:true MaxUnavailableStatefulSet:false MinimumKubeletVersion:false MutatingAdmissionPolicy:false NodeSwap:false ProcMountType:true RouteExternalCertificate:true SELinuxMount:false ServiceAccountTokenNodeBinding:true StoragePerformantSecurityPolicy:true TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:true UserNamespacesSupport:true VolumeAttributesClass:false]} Dec 12 15:19:31 crc kubenswrapper[5123]: I1212 15:19:31.445519 5123 server.go:962] "Client rotation is on, will bootstrap in background" Dec 12 15:19:31 crc kubenswrapper[5123]: E1212 15:19:31.448431 5123 bootstrap.go:266] "Unhandled Error" err="part of the existing bootstrap client certificate in /var/lib/kubelet/kubeconfig is expired: 2025-12-03 08:27:53 +0000 UTC" logger="UnhandledError" Dec 12 15:19:31 crc kubenswrapper[5123]: I1212 15:19:31.451994 5123 bootstrap.go:101] "Use the bootstrap credentials to request a cert, and set kubeconfig to point to the certificate dir" Dec 12 15:19:31 crc kubenswrapper[5123]: I1212 15:19:31.452171 5123 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Dec 12 15:19:31 crc kubenswrapper[5123]: I1212 15:19:31.452870 5123 server.go:1019] "Starting client certificate rotation" Dec 12 15:19:31 crc kubenswrapper[5123]: I1212 15:19:31.453136 5123 certificate_manager.go:422] "Certificate rotation is enabled" logger="kubernetes.io/kube-apiserver-client-kubelet" Dec 12 15:19:31 
crc kubenswrapper[5123]: I1212 15:19:31.453234 5123 certificate_manager.go:566] "Rotating certificates" logger="kubernetes.io/kube-apiserver-client-kubelet" Dec 12 15:19:31 crc kubenswrapper[5123]: I1212 15:19:31.463549 5123 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Dec 12 15:19:31 crc kubenswrapper[5123]: I1212 15:19:31.466496 5123 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Dec 12 15:19:31 crc kubenswrapper[5123]: E1212 15:19:31.466945 5123 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 38.102.83.234:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Dec 12 15:19:31 crc kubenswrapper[5123]: I1212 15:19:31.477856 5123 log.go:25] "Validated CRI v1 runtime API" Dec 12 15:19:31 crc kubenswrapper[5123]: I1212 15:19:31.503063 5123 log.go:25] "Validated CRI v1 image API" Dec 12 15:19:31 crc kubenswrapper[5123]: I1212 15:19:31.505555 5123 server.go:1452] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Dec 12 15:19:31 crc kubenswrapper[5123]: I1212 15:19:31.508756 5123 fs.go:135] Filesystem UUIDs: map[19e76f87-96b8-4794-9744-0b33dca22d5b:/dev/vda3 2025-12-12-15-13-21-00:/dev/sr0 5eb7c122-420e-4494-80ec-41664070d7b6:/dev/vda4 7B77-95E7:/dev/vda2] Dec 12 15:19:31 crc kubenswrapper[5123]: I1212 15:19:31.508802 5123 fs.go:136] Filesystem partitions: map[/dev/shm:{mountpoint:/dev/shm major:0 minor:22 fsType:tmpfs blockSize:0} /dev/vda3:{mountpoint:/boot major:252 minor:3 fsType:ext4 blockSize:0} /dev/vda4:{mountpoint:/var major:252 minor:4 fsType:xfs blockSize:0} /run:{mountpoint:/run major:0 minor:24 fsType:tmpfs blockSize:0} 
/run/user/1000:{mountpoint:/run/user/1000 major:0 minor:45 fsType:tmpfs blockSize:0} /tmp:{mountpoint:/tmp major:0 minor:31 fsType:tmpfs blockSize:0} /var/lib/etcd:{mountpoint:/var/lib/etcd major:0 minor:46 fsType:tmpfs blockSize:0} composefs_0-33:{mountpoint:/ major:0 minor:33 fsType:overlay blockSize:0}] Dec 12 15:19:31 crc kubenswrapper[5123]: I1212 15:19:31.533930 5123 manager.go:217] Machine: {Timestamp:2025-12-12 15:19:31.532452786 +0000 UTC m=+0.342405307 CPUVendorID:AuthenticAMD NumCores:12 NumPhysicalCores:1 NumSockets:12 CpuFrequency:2800000 MemoryCapacity:33649930240 SwapCapacity:0 MemoryByType:map[] NVMInfo:{MemoryModeCapacity:0 AppDirectModeCapacity:0 AvgPowerBudget:0} HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] MachineID:80bc4fba336e4ca1bc9d28a8be52a356 SystemUUID:3aaed2a9-d1af-4a24-a65e-046edb5e804c BootID:17e3227e-03aa-4fce-8c3b-5ddc14058574 Filesystems:[{Device:/tmp DeviceMajor:0 DeviceMinor:31 Capacity:16824967168 Type:vfs Inodes:1048576 HasInodes:true} {Device:/run/user/1000 DeviceMajor:0 DeviceMinor:45 Capacity:3364990976 Type:vfs Inodes:821531 HasInodes:true} {Device:/dev/shm DeviceMajor:0 DeviceMinor:22 Capacity:16824963072 Type:vfs Inodes:4107657 HasInodes:true} {Device:/run DeviceMajor:0 DeviceMinor:24 Capacity:6729986048 Type:vfs Inodes:819200 HasInodes:true} {Device:/dev/vda4 DeviceMajor:252 DeviceMinor:4 Capacity:85292941312 Type:vfs Inodes:41679680 HasInodes:true} {Device:/dev/vda3 DeviceMajor:252 DeviceMinor:3 Capacity:366869504 Type:vfs Inodes:98304 HasInodes:true} {Device:/var/lib/etcd DeviceMajor:0 DeviceMinor:46 Capacity:1073741824 Type:vfs Inodes:4107657 HasInodes:true} {Device:composefs_0-33 DeviceMajor:0 DeviceMinor:33 Capacity:6545408 Type:vfs Inodes:18446744073709551615 HasInodes:true}] DiskMap:map[252:0:{Name:vda Major:252 Minor:0 Size:214748364800 Scheduler:none}] NetworkDevices:[{Name:br-ex MacAddress:fa:16:3e:4a:86:36 Speed:0 Mtu:1500} {Name:br-int MacAddress:b2:a9:9f:57:07:84 Speed:0 Mtu:1400} 
{Name:ens3 MacAddress:fa:16:3e:4a:86:36 Speed:-1 Mtu:1500} {Name:ens7 MacAddress:fa:16:3e:5a:eb:de Speed:-1 Mtu:1500} {Name:ens7.20 MacAddress:52:54:00:c9:09:4c Speed:-1 Mtu:1496} {Name:ens7.21 MacAddress:52:54:00:30:f6:73 Speed:-1 Mtu:1496} {Name:ens7.22 MacAddress:52:54:00:63:a2:05 Speed:-1 Mtu:1496} {Name:eth10 MacAddress:76:3a:76:05:d8:f6 Speed:0 Mtu:1500} {Name:ovn-k8s-mp0 MacAddress:0a:58:0a:d9:00:02 Speed:0 Mtu:1400} {Name:ovs-system MacAddress:86:89:66:1c:c0:7c Speed:0 Mtu:1500} {Name:tap0 MacAddress:5a:94:ef:e4:0c:ee Speed:10 Mtu:1500}] Topology:[{Id:0 Memory:33649930240 HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] Cores:[{Id:0 Threads:[0] Caches:[{Id:0 Size:32768 Type:Data Level:1} {Id:0 Size:32768 Type:Instruction Level:1} {Id:0 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:0 Size:16777216 Type:Unified Level:3}] SocketID:0 BookID: DrawerID:} {Id:0 Threads:[1] Caches:[{Id:1 Size:32768 Type:Data Level:1} {Id:1 Size:32768 Type:Instruction Level:1} {Id:1 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:1 Size:16777216 Type:Unified Level:3}] SocketID:1 BookID: DrawerID:} {Id:0 Threads:[10] Caches:[{Id:10 Size:32768 Type:Data Level:1} {Id:10 Size:32768 Type:Instruction Level:1} {Id:10 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:10 Size:16777216 Type:Unified Level:3}] SocketID:10 BookID: DrawerID:} {Id:0 Threads:[11] Caches:[{Id:11 Size:32768 Type:Data Level:1} {Id:11 Size:32768 Type:Instruction Level:1} {Id:11 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:11 Size:16777216 Type:Unified Level:3}] SocketID:11 BookID: DrawerID:} {Id:0 Threads:[2] Caches:[{Id:2 Size:32768 Type:Data Level:1} {Id:2 Size:32768 Type:Instruction Level:1} {Id:2 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:2 Size:16777216 Type:Unified Level:3}] SocketID:2 BookID: DrawerID:} {Id:0 Threads:[3] Caches:[{Id:3 Size:32768 Type:Data Level:1} {Id:3 Size:32768 Type:Instruction Level:1} {Id:3 Size:524288 Type:Unified Level:2}] 
UncoreCaches:[{Id:3 Size:16777216 Type:Unified Level:3}] SocketID:3 BookID: DrawerID:} {Id:0 Threads:[4] Caches:[{Id:4 Size:32768 Type:Data Level:1} {Id:4 Size:32768 Type:Instruction Level:1} {Id:4 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:4 Size:16777216 Type:Unified Level:3}] SocketID:4 BookID: DrawerID:} {Id:0 Threads:[5] Caches:[{Id:5 Size:32768 Type:Data Level:1} {Id:5 Size:32768 Type:Instruction Level:1} {Id:5 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:5 Size:16777216 Type:Unified Level:3}] SocketID:5 BookID: DrawerID:} {Id:0 Threads:[6] Caches:[{Id:6 Size:32768 Type:Data Level:1} {Id:6 Size:32768 Type:Instruction Level:1} {Id:6 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:6 Size:16777216 Type:Unified Level:3}] SocketID:6 BookID: DrawerID:} {Id:0 Threads:[7] Caches:[{Id:7 Size:32768 Type:Data Level:1} {Id:7 Size:32768 Type:Instruction Level:1} {Id:7 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:7 Size:16777216 Type:Unified Level:3}] SocketID:7 BookID: DrawerID:} {Id:0 Threads:[8] Caches:[{Id:8 Size:32768 Type:Data Level:1} {Id:8 Size:32768 Type:Instruction Level:1} {Id:8 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:8 Size:16777216 Type:Unified Level:3}] SocketID:8 BookID: DrawerID:} {Id:0 Threads:[9] Caches:[{Id:9 Size:32768 Type:Data Level:1} {Id:9 Size:32768 Type:Instruction Level:1} {Id:9 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:9 Size:16777216 Type:Unified Level:3}] SocketID:9 BookID: DrawerID:}] Caches:[] Distances:[10]}] CloudProvider:Unknown InstanceType:Unknown InstanceID:None} Dec 12 15:19:31 crc kubenswrapper[5123]: I1212 15:19:31.534161 5123 manager_no_libpfm.go:29] cAdvisor is build without cgo and/or libpfm support. Perf event counters are not available. 
Dec 12 15:19:31 crc kubenswrapper[5123]: I1212 15:19:31.534478 5123 manager.go:233] Version: {KernelVersion:5.14.0-570.57.1.el9_6.x86_64 ContainerOsVersion:Red Hat Enterprise Linux CoreOS 9.6.20251021-0 (Plow) DockerVersion: DockerAPIVersion: CadvisorVersion: CadvisorRevision:} Dec 12 15:19:31 crc kubenswrapper[5123]: I1212 15:19:31.535634 5123 container_manager_linux.go:270] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Dec 12 15:19:31 crc kubenswrapper[5123]: I1212 15:19:31.535684 5123 container_manager_linux.go:275] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"crc","RuntimeCgroupsName":"/system.slice/crio.service","SystemCgroupsName":"/system.slice","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":true,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":{"cpu":"200m","ephemeral-storage":"350Mi","memory":"350Mi"},"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None
","MemoryManagerReservedMemory":null,"PodPidsLimit":4096,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Dec 12 15:19:31 crc kubenswrapper[5123]: I1212 15:19:31.535888 5123 topology_manager.go:138] "Creating topology manager with none policy" Dec 12 15:19:31 crc kubenswrapper[5123]: I1212 15:19:31.535898 5123 container_manager_linux.go:306] "Creating device plugin manager" Dec 12 15:19:31 crc kubenswrapper[5123]: I1212 15:19:31.535923 5123 manager.go:141] "Creating Device Plugin manager" path="/var/lib/kubelet/device-plugins/kubelet.sock" Dec 12 15:19:31 crc kubenswrapper[5123]: I1212 15:19:31.535948 5123 server.go:72] "Creating device plugin registration server" version="v1beta1" socket="/var/lib/kubelet/device-plugins/kubelet.sock" Dec 12 15:19:31 crc kubenswrapper[5123]: I1212 15:19:31.536302 5123 state_mem.go:36] "Initialized new in-memory state store" Dec 12 15:19:31 crc kubenswrapper[5123]: I1212 15:19:31.536477 5123 server.go:1267] "Using root directory" path="/var/lib/kubelet" Dec 12 15:19:31 crc kubenswrapper[5123]: I1212 15:19:31.536964 5123 kubelet.go:491] "Attempting to sync node with API server" Dec 12 15:19:31 crc kubenswrapper[5123]: I1212 15:19:31.536993 5123 kubelet.go:386] "Adding static pod path" path="/etc/kubernetes/manifests" Dec 12 15:19:31 crc kubenswrapper[5123]: I1212 15:19:31.537013 5123 file.go:69] "Watching path" path="/etc/kubernetes/manifests" Dec 12 15:19:31 crc kubenswrapper[5123]: I1212 15:19:31.537025 5123 kubelet.go:397] "Adding apiserver pod source" Dec 12 15:19:31 crc kubenswrapper[5123]: I1212 15:19:31.537044 5123 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Dec 12 15:19:31 crc kubenswrapper[5123]: I1212 15:19:31.539074 5123 state_checkpoint.go:81] "State checkpoint: restored pod resource state from checkpoint" Dec 12 15:19:31 crc kubenswrapper[5123]: I1212 15:19:31.539104 5123 state_mem.go:40] "Initialized 
new in-memory state store for pod resource information tracking"
Dec 12 15:19:31 crc kubenswrapper[5123]: E1212 15:19:31.540468 5123 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.234:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
Dec 12 15:19:31 crc kubenswrapper[5123]: E1212 15:19:31.540471 5123 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.234:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
Dec 12 15:19:31 crc kubenswrapper[5123]: I1212 15:19:31.540575 5123 state_checkpoint.go:81] "State checkpoint: restored pod resource state from checkpoint"
Dec 12 15:19:31 crc kubenswrapper[5123]: I1212 15:19:31.540589 5123 state_mem.go:40] "Initialized new in-memory state store for pod resource information tracking"
Dec 12 15:19:31 crc kubenswrapper[5123]: I1212 15:19:31.542630 5123 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="cri-o" version="1.33.5-3.rhaos4.20.gitd0ea985.el9" apiVersion="v1"
Dec 12 15:19:31 crc kubenswrapper[5123]: I1212 15:19:31.543122 5123 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-server-current.pem"
Dec 12 15:19:31 crc kubenswrapper[5123]: I1212 15:19:31.543528 5123 kubelet.go:953] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled"
Dec 12 15:19:31 crc kubenswrapper[5123]: I1212 15:19:31.544195 5123 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/portworx-volume"
Dec 12 15:19:31 crc kubenswrapper[5123]: I1212 15:19:31.544249 5123 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/empty-dir"
Dec 12 15:19:31 crc kubenswrapper[5123]: I1212 15:19:31.544263 5123 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/git-repo"
Dec 12 15:19:31 crc kubenswrapper[5123]: I1212 15:19:31.544286 5123 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/host-path"
Dec 12 15:19:31 crc kubenswrapper[5123]: I1212 15:19:31.544298 5123 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/nfs"
Dec 12 15:19:31 crc kubenswrapper[5123]: I1212 15:19:31.544307 5123 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/secret"
Dec 12 15:19:31 crc kubenswrapper[5123]: I1212 15:19:31.544357 5123 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/iscsi"
Dec 12 15:19:31 crc kubenswrapper[5123]: I1212 15:19:31.544371 5123 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/downward-api"
Dec 12 15:19:31 crc kubenswrapper[5123]: I1212 15:19:31.544383 5123 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/fc"
Dec 12 15:19:31 crc kubenswrapper[5123]: I1212 15:19:31.544394 5123 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/configmap"
Dec 12 15:19:31 crc kubenswrapper[5123]: I1212 15:19:31.544424 5123 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/projected"
Dec 12 15:19:31 crc kubenswrapper[5123]: I1212 15:19:31.544624 5123 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/local-volume"
Dec 12 15:19:31 crc kubenswrapper[5123]: I1212 15:19:31.544870 5123 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/csi"
Dec 12 15:19:31 crc kubenswrapper[5123]: I1212 15:19:31.544885 5123 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/image"
Dec 12 15:19:31 crc kubenswrapper[5123]: I1212 15:19:31.546047 5123 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.234:6443: connect: connection refused
Dec 12 15:19:31 crc kubenswrapper[5123]: I1212 15:19:31.562919 5123 watchdog_linux.go:99] "Systemd watchdog is not enabled"
Dec 12 15:19:31 crc kubenswrapper[5123]: I1212 15:19:31.563056 5123 server.go:1295] "Started kubelet"
Dec 12 15:19:31 crc kubenswrapper[5123]: I1212 15:19:31.563281 5123 server.go:180] "Starting to listen" address="0.0.0.0" port=10250
Dec 12 15:19:31 crc systemd[1]: Started Kubernetes Kubelet.
Dec 12 15:19:31 crc kubenswrapper[5123]: I1212 15:19:31.566850 5123 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Dec 12 15:19:31 crc kubenswrapper[5123]: I1212 15:19:31.568461 5123 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Dec 12 15:19:31 crc kubenswrapper[5123]: I1212 15:19:31.568451 5123 server_v1.go:47] "podresources" method="list" useActivePods=true
Dec 12 15:19:31 crc kubenswrapper[5123]: I1212 15:19:31.569891 5123 volume_manager.go:295] "The desired_state_of_world populator starts"
Dec 12 15:19:31 crc kubenswrapper[5123]: I1212 15:19:31.569919 5123 volume_manager.go:297] "Starting Kubelet Volume Manager"
Dec 12 15:19:31 crc kubenswrapper[5123]: I1212 15:19:31.570281 5123 desired_state_of_world_populator.go:150] "Desired state populator starts to run"
Dec 12 15:19:31 crc kubenswrapper[5123]: I1212 15:19:31.569885 5123 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Dec 12 15:19:31 crc kubenswrapper[5123]: I1212 15:19:31.570726 5123 server.go:317] "Adding debug handlers to kubelet server"
Dec 12 15:19:31 crc kubenswrapper[5123]: E1212 15:19:31.571701 5123 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 12 15:19:31 crc kubenswrapper[5123]: I1212 15:19:31.571695 5123 certificate_manager.go:422] "Certificate rotation is enabled" logger="kubernetes.io/kubelet-serving"
Dec 12 15:19:31 crc kubenswrapper[5123]: E1212 15:19:31.574100 5123 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.234:6443: connect: connection refused" interval="200ms"
Dec 12 15:19:31 crc kubenswrapper[5123]: I1212 15:19:31.574186 5123 factory.go:55] Registering systemd factory
Dec 12 15:19:31 crc kubenswrapper[5123]: I1212 15:19:31.574239 5123 factory.go:223] Registration of the systemd container factory successfully
Dec 12 15:19:31 crc kubenswrapper[5123]: E1212 15:19:31.574627 5123 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.234:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
Dec 12 15:19:31 crc kubenswrapper[5123]: I1212 15:19:31.574870 5123 factory.go:153] Registering CRI-O factory
Dec 12 15:19:31 crc kubenswrapper[5123]: I1212 15:19:31.574910 5123 factory.go:223] Registration of the crio container factory successfully
Dec 12 15:19:31 crc kubenswrapper[5123]: I1212 15:19:31.575025 5123 factory.go:221] Registration of the containerd container factory failed: unable to create containerd client: containerd: cannot unix dial containerd api service: dial unix /run/containerd/containerd.sock: connect: no such file or directory
Dec 12 15:19:31 crc kubenswrapper[5123]: I1212 15:19:31.575094 5123 factory.go:103] Registering Raw factory
Dec 12 15:19:31 crc kubenswrapper[5123]: E1212 15:19:31.574613 5123 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": dial tcp 38.102.83.234:6443: connect: connection refused" event="&Event{ObjectMeta:{crc.188080e43538041e default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-12 15:19:31.562996766 +0000 UTC m=+0.372949277,LastTimestamp:2025-12-12 15:19:31.562996766 +0000 UTC m=+0.372949277,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Dec 12 15:19:31 crc kubenswrapper[5123]: I1212 15:19:31.575135 5123 manager.go:1196] Started watching for new ooms in manager
Dec 12 15:19:31 crc kubenswrapper[5123]: I1212 15:19:31.575901 5123 manager.go:319] Starting recovery of all containers
Dec 12 15:19:31 crc kubenswrapper[5123]: I1212 15:19:31.609196 5123 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f65c0ac1-8bca-454d-a2e6-e35cb418beac" volumeName="kubernetes.io/secret/f65c0ac1-8bca-454d-a2e6-e35cb418beac-serving-cert" seLinuxMountContext=""
Dec 12 15:19:31 crc kubenswrapper[5123]: I1212 15:19:31.609274 5123 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="18f80adb-c1c3-49ba-8ee4-932c851d3897" volumeName="kubernetes.io/secret/18f80adb-c1c3-49ba-8ee4-932c851d3897-metrics-certs" seLinuxMountContext=""
Dec 12 15:19:31 crc kubenswrapper[5123]: I1212 15:19:31.609286 5123 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d19cb085-0c5b-4810-b654-ce7923221d90" volumeName="kubernetes.io/projected/d19cb085-0c5b-4810-b654-ce7923221d90-kube-api-access-m5lgh" seLinuxMountContext=""
Dec 12 15:19:31 crc kubenswrapper[5123]: I1212 15:19:31.609295 5123 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d45be74c-0d98-4d18-90e4-f7ef1b6daaf7" volumeName="kubernetes.io/secret/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-serving-cert" seLinuxMountContext=""
Dec 12 15:19:31 crc kubenswrapper[5123]: I1212 15:19:31.609303 5123 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4" volumeName="kubernetes.io/secret/f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4-metrics-certs" seLinuxMountContext=""
Dec 12 15:19:31 crc kubenswrapper[5123]: I1212 15:19:31.609311 5123 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f559dfa3-3917-43a2-97f6-61ddfda10e93" volumeName="kubernetes.io/configmap/f559dfa3-3917-43a2-97f6-61ddfda10e93-trusted-ca-bundle" seLinuxMountContext=""
Dec 12 15:19:31 crc kubenswrapper[5123]: I1212 15:19:31.609331 5123 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="584e1f4a-8205-47d7-8efb-3afc6017c4c9" volumeName="kubernetes.io/empty-dir/584e1f4a-8205-47d7-8efb-3afc6017c4c9-utilities" seLinuxMountContext=""
Dec 12 15:19:31 crc kubenswrapper[5123]: I1212 15:19:31.609341 5123 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31fa8943-81cc-4750-a0b7-0fa9ab5af883" volumeName="kubernetes.io/empty-dir/31fa8943-81cc-4750-a0b7-0fa9ab5af883-catalog-content" seLinuxMountContext=""
Dec 12 15:19:31 crc kubenswrapper[5123]: I1212 15:19:31.609351 5123 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3a14caf222afb62aaabdc47808b6f944" volumeName="kubernetes.io/empty-dir/3a14caf222afb62aaabdc47808b6f944-ca-bundle-dir" seLinuxMountContext=""
Dec 12 15:19:31 crc kubenswrapper[5123]: I1212 15:19:31.609360 5123 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="81e39f7b-62e4-4fc9-992a-6535ce127a02" volumeName="kubernetes.io/configmap/81e39f7b-62e4-4fc9-992a-6535ce127a02-multus-daemon-config" seLinuxMountContext=""
Dec 12 15:19:31 crc kubenswrapper[5123]: I1212 15:19:31.609383 5123 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="af33e427-6803-48c2-a76a-dd9deb7cbf9a" volumeName="kubernetes.io/secret/af33e427-6803-48c2-a76a-dd9deb7cbf9a-ovn-node-metrics-cert" seLinuxMountContext=""
Dec 12 15:19:31 crc kubenswrapper[5123]: I1212 15:19:31.609391 5123 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9" volumeName="kubernetes.io/configmap/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-config" seLinuxMountContext=""
Dec 12 15:19:31 crc kubenswrapper[5123]: I1212 15:19:31.609401 5123 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4" volumeName="kubernetes.io/projected/f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4-kube-api-access-pgx6b" seLinuxMountContext=""
Dec 12 15:19:31 crc kubenswrapper[5123]: I1212 15:19:31.609410 5123 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0dd0fbac-8c0d-4228-8faa-abbeedabf7db" volumeName="kubernetes.io/secret/0dd0fbac-8c0d-4228-8faa-abbeedabf7db-webhook-certs" seLinuxMountContext=""
Dec 12 15:19:31 crc kubenswrapper[5123]: I1212 15:19:31.609421 5123 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20ce4d18-fe25-4696-ad7c-1bd2d6200a3e" volumeName="kubernetes.io/projected/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-bound-sa-token" seLinuxMountContext=""
Dec 12 15:19:31 crc kubenswrapper[5123]: I1212 15:19:31.609429 5123 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca" volumeName="kubernetes.io/secret/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-console-serving-cert" seLinuxMountContext=""
Dec 12 15:19:31 crc kubenswrapper[5123]: I1212 15:19:31.609438 5123 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="736c54fe-349c-4bb9-870a-d1c1d1c03831" volumeName="kubernetes.io/configmap/736c54fe-349c-4bb9-870a-d1c1d1c03831-client-ca" seLinuxMountContext=""
Dec 12 15:19:31 crc kubenswrapper[5123]: I1212 15:19:31.609449 5123 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff" volumeName="kubernetes.io/empty-dir/9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff-utilities" seLinuxMountContext=""
Dec 12 15:19:31 crc kubenswrapper[5123]: I1212 15:19:31.609532 5123 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="94a6e063-3d1a-4d44-875d-185291448c31" volumeName="kubernetes.io/empty-dir/94a6e063-3d1a-4d44-875d-185291448c31-catalog-content" seLinuxMountContext=""
Dec 12 15:19:31 crc kubenswrapper[5123]: I1212 15:19:31.609541 5123 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a52afe44-fb37-46ed-a1f8-bf39727a3cbe" volumeName="kubernetes.io/projected/a52afe44-fb37-46ed-a1f8-bf39727a3cbe-kube-api-access-rzt4w" seLinuxMountContext=""
Dec 12 15:19:31 crc kubenswrapper[5123]: I1212 15:19:31.609549 5123 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d19cb085-0c5b-4810-b654-ce7923221d90" volumeName="kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-audit" seLinuxMountContext=""
Dec 12 15:19:31 crc kubenswrapper[5123]: I1212 15:19:31.609558 5123 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20c5c5b4bed930554494851fe3cb2b2a" volumeName="kubernetes.io/empty-dir/20c5c5b4bed930554494851fe3cb2b2a-tmp-dir" seLinuxMountContext=""
Dec 12 15:19:31 crc kubenswrapper[5123]: I1212 15:19:31.609567 5123 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="736c54fe-349c-4bb9-870a-d1c1d1c03831" volumeName="kubernetes.io/configmap/736c54fe-349c-4bb9-870a-d1c1d1c03831-config" seLinuxMountContext=""
Dec 12 15:19:31 crc kubenswrapper[5123]: I1212 15:19:31.609575 5123 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="736c54fe-349c-4bb9-870a-d1c1d1c03831" volumeName="kubernetes.io/secret/736c54fe-349c-4bb9-870a-d1c1d1c03831-serving-cert" seLinuxMountContext=""
Dec 12 15:19:31 crc kubenswrapper[5123]: I1212 15:19:31.609584 5123 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7afa918d-be67-40a6-803c-d3b0ae99d815" volumeName="kubernetes.io/empty-dir/7afa918d-be67-40a6-803c-d3b0ae99d815-tmp" seLinuxMountContext=""
Dec 12 15:19:31 crc kubenswrapper[5123]: I1212 15:19:31.609592 5123 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9" volumeName="kubernetes.io/configmap/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-service-ca-bundle" seLinuxMountContext=""
Dec 12 15:19:31 crc kubenswrapper[5123]: I1212 15:19:31.609620 5123 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9" volumeName="kubernetes.io/secret/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-serving-cert" seLinuxMountContext=""
Dec 12 15:19:31 crc kubenswrapper[5123]: I1212 15:19:31.609629 5123 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20ce4d18-fe25-4696-ad7c-1bd2d6200a3e" volumeName="kubernetes.io/secret/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-image-registry-operator-tls" seLinuxMountContext=""
Dec 12 15:19:31 crc kubenswrapper[5123]: I1212 15:19:31.609640 5123 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6edfcf45-925b-4eff-b940-95b6fc0b85d4" volumeName="kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-router-certs" seLinuxMountContext=""
Dec 12 15:19:31 crc kubenswrapper[5123]: I1212 15:19:31.609648 5123 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5ebfebf6-3ecd-458e-943f-bb25b52e2718" volumeName="kubernetes.io/projected/5ebfebf6-3ecd-458e-943f-bb25b52e2718-kube-api-access-l87hs" seLinuxMountContext=""
Dec 12 15:19:31 crc kubenswrapper[5123]: I1212 15:19:31.609656 5123 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a" volumeName="kubernetes.io/secret/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-marketplace-operator-metrics" seLinuxMountContext=""
Dec 12 15:19:31 crc kubenswrapper[5123]: I1212 15:19:31.609665 5123 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e1d2a42d-af1d-4054-9618-ab545e0ed8b7" volumeName="kubernetes.io/secret/e1d2a42d-af1d-4054-9618-ab545e0ed8b7-proxy-tls" seLinuxMountContext=""
Dec 12 15:19:31 crc kubenswrapper[5123]: I1212 15:19:31.609678 5123 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fc4541ce-7789-4670-bc75-5c2868e52ce0" volumeName="kubernetes.io/configmap/fc4541ce-7789-4670-bc75-5c2868e52ce0-ovnkube-identity-cm" seLinuxMountContext=""
Dec 12 15:19:31 crc kubenswrapper[5123]: I1212 15:19:31.609693 5123 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="42a11a02-47e1-488f-b270-2679d3298b0e" volumeName="kubernetes.io/projected/42a11a02-47e1-488f-b270-2679d3298b0e-kube-api-access-qgrkj" seLinuxMountContext=""
Dec 12 15:19:31 crc kubenswrapper[5123]: I1212 15:19:31.609706 5123 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca" volumeName="kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-oauth-serving-cert" seLinuxMountContext=""
Dec 12 15:19:31 crc kubenswrapper[5123]: I1212 15:19:31.609718 5123 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7df94c10-441d-4386-93a6-6730fb7bcde0" volumeName="kubernetes.io/configmap/7df94c10-441d-4386-93a6-6730fb7bcde0-ovnkube-config" seLinuxMountContext=""
Dec 12 15:19:31 crc kubenswrapper[5123]: I1212 15:19:31.609726 5123 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9f71a554-e414-4bc3-96d2-674060397afe" volumeName="kubernetes.io/projected/9f71a554-e414-4bc3-96d2-674060397afe-bound-sa-token" seLinuxMountContext=""
Dec 12 15:19:31 crc kubenswrapper[5123]: I1212 15:19:31.609735 5123 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fc4541ce-7789-4670-bc75-5c2868e52ce0" volumeName="kubernetes.io/configmap/fc4541ce-7789-4670-bc75-5c2868e52ce0-env-overrides" seLinuxMountContext=""
Dec 12 15:19:31 crc kubenswrapper[5123]: I1212 15:19:31.609744 5123 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7afa918d-be67-40a6-803c-d3b0ae99d815" volumeName="kubernetes.io/configmap/7afa918d-be67-40a6-803c-d3b0ae99d815-config" seLinuxMountContext=""
Dec 12 15:19:31 crc kubenswrapper[5123]: I1212 15:19:31.609760 5123 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9f0bc7fcb0822a2c13eb2d22cd8c0641" volumeName="kubernetes.io/empty-dir/9f0bc7fcb0822a2c13eb2d22cd8c0641-var-run-kubernetes" seLinuxMountContext=""
Dec 12 15:19:31 crc kubenswrapper[5123]: I1212 15:19:31.609769 5123 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d19cb085-0c5b-4810-b654-ce7923221d90" volumeName="kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-etcd-serving-ca" seLinuxMountContext=""
Dec 12 15:19:31 crc kubenswrapper[5123]: I1212 15:19:31.609777 5123 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="2325ffef-9d5b-447f-b00e-3efc429acefe" volumeName="kubernetes.io/projected/2325ffef-9d5b-447f-b00e-3efc429acefe-kube-api-access-zg8nc" seLinuxMountContext=""
Dec 12 15:19:31 crc kubenswrapper[5123]: I1212 15:19:31.609787 5123 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="301e1965-1754-483d-b6cc-bfae7038bbca" volumeName="kubernetes.io/projected/301e1965-1754-483d-b6cc-bfae7038bbca-kube-api-access-7jjkz" seLinuxMountContext=""
Dec 12 15:19:31 crc kubenswrapper[5123]: I1212 15:19:31.609796 5123 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d45be74c-0d98-4d18-90e4-f7ef1b6daaf7" volumeName="kubernetes.io/empty-dir/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-tmp" seLinuxMountContext=""
Dec 12 15:19:31 crc kubenswrapper[5123]: I1212 15:19:31.609805 5123 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="2325ffef-9d5b-447f-b00e-3efc429acefe" volumeName="kubernetes.io/secret/2325ffef-9d5b-447f-b00e-3efc429acefe-serving-cert" seLinuxMountContext=""
Dec 12 15:19:31 crc kubenswrapper[5123]: I1212 15:19:31.609814 5123 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b4750666-1362-4001-abd0-6f89964cc621" volumeName="kubernetes.io/projected/b4750666-1362-4001-abd0-6f89964cc621-kube-api-access-twvbl" seLinuxMountContext=""
Dec 12 15:19:31 crc kubenswrapper[5123]: I1212 15:19:31.609825 5123 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c5f2bfad-70f6-4185-a3d9-81ce12720767" volumeName="kubernetes.io/secret/c5f2bfad-70f6-4185-a3d9-81ce12720767-serving-cert" seLinuxMountContext=""
Dec 12 15:19:31 crc kubenswrapper[5123]: I1212 15:19:31.609839 5123 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c491984c-7d4b-44aa-8c1e-d7974424fa47" volumeName="kubernetes.io/configmap/c491984c-7d4b-44aa-8c1e-d7974424fa47-config" seLinuxMountContext=""
Dec 12 15:19:31 crc kubenswrapper[5123]: I1212 15:19:31.609851 5123 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="736c54fe-349c-4bb9-870a-d1c1d1c03831" volumeName="kubernetes.io/empty-dir/736c54fe-349c-4bb9-870a-d1c1d1c03831-tmp" seLinuxMountContext=""
Dec 12 15:19:31 crc kubenswrapper[5123]: I1212 15:19:31.609861 5123 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="18f80adb-c1c3-49ba-8ee4-932c851d3897" volumeName="kubernetes.io/projected/18f80adb-c1c3-49ba-8ee4-932c851d3897-kube-api-access-wbmqg" seLinuxMountContext=""
Dec 12 15:19:31 crc kubenswrapper[5123]: I1212 15:19:31.609870 5123 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="584e1f4a-8205-47d7-8efb-3afc6017c4c9" volumeName="kubernetes.io/projected/584e1f4a-8205-47d7-8efb-3afc6017c4c9-kube-api-access-tknt7" seLinuxMountContext=""
Dec 12 15:19:31 crc kubenswrapper[5123]: I1212 15:19:31.609879 5123 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="81e39f7b-62e4-4fc9-992a-6535ce127a02" volumeName="kubernetes.io/projected/81e39f7b-62e4-4fc9-992a-6535ce127a02-kube-api-access-pllx6" seLinuxMountContext=""
Dec 12 15:19:31 crc kubenswrapper[5123]: I1212 15:19:31.609890 5123 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a208c9c2-333b-4b4a-be0d-bc32ec38a821" volumeName="kubernetes.io/projected/a208c9c2-333b-4b4a-be0d-bc32ec38a821-kube-api-access-26xrl" seLinuxMountContext=""
Dec 12 15:19:31 crc kubenswrapper[5123]: I1212 15:19:31.609908 5123 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a208c9c2-333b-4b4a-be0d-bc32ec38a821" volumeName="kubernetes.io/secret/a208c9c2-333b-4b4a-be0d-bc32ec38a821-package-server-manager-serving-cert" seLinuxMountContext=""
Dec 12 15:19:31 crc kubenswrapper[5123]: I1212 15:19:31.609921 5123 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="18f80adb-c1c3-49ba-8ee4-932c851d3897" volumeName="kubernetes.io/configmap/18f80adb-c1c3-49ba-8ee4-932c851d3897-service-ca-bundle" seLinuxMountContext=""
Dec 12 15:19:31 crc kubenswrapper[5123]: I1212 15:19:31.609933 5123 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01080b46-74f1-4191-8755-5152a57b3b25" volumeName="kubernetes.io/configmap/01080b46-74f1-4191-8755-5152a57b3b25-config" seLinuxMountContext=""
Dec 12 15:19:31 crc kubenswrapper[5123]: I1212 15:19:31.609951 5123 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01080b46-74f1-4191-8755-5152a57b3b25" volumeName="kubernetes.io/secret/01080b46-74f1-4191-8755-5152a57b3b25-serving-cert" seLinuxMountContext=""
Dec 12 15:19:31 crc kubenswrapper[5123]: I1212 15:19:31.609961 5123 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09cfa50b-4138-4585-a53e-64dd3ab73335" volumeName="kubernetes.io/projected/09cfa50b-4138-4585-a53e-64dd3ab73335-kube-api-access-zsb9b" seLinuxMountContext=""
Dec 12 15:19:31 crc kubenswrapper[5123]: I1212 15:19:31.609972 5123 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0dd0fbac-8c0d-4228-8faa-abbeedabf7db" volumeName="kubernetes.io/projected/0dd0fbac-8c0d-4228-8faa-abbeedabf7db-kube-api-access-q4smf" seLinuxMountContext=""
Dec 12 15:19:31 crc kubenswrapper[5123]: I1212 15:19:31.609984 5123 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="2325ffef-9d5b-447f-b00e-3efc429acefe" volumeName="kubernetes.io/configmap/2325ffef-9d5b-447f-b00e-3efc429acefe-trusted-ca" seLinuxMountContext=""
Dec 12 15:19:31 crc kubenswrapper[5123]: I1212 15:19:31.609996 5123 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="567683bd-0efc-4f21-b076-e28559628404" volumeName="kubernetes.io/configmap/567683bd-0efc-4f21-b076-e28559628404-etcd-ca" seLinuxMountContext=""
Dec 12 15:19:31 crc kubenswrapper[5123]: I1212 15:19:31.610006 5123 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ee8fbd3-1f81-4666-96da-5afc70819f1a" volumeName="kubernetes.io/secret/6ee8fbd3-1f81-4666-96da-5afc70819f1a-samples-operator-tls" seLinuxMountContext=""
Dec 12 15:19:31 crc kubenswrapper[5123]: I1212 15:19:31.610017 5123 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="869851b9-7ffb-4af0-b166-1d8aa40a5f80" volumeName="kubernetes.io/configmap/869851b9-7ffb-4af0-b166-1d8aa40a5f80-cni-binary-copy" seLinuxMountContext=""
Dec 12 15:19:31 crc kubenswrapper[5123]: I1212 15:19:31.610116 5123 manager.go:324] Recovery completed
Dec 12 15:19:31 crc kubenswrapper[5123]: I1212 15:19:31.610416 5123 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6077b63e-53a2-4f96-9d56-1ce0324e4913" volumeName="kubernetes.io/empty-dir/6077b63e-53a2-4f96-9d56-1ce0324e4913-tmp-dir" seLinuxMountContext=""
Dec 12 15:19:31 crc kubenswrapper[5123]: I1212 15:19:31.610437 5123 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6edfcf45-925b-4eff-b940-95b6fc0b85d4" volumeName="kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-session" seLinuxMountContext=""
Dec 12 15:19:31 crc kubenswrapper[5123]: I1212 15:19:31.610446 5123 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="92dfbade-90b6-4169-8c07-72cff7f2c82b" volumeName="kubernetes.io/secret/92dfbade-90b6-4169-8c07-72cff7f2c82b-metrics-tls" seLinuxMountContext=""
Dec 12 15:19:31 crc kubenswrapper[5123]: I1212 15:19:31.610457 5123 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a555ff2e-0be6-46d5-897d-863bb92ae2b3" volumeName="kubernetes.io/empty-dir/a555ff2e-0be6-46d5-897d-863bb92ae2b3-tmp" seLinuxMountContext=""
Dec 12 15:19:31 crc kubenswrapper[5123]: I1212 15:19:31.610465 5123 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a555ff2e-0be6-46d5-897d-863bb92ae2b3" volumeName="kubernetes.io/secret/a555ff2e-0be6-46d5-897d-863bb92ae2b3-serving-cert" seLinuxMountContext=""
Dec 12 15:19:31 crc kubenswrapper[5123]: I1212 15:19:31.610474 5123 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c491984c-7d4b-44aa-8c1e-d7974424fa47" volumeName="kubernetes.io/projected/c491984c-7d4b-44aa-8c1e-d7974424fa47-kube-api-access-9vsz9" seLinuxMountContext=""
Dec 12 15:19:31 crc kubenswrapper[5123]: I1212 15:19:31.610482 5123 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d19cb085-0c5b-4810-b654-ce7923221d90" volumeName="kubernetes.io/secret/d19cb085-0c5b-4810-b654-ce7923221d90-encryption-config" seLinuxMountContext=""
Dec 12 15:19:31 crc kubenswrapper[5123]: I1212 15:19:31.610492 5123 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09cfa50b-4138-4585-a53e-64dd3ab73335" volumeName="kubernetes.io/secret/09cfa50b-4138-4585-a53e-64dd3ab73335-serving-cert" seLinuxMountContext=""
Dec 12 15:19:31 crc kubenswrapper[5123]: I1212 15:19:31.610501 5123 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0effdbcf-dd7d-404d-9d48-77536d665a5d" volumeName="kubernetes.io/projected/0effdbcf-dd7d-404d-9d48-77536d665a5d-kube-api-access-mfzkj" seLinuxMountContext=""
Dec 12 15:19:31 crc kubenswrapper[5123]: I1212 15:19:31.610509 5123 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d565531a-ff86-4608-9d19-767de01ac31b" volumeName="kubernetes.io/secret/d565531a-ff86-4608-9d19-767de01ac31b-proxy-tls" seLinuxMountContext=""
Dec 12 15:19:31 crc kubenswrapper[5123]: I1212 15:19:31.610517 5123 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fc8db2c7-859d-47b3-a900-2bd0c0b2973b" volumeName="kubernetes.io/configmap/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-auth-proxy-config" seLinuxMountContext=""
Dec 12 15:19:31 crc kubenswrapper[5123]: I1212 15:19:31.610527 5123 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca" volumeName="kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-service-ca" seLinuxMountContext=""
Dec 12 15:19:31 crc kubenswrapper[5123]: I1212 15:19:31.610536 5123 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f559dfa3-3917-43a2-97f6-61ddfda10e93" volumeName="kubernetes.io/secret/f559dfa3-3917-43a2-97f6-61ddfda10e93-etcd-client" seLinuxMountContext=""
Dec 12 15:19:31 crc kubenswrapper[5123]: I1212 15:19:31.610543 5123 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f65c0ac1-8bca-454d-a2e6-e35cb418beac" volumeName="kubernetes.io/projected/f65c0ac1-8bca-454d-a2e6-e35cb418beac-kube-api-access" seLinuxMountContext=""
Dec 12 15:19:31 crc kubenswrapper[5123]: I1212 15:19:31.610552 5123 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="34177974-8d82-49d2-a763-391d0df3bbd8" volumeName="kubernetes.io/secret/34177974-8d82-49d2-a763-391d0df3bbd8-metrics-tls" seLinuxMountContext=""
Dec 12 15:19:31 crc kubenswrapper[5123]: I1212 15:19:31.610560 5123 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7fcc6409-8a0f-44c3-89e7-5aecd7610f8a" volumeName="kubernetes.io/secret/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-srv-cert" seLinuxMountContext=""
Dec 12 15:19:31 crc kubenswrapper[5123]: I1212 15:19:31.610567 5123 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9f71a554-e414-4bc3-96d2-674060397afe" volumeName="kubernetes.io/secret/9f71a554-e414-4bc3-96d2-674060397afe-metrics-tls" seLinuxMountContext=""
Dec 12 15:19:31 crc kubenswrapper[5123]: I1212 15:19:31.610576 5123 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="af33e427-6803-48c2-a76a-dd9deb7cbf9a" volumeName="kubernetes.io/configmap/af33e427-6803-48c2-a76a-dd9deb7cbf9a-ovnkube-config" seLinuxMountContext=""
Dec 12 15:19:31 crc kubenswrapper[5123]: I1212 15:19:31.610584 5123 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="cc85e424-18b2-4924-920b-bd291a8c4b01" volumeName="kubernetes.io/empty-dir/cc85e424-18b2-4924-920b-bd291a8c4b01-utilities" seLinuxMountContext=""
Dec 12 15:19:31 crc kubenswrapper[5123]: I1212 15:19:31.610592 5123 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3a14caf222afb62aaabdc47808b6f944" volumeName="kubernetes.io/empty-dir/3a14caf222afb62aaabdc47808b6f944-tmp-dir" seLinuxMountContext=""
Dec 12 15:19:31 crc kubenswrapper[5123]: I1212 15:19:31.610602 5123 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca" volumeName="kubernetes.io/projected/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-kube-api-access-l9stx" seLinuxMountContext=""
Dec 12 15:19:31 crc kubenswrapper[5123]: I1212 15:19:31.610618 5123 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ee8fbd3-1f81-4666-96da-5afc70819f1a" volumeName="kubernetes.io/projected/6ee8fbd3-1f81-4666-96da-5afc70819f1a-kube-api-access-d4tqq" seLinuxMountContext=""
Dec 12 15:19:31 crc kubenswrapper[5123]: I1212 15:19:31.610635 5123 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a7a88189-c967-4640-879e-27665747f20c" volumeName="kubernetes.io/secret/a7a88189-c967-4640-879e-27665747f20c-webhook-cert" seLinuxMountContext=""
Dec 12 15:19:31 crc kubenswrapper[5123]: I1212 15:19:31.610669 5123 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="af33e427-6803-48c2-a76a-dd9deb7cbf9a" volumeName="kubernetes.io/projected/af33e427-6803-48c2-a76a-dd9deb7cbf9a-kube-api-access-z5rsr" seLinuxMountContext=""
Dec 12 15:19:31 crc kubenswrapper[5123]: I1212 15:19:31.610681 5123 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b4750666-1362-4001-abd0-6f89964cc621" volumeName="kubernetes.io/configmap/b4750666-1362-4001-abd0-6f89964cc621-mcc-auth-proxy-config" seLinuxMountContext=""
Dec 12 15:19:31 crc kubenswrapper[5123]: I1212 15:19:31.610690 5123 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b638b8f4bb0070e40528db779baf6a2" volumeName="kubernetes.io/empty-dir/0b638b8f4bb0070e40528db779baf6a2-tmp" seLinuxMountContext=""
Dec 12 15:19:31 crc kubenswrapper[5123]: I1212 15:19:31.610698 5123 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e093be35-bb62-4843-b2e8-094545761610" volumeName="kubernetes.io/projected/e093be35-bb62-4843-b2e8-094545761610-kube-api-access-pddnv" seLinuxMountContext=""
Dec 12 15:19:31 crc kubenswrapper[5123]: I1212 15:19:31.610707 5123 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9e9b5059-1b3e-4067-a63d-2952cbe863af" volumeName="kubernetes.io/secret/9e9b5059-1b3e-4067-a63d-2952cbe863af-installation-pull-secrets" seLinuxMountContext=""
Dec 12 15:19:31 crc kubenswrapper[5123]: I1212 15:19:31.610715 5123 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fc4541ce-7789-4670-bc75-5c2868e52ce0" volumeName="kubernetes.io/projected/fc4541ce-7789-4670-bc75-5c2868e52ce0-kube-api-access-8nt2j" seLinuxMountContext=""
Dec 12 15:19:31 crc kubenswrapper[5123]: I1212 15:19:31.610724 5123 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fc8db2c7-859d-47b3-a900-2bd0c0b2973b" volumeName="kubernetes.io/projected/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-kube-api-access-hckvg" seLinuxMountContext=""
Dec 12 15:19:31 crc kubenswrapper[5123]: I1212 15:19:31.610734 5123 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fc8db2c7-859d-47b3-a900-2bd0c0b2973b" volumeName="kubernetes.io/secret/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-machine-approver-tls" seLinuxMountContext=""
Dec 12 15:19:31 crc kubenswrapper[5123]: I1212 15:19:31.610742 5123 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="301e1965-1754-483d-b6cc-bfae7038bbca" volumeName="kubernetes.io/secret/301e1965-1754-483d-b6cc-bfae7038bbca-profile-collector-cert" seLinuxMountContext=""
Dec 12 15:19:31 crc kubenswrapper[5123]: I1212 15:19:31.610750 5123 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a555ff2e-0be6-46d5-897d-863bb92ae2b3" volumeName="kubernetes.io/configmap/a555ff2e-0be6-46d5-897d-863bb92ae2b3-proxy-ca-bundles" seLinuxMountContext=""
Dec 12 15:19:31 crc kubenswrapper[5123]: I1212 15:19:31.610758 5123 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="af33e427-6803-48c2-a76a-dd9deb7cbf9a" volumeName="kubernetes.io/configmap/af33e427-6803-48c2-a76a-dd9deb7cbf9a-env-overrides" seLinuxMountContext=""
Dec 12 15:19:31 crc kubenswrapper[5123]: I1212 15:19:31.610766 5123 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f559dfa3-3917-43a2-97f6-61ddfda10e93" volumeName="kubernetes.io/secret/f559dfa3-3917-43a2-97f6-61ddfda10e93-encryption-config" seLinuxMountContext=""
Dec 12 15:19:31 crc kubenswrapper[5123]: I1212 15:19:31.610774 5123 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09cfa50b-4138-4585-a53e-64dd3ab73335" volumeName="kubernetes.io/configmap/09cfa50b-4138-4585-a53e-64dd3ab73335-config" seLinuxMountContext=""
Dec 12 15:19:31 crc kubenswrapper[5123]: I1212 15:19:31.610782 5123 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9f71a554-e414-4bc3-96d2-674060397afe" volumeName="kubernetes.io/projected/9f71a554-e414-4bc3-96d2-674060397afe-kube-api-access-ftwb6" seLinuxMountContext=""
Dec 12 15:19:31 crc kubenswrapper[5123]: I1212 15:19:31.610790 5123 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a" volumeName="kubernetes.io/projected/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-kube-api-access-qqbfk" seLinuxMountContext=""
Dec 12 15:19:31 crc kubenswrapper[5123]: I1212 15:19:31.610812 5123 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d19cb085-0c5b-4810-b654-ce7923221d90" volumeName="kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-trusted-ca-bundle" seLinuxMountContext=""
Dec 12 15:19:31 crc kubenswrapper[5123]: I1212 15:19:31.610825 5123 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="301e1965-1754-483d-b6cc-bfae7038bbca" volumeName="kubernetes.io/secret/301e1965-1754-483d-b6cc-bfae7038bbca-srv-cert"
seLinuxMountContext="" Dec 12 15:19:31 crc kubenswrapper[5123]: I1212 15:19:31.610837 5123 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="92dfbade-90b6-4169-8c07-72cff7f2c82b" volumeName="kubernetes.io/empty-dir/92dfbade-90b6-4169-8c07-72cff7f2c82b-tmp-dir" seLinuxMountContext="" Dec 12 15:19:31 crc kubenswrapper[5123]: I1212 15:19:31.610851 5123 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d45be74c-0d98-4d18-90e4-f7ef1b6daaf7" volumeName="kubernetes.io/projected/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-kube-api-access-dztfv" seLinuxMountContext="" Dec 12 15:19:31 crc kubenswrapper[5123]: I1212 15:19:31.610863 5123 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="18f80adb-c1c3-49ba-8ee4-932c851d3897" volumeName="kubernetes.io/secret/18f80adb-c1c3-49ba-8ee4-932c851d3897-default-certificate" seLinuxMountContext="" Dec 12 15:19:31 crc kubenswrapper[5123]: I1212 15:19:31.610872 5123 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7df94c10-441d-4386-93a6-6730fb7bcde0" volumeName="kubernetes.io/configmap/7df94c10-441d-4386-93a6-6730fb7bcde0-env-overrides" seLinuxMountContext="" Dec 12 15:19:31 crc kubenswrapper[5123]: I1212 15:19:31.610880 5123 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9e9b5059-1b3e-4067-a63d-2952cbe863af" volumeName="kubernetes.io/configmap/9e9b5059-1b3e-4067-a63d-2952cbe863af-trusted-ca" seLinuxMountContext="" Dec 12 15:19:31 crc kubenswrapper[5123]: I1212 15:19:31.610890 5123 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d19cb085-0c5b-4810-b654-ce7923221d90" volumeName="kubernetes.io/secret/d19cb085-0c5b-4810-b654-ce7923221d90-serving-cert" seLinuxMountContext="" Dec 12 15:19:31 crc kubenswrapper[5123]: I1212 15:19:31.610899 5123 
reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="149b3c48-e17c-4a66-a835-d86dabf6ff13" volumeName="kubernetes.io/projected/149b3c48-e17c-4a66-a835-d86dabf6ff13-kube-api-access-wj4qr" seLinuxMountContext="" Dec 12 15:19:31 crc kubenswrapper[5123]: I1212 15:19:31.610908 5123 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="567683bd-0efc-4f21-b076-e28559628404" volumeName="kubernetes.io/empty-dir/567683bd-0efc-4f21-b076-e28559628404-tmp-dir" seLinuxMountContext="" Dec 12 15:19:31 crc kubenswrapper[5123]: I1212 15:19:31.610930 5123 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a555ff2e-0be6-46d5-897d-863bb92ae2b3" volumeName="kubernetes.io/configmap/a555ff2e-0be6-46d5-897d-863bb92ae2b3-client-ca" seLinuxMountContext="" Dec 12 15:19:31 crc kubenswrapper[5123]: I1212 15:19:31.610970 5123 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="cc85e424-18b2-4924-920b-bd291a8c4b01" volumeName="kubernetes.io/empty-dir/cc85e424-18b2-4924-920b-bd291a8c4b01-catalog-content" seLinuxMountContext="" Dec 12 15:19:31 crc kubenswrapper[5123]: I1212 15:19:31.610981 5123 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="cc85e424-18b2-4924-920b-bd291a8c4b01" volumeName="kubernetes.io/projected/cc85e424-18b2-4924-920b-bd291a8c4b01-kube-api-access-xfp5s" seLinuxMountContext="" Dec 12 15:19:31 crc kubenswrapper[5123]: I1212 15:19:31.610990 5123 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7df94c10-441d-4386-93a6-6730fb7bcde0" volumeName="kubernetes.io/projected/7df94c10-441d-4386-93a6-6730fb7bcde0-kube-api-access-nmmzf" seLinuxMountContext="" Dec 12 15:19:31 crc kubenswrapper[5123]: I1212 15:19:31.611005 5123 reconstruct.go:130] "Volume is marked as uncertain and added into the actual 
state" pod="" podName="7fcc6409-8a0f-44c3-89e7-5aecd7610f8a" volumeName="kubernetes.io/projected/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-kube-api-access-ddlk9" seLinuxMountContext="" Dec 12 15:19:31 crc kubenswrapper[5123]: I1212 15:19:31.611017 5123 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20ce4d18-fe25-4696-ad7c-1bd2d6200a3e" volumeName="kubernetes.io/empty-dir/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-tmp" seLinuxMountContext="" Dec 12 15:19:31 crc kubenswrapper[5123]: I1212 15:19:31.611025 5123 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20ce4d18-fe25-4696-ad7c-1bd2d6200a3e" volumeName="kubernetes.io/empty-dir/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-ca-trust-extracted-pem" seLinuxMountContext="" Dec 12 15:19:31 crc kubenswrapper[5123]: I1212 15:19:31.611034 5123 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="567683bd-0efc-4f21-b076-e28559628404" volumeName="kubernetes.io/secret/567683bd-0efc-4f21-b076-e28559628404-etcd-client" seLinuxMountContext="" Dec 12 15:19:31 crc kubenswrapper[5123]: I1212 15:19:31.611651 5123 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7df94c10-441d-4386-93a6-6730fb7bcde0" volumeName="kubernetes.io/secret/7df94c10-441d-4386-93a6-6730fb7bcde0-ovn-control-plane-metrics-cert" seLinuxMountContext="" Dec 12 15:19:31 crc kubenswrapper[5123]: I1212 15:19:31.611735 5123 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="593a3561-7760-45c5-8f91-5aaef7475d0f" volumeName="kubernetes.io/secret/593a3561-7760-45c5-8f91-5aaef7475d0f-certs" seLinuxMountContext="" Dec 12 15:19:31 crc kubenswrapper[5123]: I1212 15:19:31.611756 5123 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9e9b5059-1b3e-4067-a63d-2952cbe863af" 
volumeName="kubernetes.io/projected/9e9b5059-1b3e-4067-a63d-2952cbe863af-registry-tls" seLinuxMountContext="" Dec 12 15:19:31 crc kubenswrapper[5123]: I1212 15:19:31.611777 5123 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a7a88189-c967-4640-879e-27665747f20c" volumeName="kubernetes.io/secret/a7a88189-c967-4640-879e-27665747f20c-apiservice-cert" seLinuxMountContext="" Dec 12 15:19:31 crc kubenswrapper[5123]: I1212 15:19:31.611795 5123 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d19cb085-0c5b-4810-b654-ce7923221d90" volumeName="kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-config" seLinuxMountContext="" Dec 12 15:19:31 crc kubenswrapper[5123]: I1212 15:19:31.611812 5123 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a" volumeName="kubernetes.io/empty-dir/71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a-catalog-content" seLinuxMountContext="" Dec 12 15:19:31 crc kubenswrapper[5123]: I1212 15:19:31.611831 5123 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a7a88189-c967-4640-879e-27665747f20c" volumeName="kubernetes.io/projected/a7a88189-c967-4640-879e-27665747f20c-kube-api-access-8nspp" seLinuxMountContext="" Dec 12 15:19:31 crc kubenswrapper[5123]: I1212 15:19:31.611848 5123 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="567683bd-0efc-4f21-b076-e28559628404" volumeName="kubernetes.io/configmap/567683bd-0efc-4f21-b076-e28559628404-config" seLinuxMountContext="" Dec 12 15:19:31 crc kubenswrapper[5123]: I1212 15:19:31.611864 5123 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="584e1f4a-8205-47d7-8efb-3afc6017c4c9" volumeName="kubernetes.io/empty-dir/584e1f4a-8205-47d7-8efb-3afc6017c4c9-catalog-content" 
seLinuxMountContext="" Dec 12 15:19:31 crc kubenswrapper[5123]: I1212 15:19:31.611884 5123 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141" volumeName="kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert" seLinuxMountContext="" Dec 12 15:19:31 crc kubenswrapper[5123]: I1212 15:19:31.611903 5123 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9f0bc7fcb0822a2c13eb2d22cd8c0641" volumeName="kubernetes.io/empty-dir/9f0bc7fcb0822a2c13eb2d22cd8c0641-tmp-dir" seLinuxMountContext="" Dec 12 15:19:31 crc kubenswrapper[5123]: I1212 15:19:31.611921 5123 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="17b87002-b798-480a-8e17-83053d698239" volumeName="kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b" seLinuxMountContext="" Dec 12 15:19:31 crc kubenswrapper[5123]: I1212 15:19:31.611945 5123 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca" volumeName="kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-trusted-ca-bundle" seLinuxMountContext="" Dec 12 15:19:31 crc kubenswrapper[5123]: I1212 15:19:31.611962 5123 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6edfcf45-925b-4eff-b940-95b6fc0b85d4" volumeName="kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-template-login" seLinuxMountContext="" Dec 12 15:19:31 crc kubenswrapper[5123]: I1212 15:19:31.611979 5123 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d565531a-ff86-4608-9d19-767de01ac31b" volumeName="kubernetes.io/projected/d565531a-ff86-4608-9d19-767de01ac31b-kube-api-access-99zj9" seLinuxMountContext="" Dec 12 15:19:31 crc 
kubenswrapper[5123]: I1212 15:19:31.611999 5123 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="567683bd-0efc-4f21-b076-e28559628404" volumeName="kubernetes.io/projected/567683bd-0efc-4f21-b076-e28559628404-kube-api-access-m26jq" seLinuxMountContext="" Dec 12 15:19:31 crc kubenswrapper[5123]: I1212 15:19:31.612031 5123 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6077b63e-53a2-4f96-9d56-1ce0324e4913" volumeName="kubernetes.io/secret/6077b63e-53a2-4f96-9d56-1ce0324e4913-metrics-tls" seLinuxMountContext="" Dec 12 15:19:31 crc kubenswrapper[5123]: I1212 15:19:31.612049 5123 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff" volumeName="kubernetes.io/empty-dir/9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff-catalog-content" seLinuxMountContext="" Dec 12 15:19:31 crc kubenswrapper[5123]: I1212 15:19:31.612064 5123 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ce090a97-9ab6-4c40-a719-64ff2acd9778" volumeName="kubernetes.io/secret/ce090a97-9ab6-4c40-a719-64ff2acd9778-signing-key" seLinuxMountContext="" Dec 12 15:19:31 crc kubenswrapper[5123]: I1212 15:19:31.612081 5123 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="428b39f5-eb1c-4f65-b7a4-eeb6e84860cc" volumeName="kubernetes.io/configmap/428b39f5-eb1c-4f65-b7a4-eeb6e84860cc-iptables-alerter-script" seLinuxMountContext="" Dec 12 15:19:31 crc kubenswrapper[5123]: I1212 15:19:31.612098 5123 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="92dfbade-90b6-4169-8c07-72cff7f2c82b" volumeName="kubernetes.io/configmap/92dfbade-90b6-4169-8c07-72cff7f2c82b-config-volume" seLinuxMountContext="" Dec 12 15:19:31 crc kubenswrapper[5123]: I1212 15:19:31.612113 5123 reconstruct.go:130] "Volume is marked 
as uncertain and added into the actual state" pod="" podName="9e9b5059-1b3e-4067-a63d-2952cbe863af" volumeName="kubernetes.io/empty-dir/9e9b5059-1b3e-4067-a63d-2952cbe863af-ca-trust-extracted" seLinuxMountContext="" Dec 12 15:19:31 crc kubenswrapper[5123]: I1212 15:19:31.612131 5123 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9" volumeName="kubernetes.io/projected/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-kube-api-access-5lcfw" seLinuxMountContext="" Dec 12 15:19:31 crc kubenswrapper[5123]: I1212 15:19:31.612148 5123 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f7e2c886-118e-43bb-bef1-c78134de392b" volumeName="kubernetes.io/empty-dir/f7e2c886-118e-43bb-bef1-c78134de392b-tmp-dir" seLinuxMountContext="" Dec 12 15:19:31 crc kubenswrapper[5123]: I1212 15:19:31.612163 5123 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6077b63e-53a2-4f96-9d56-1ce0324e4913" volumeName="kubernetes.io/projected/6077b63e-53a2-4f96-9d56-1ce0324e4913-kube-api-access-zth6t" seLinuxMountContext="" Dec 12 15:19:31 crc kubenswrapper[5123]: I1212 15:19:31.612179 5123 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141" volumeName="kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf" seLinuxMountContext="" Dec 12 15:19:31 crc kubenswrapper[5123]: I1212 15:19:31.612194 5123 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6edfcf45-925b-4eff-b940-95b6fc0b85d4" volumeName="kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-cliconfig" seLinuxMountContext="" Dec 12 15:19:31 crc kubenswrapper[5123]: I1212 15:19:31.612210 5123 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="6edfcf45-925b-4eff-b940-95b6fc0b85d4" volumeName="kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-ocp-branding-template" seLinuxMountContext="" Dec 12 15:19:31 crc kubenswrapper[5123]: I1212 15:19:31.612318 5123 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="869851b9-7ffb-4af0-b166-1d8aa40a5f80" volumeName="kubernetes.io/configmap/869851b9-7ffb-4af0-b166-1d8aa40a5f80-cni-sysctl-allowlist" seLinuxMountContext="" Dec 12 15:19:31 crc kubenswrapper[5123]: I1212 15:19:31.612340 5123 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="869851b9-7ffb-4af0-b166-1d8aa40a5f80" volumeName="kubernetes.io/projected/869851b9-7ffb-4af0-b166-1d8aa40a5f80-kube-api-access-mjwtd" seLinuxMountContext="" Dec 12 15:19:31 crc kubenswrapper[5123]: I1212 15:19:31.612356 5123 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6edfcf45-925b-4eff-b940-95b6fc0b85d4" volumeName="kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-service-ca" seLinuxMountContext="" Dec 12 15:19:31 crc kubenswrapper[5123]: I1212 15:19:31.612388 5123 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c5f2bfad-70f6-4185-a3d9-81ce12720767" volumeName="kubernetes.io/empty-dir/c5f2bfad-70f6-4185-a3d9-81ce12720767-tmp-dir" seLinuxMountContext="" Dec 12 15:19:31 crc kubenswrapper[5123]: I1212 15:19:31.612616 5123 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d19cb085-0c5b-4810-b654-ce7923221d90" volumeName="kubernetes.io/secret/d19cb085-0c5b-4810-b654-ce7923221d90-etcd-client" seLinuxMountContext="" Dec 12 15:19:31 crc kubenswrapper[5123]: I1212 15:19:31.612637 5123 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f65c0ac1-8bca-454d-a2e6-e35cb418beac" 
volumeName="kubernetes.io/configmap/f65c0ac1-8bca-454d-a2e6-e35cb418beac-config" seLinuxMountContext="" Dec 12 15:19:31 crc kubenswrapper[5123]: I1212 15:19:31.612654 5123 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6edfcf45-925b-4eff-b940-95b6fc0b85d4" volumeName="kubernetes.io/projected/6edfcf45-925b-4eff-b940-95b6fc0b85d4-kube-api-access-8nb9c" seLinuxMountContext="" Dec 12 15:19:31 crc kubenswrapper[5123]: I1212 15:19:31.612677 5123 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7afa918d-be67-40a6-803c-d3b0ae99d815" volumeName="kubernetes.io/secret/7afa918d-be67-40a6-803c-d3b0ae99d815-serving-cert" seLinuxMountContext="" Dec 12 15:19:31 crc kubenswrapper[5123]: I1212 15:19:31.612694 5123 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7fcc6409-8a0f-44c3-89e7-5aecd7610f8a" volumeName="kubernetes.io/secret/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-profile-collector-cert" seLinuxMountContext="" Dec 12 15:19:31 crc kubenswrapper[5123]: I1212 15:19:31.612710 5123 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="81e39f7b-62e4-4fc9-992a-6535ce127a02" volumeName="kubernetes.io/configmap/81e39f7b-62e4-4fc9-992a-6535ce127a02-cni-binary-copy" seLinuxMountContext="" Dec 12 15:19:31 crc kubenswrapper[5123]: I1212 15:19:31.612725 5123 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="92dfbade-90b6-4169-8c07-72cff7f2c82b" volumeName="kubernetes.io/projected/92dfbade-90b6-4169-8c07-72cff7f2c82b-kube-api-access-4g8ts" seLinuxMountContext="" Dec 12 15:19:31 crc kubenswrapper[5123]: I1212 15:19:31.612748 5123 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a52afe44-fb37-46ed-a1f8-bf39727a3cbe" volumeName="kubernetes.io/secret/a52afe44-fb37-46ed-a1f8-bf39727a3cbe-cert" 
seLinuxMountContext="" Dec 12 15:19:31 crc kubenswrapper[5123]: I1212 15:19:31.612768 5123 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b605f283-6f2e-42da-a838-54421690f7d0" volumeName="kubernetes.io/projected/b605f283-6f2e-42da-a838-54421690f7d0-kube-api-access-6rmnv" seLinuxMountContext="" Dec 12 15:19:31 crc kubenswrapper[5123]: I1212 15:19:31.612785 5123 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d7e8f42f-dc0e-424b-bb56-5ec849834888" volumeName="kubernetes.io/projected/d7e8f42f-dc0e-424b-bb56-5ec849834888-kube-api-access" seLinuxMountContext="" Dec 12 15:19:31 crc kubenswrapper[5123]: I1212 15:19:31.612803 5123 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="567683bd-0efc-4f21-b076-e28559628404" volumeName="kubernetes.io/configmap/567683bd-0efc-4f21-b076-e28559628404-etcd-service-ca" seLinuxMountContext="" Dec 12 15:19:31 crc kubenswrapper[5123]: I1212 15:19:31.612839 5123 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="593a3561-7760-45c5-8f91-5aaef7475d0f" volumeName="kubernetes.io/secret/593a3561-7760-45c5-8f91-5aaef7475d0f-node-bootstrap-token" seLinuxMountContext="" Dec 12 15:19:31 crc kubenswrapper[5123]: I1212 15:19:31.612857 5123 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9e9b5059-1b3e-4067-a63d-2952cbe863af" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" seLinuxMountContext="" Dec 12 15:19:31 crc kubenswrapper[5123]: I1212 15:19:31.618780 5123 reconstruct.go:144] "Volume is marked device as uncertain and added into the actual state" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" 
deviceMountPath="/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/b1264ac67579ad07e7e9003054d44fe40dd55285a4b2f7dc74e48be1aee0868a/globalmount" Dec 12 15:19:31 crc kubenswrapper[5123]: I1212 15:19:31.618864 5123 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9f71a554-e414-4bc3-96d2-674060397afe" volumeName="kubernetes.io/configmap/9f71a554-e414-4bc3-96d2-674060397afe-trusted-ca" seLinuxMountContext="" Dec 12 15:19:31 crc kubenswrapper[5123]: I1212 15:19:31.618892 5123 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b4750666-1362-4001-abd0-6f89964cc621" volumeName="kubernetes.io/secret/b4750666-1362-4001-abd0-6f89964cc621-proxy-tls" seLinuxMountContext="" Dec 12 15:19:31 crc kubenswrapper[5123]: I1212 15:19:31.618904 5123 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c5f2bfad-70f6-4185-a3d9-81ce12720767" volumeName="kubernetes.io/configmap/c5f2bfad-70f6-4185-a3d9-81ce12720767-config" seLinuxMountContext="" Dec 12 15:19:31 crc kubenswrapper[5123]: I1212 15:19:31.618920 5123 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6edfcf45-925b-4eff-b940-95b6fc0b85d4" volumeName="kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-serving-cert" seLinuxMountContext="" Dec 12 15:19:31 crc kubenswrapper[5123]: I1212 15:19:31.618933 5123 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="16bdd140-dce1-464c-ab47-dd5798d1d256" volumeName="kubernetes.io/secret/16bdd140-dce1-464c-ab47-dd5798d1d256-serving-cert" seLinuxMountContext="" Dec 12 15:19:31 crc kubenswrapper[5123]: I1212 15:19:31.618949 5123 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="428b39f5-eb1c-4f65-b7a4-eeb6e84860cc" 
volumeName="kubernetes.io/projected/428b39f5-eb1c-4f65-b7a4-eeb6e84860cc-kube-api-access-dsgwk" seLinuxMountContext="" Dec 12 15:19:31 crc kubenswrapper[5123]: I1212 15:19:31.618963 5123 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="567683bd-0efc-4f21-b076-e28559628404" volumeName="kubernetes.io/secret/567683bd-0efc-4f21-b076-e28559628404-serving-cert" seLinuxMountContext="" Dec 12 15:19:31 crc kubenswrapper[5123]: I1212 15:19:31.618975 5123 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e1d2a42d-af1d-4054-9618-ab545e0ed8b7" volumeName="kubernetes.io/configmap/e1d2a42d-af1d-4054-9618-ab545e0ed8b7-mcd-auth-proxy-config" seLinuxMountContext="" Dec 12 15:19:31 crc kubenswrapper[5123]: I1212 15:19:31.618993 5123 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e1d2a42d-af1d-4054-9618-ab545e0ed8b7" volumeName="kubernetes.io/projected/e1d2a42d-af1d-4054-9618-ab545e0ed8b7-kube-api-access-9z4sw" seLinuxMountContext="" Dec 12 15:19:31 crc kubenswrapper[5123]: I1212 15:19:31.619007 5123 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f559dfa3-3917-43a2-97f6-61ddfda10e93" volumeName="kubernetes.io/secret/f559dfa3-3917-43a2-97f6-61ddfda10e93-serving-cert" seLinuxMountContext="" Dec 12 15:19:31 crc kubenswrapper[5123]: I1212 15:19:31.619025 5123 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="18f80adb-c1c3-49ba-8ee4-932c851d3897" volumeName="kubernetes.io/secret/18f80adb-c1c3-49ba-8ee4-932c851d3897-stats-auth" seLinuxMountContext="" Dec 12 15:19:31 crc kubenswrapper[5123]: I1212 15:19:31.619041 5123 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="af41de71-79cf-4590-bbe9-9e8b848862cb" 
volumeName="kubernetes.io/projected/af41de71-79cf-4590-bbe9-9e8b848862cb-kube-api-access-d7cps" seLinuxMountContext="" Dec 12 15:19:31 crc kubenswrapper[5123]: I1212 15:19:31.619059 5123 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f65c0ac1-8bca-454d-a2e6-e35cb418beac" volumeName="kubernetes.io/empty-dir/f65c0ac1-8bca-454d-a2e6-e35cb418beac-tmp-dir" seLinuxMountContext="" Dec 12 15:19:31 crc kubenswrapper[5123]: I1212 15:19:31.619079 5123 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca" volumeName="kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-console-config" seLinuxMountContext="" Dec 12 15:19:31 crc kubenswrapper[5123]: I1212 15:19:31.619097 5123 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a" volumeName="kubernetes.io/projected/71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a-kube-api-access-ks6v2" seLinuxMountContext="" Dec 12 15:19:31 crc kubenswrapper[5123]: I1212 15:19:31.619117 5123 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c491984c-7d4b-44aa-8c1e-d7974424fa47" volumeName="kubernetes.io/secret/c491984c-7d4b-44aa-8c1e-d7974424fa47-machine-api-operator-tls" seLinuxMountContext="" Dec 12 15:19:31 crc kubenswrapper[5123]: I1212 15:19:31.619130 5123 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d45be74c-0d98-4d18-90e4-f7ef1b6daaf7" volumeName="kubernetes.io/configmap/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-config" seLinuxMountContext="" Dec 12 15:19:31 crc kubenswrapper[5123]: I1212 15:19:31.619150 5123 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31fa8943-81cc-4750-a0b7-0fa9ab5af883" 
volumeName="kubernetes.io/projected/31fa8943-81cc-4750-a0b7-0fa9ab5af883-kube-api-access-grwfz" seLinuxMountContext="" Dec 12 15:19:31 crc kubenswrapper[5123]: I1212 15:19:31.619173 5123 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a555ff2e-0be6-46d5-897d-863bb92ae2b3" volumeName="kubernetes.io/configmap/a555ff2e-0be6-46d5-897d-863bb92ae2b3-config" seLinuxMountContext="" Dec 12 15:19:31 crc kubenswrapper[5123]: I1212 15:19:31.620153 5123 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a555ff2e-0be6-46d5-897d-863bb92ae2b3" volumeName="kubernetes.io/projected/a555ff2e-0be6-46d5-897d-863bb92ae2b3-kube-api-access-8pskd" seLinuxMountContext="" Dec 12 15:19:31 crc kubenswrapper[5123]: I1212 15:19:31.620192 5123 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="301e1965-1754-483d-b6cc-bfae7038bbca" volumeName="kubernetes.io/empty-dir/301e1965-1754-483d-b6cc-bfae7038bbca-tmpfs" seLinuxMountContext="" Dec 12 15:19:31 crc kubenswrapper[5123]: I1212 15:19:31.620308 5123 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="593a3561-7760-45c5-8f91-5aaef7475d0f" volumeName="kubernetes.io/projected/593a3561-7760-45c5-8f91-5aaef7475d0f-kube-api-access-sbc2l" seLinuxMountContext="" Dec 12 15:19:31 crc kubenswrapper[5123]: I1212 15:19:31.620332 5123 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20ce4d18-fe25-4696-ad7c-1bd2d6200a3e" volumeName="kubernetes.io/projected/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-kube-api-access-tkdh6" seLinuxMountContext="" Dec 12 15:19:31 crc kubenswrapper[5123]: I1212 15:19:31.620355 5123 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="149b3c48-e17c-4a66-a835-d86dabf6ff13" 
volumeName="kubernetes.io/empty-dir/149b3c48-e17c-4a66-a835-d86dabf6ff13-utilities" seLinuxMountContext="" Dec 12 15:19:31 crc kubenswrapper[5123]: I1212 15:19:31.620376 5123 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6edfcf45-925b-4eff-b940-95b6fc0b85d4" volumeName="kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-trusted-ca-bundle" seLinuxMountContext="" Dec 12 15:19:31 crc kubenswrapper[5123]: I1212 15:19:31.620399 5123 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ce090a97-9ab6-4c40-a719-64ff2acd9778" volumeName="kubernetes.io/projected/ce090a97-9ab6-4c40-a719-64ff2acd9778-kube-api-access-xnxbn" seLinuxMountContext="" Dec 12 15:19:31 crc kubenswrapper[5123]: I1212 15:19:31.620423 5123 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff" volumeName="kubernetes.io/projected/9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff-kube-api-access-xxfcv" seLinuxMountContext="" Dec 12 15:19:31 crc kubenswrapper[5123]: I1212 15:19:31.620440 5123 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c5f2bfad-70f6-4185-a3d9-81ce12720767" volumeName="kubernetes.io/projected/c5f2bfad-70f6-4185-a3d9-81ce12720767-kube-api-access" seLinuxMountContext="" Dec 12 15:19:31 crc kubenswrapper[5123]: I1212 15:19:31.620463 5123 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="16bdd140-dce1-464c-ab47-dd5798d1d256" volumeName="kubernetes.io/projected/16bdd140-dce1-464c-ab47-dd5798d1d256-kube-api-access-94l9h" seLinuxMountContext="" Dec 12 15:19:31 crc kubenswrapper[5123]: I1212 15:19:31.620480 5123 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="34177974-8d82-49d2-a763-391d0df3bbd8" 
volumeName="kubernetes.io/projected/34177974-8d82-49d2-a763-391d0df3bbd8-kube-api-access-m7xz2" seLinuxMountContext="" Dec 12 15:19:31 crc kubenswrapper[5123]: I1212 15:19:31.620506 5123 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="736c54fe-349c-4bb9-870a-d1c1d1c03831" volumeName="kubernetes.io/projected/736c54fe-349c-4bb9-870a-d1c1d1c03831-kube-api-access-6dmhf" seLinuxMountContext="" Dec 12 15:19:31 crc kubenswrapper[5123]: I1212 15:19:31.620527 5123 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f559dfa3-3917-43a2-97f6-61ddfda10e93" volumeName="kubernetes.io/configmap/f559dfa3-3917-43a2-97f6-61ddfda10e93-audit-policies" seLinuxMountContext="" Dec 12 15:19:31 crc kubenswrapper[5123]: I1212 15:19:31.620559 5123 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f559dfa3-3917-43a2-97f6-61ddfda10e93" volumeName="kubernetes.io/configmap/f559dfa3-3917-43a2-97f6-61ddfda10e93-etcd-serving-ca" seLinuxMountContext="" Dec 12 15:19:31 crc kubenswrapper[5123]: I1212 15:19:31.620583 5123 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9f0bc7fcb0822a2c13eb2d22cd8c0641" volumeName="kubernetes.io/empty-dir/9f0bc7fcb0822a2c13eb2d22cd8c0641-ca-trust-dir" seLinuxMountContext="" Dec 12 15:19:31 crc kubenswrapper[5123]: I1212 15:19:31.620602 5123 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b605f283-6f2e-42da-a838-54421690f7d0" volumeName="kubernetes.io/empty-dir/b605f283-6f2e-42da-a838-54421690f7d0-catalog-content" seLinuxMountContext="" Dec 12 15:19:31 crc kubenswrapper[5123]: I1212 15:19:31.620623 5123 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01080b46-74f1-4191-8755-5152a57b3b25" 
volumeName="kubernetes.io/projected/01080b46-74f1-4191-8755-5152a57b3b25-kube-api-access-w94wk" seLinuxMountContext="" Dec 12 15:19:31 crc kubenswrapper[5123]: I1212 15:19:31.620642 5123 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a7a88189-c967-4640-879e-27665747f20c" volumeName="kubernetes.io/empty-dir/a7a88189-c967-4640-879e-27665747f20c-tmpfs" seLinuxMountContext="" Dec 12 15:19:31 crc kubenswrapper[5123]: I1212 15:19:31.620661 5123 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fc4541ce-7789-4670-bc75-5c2868e52ce0" volumeName="kubernetes.io/secret/fc4541ce-7789-4670-bc75-5c2868e52ce0-webhook-cert" seLinuxMountContext="" Dec 12 15:19:31 crc kubenswrapper[5123]: I1212 15:19:31.620678 5123 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="16bdd140-dce1-464c-ab47-dd5798d1d256" volumeName="kubernetes.io/empty-dir/16bdd140-dce1-464c-ab47-dd5798d1d256-available-featuregates" seLinuxMountContext="" Dec 12 15:19:31 crc kubenswrapper[5123]: I1212 15:19:31.620694 5123 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7599e0b6-bddf-4def-b7f2-0b32206e8651" volumeName="kubernetes.io/secret/7599e0b6-bddf-4def-b7f2-0b32206e8651-serving-cert" seLinuxMountContext="" Dec 12 15:19:31 crc kubenswrapper[5123]: I1212 15:19:31.620718 5123 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="af33e427-6803-48c2-a76a-dd9deb7cbf9a" volumeName="kubernetes.io/configmap/af33e427-6803-48c2-a76a-dd9deb7cbf9a-ovnkube-script-lib" seLinuxMountContext="" Dec 12 15:19:31 crc kubenswrapper[5123]: I1212 15:19:31.620732 5123 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b605f283-6f2e-42da-a838-54421690f7d0" volumeName="kubernetes.io/empty-dir/b605f283-6f2e-42da-a838-54421690f7d0-utilities" 
seLinuxMountContext="" Dec 12 15:19:31 crc kubenswrapper[5123]: I1212 15:19:31.620755 5123 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d7e8f42f-dc0e-424b-bb56-5ec849834888" volumeName="kubernetes.io/configmap/d7e8f42f-dc0e-424b-bb56-5ec849834888-service-ca" seLinuxMountContext="" Dec 12 15:19:31 crc kubenswrapper[5123]: I1212 15:19:31.620770 5123 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6edfcf45-925b-4eff-b940-95b6fc0b85d4" volumeName="kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-audit-policies" seLinuxMountContext="" Dec 12 15:19:31 crc kubenswrapper[5123]: I1212 15:19:31.620783 5123 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7afa918d-be67-40a6-803c-d3b0ae99d815" volumeName="kubernetes.io/projected/7afa918d-be67-40a6-803c-d3b0ae99d815-kube-api-access" seLinuxMountContext="" Dec 12 15:19:31 crc kubenswrapper[5123]: I1212 15:19:31.620800 5123 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9e9b5059-1b3e-4067-a63d-2952cbe863af" volumeName="kubernetes.io/configmap/9e9b5059-1b3e-4067-a63d-2952cbe863af-registry-certificates" seLinuxMountContext="" Dec 12 15:19:31 crc kubenswrapper[5123]: I1212 15:19:31.620816 5123 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9e9b5059-1b3e-4067-a63d-2952cbe863af" volumeName="kubernetes.io/projected/9e9b5059-1b3e-4067-a63d-2952cbe863af-bound-sa-token" seLinuxMountContext="" Dec 12 15:19:31 crc kubenswrapper[5123]: I1212 15:19:31.620836 5123 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d565531a-ff86-4608-9d19-767de01ac31b" volumeName="kubernetes.io/configmap/d565531a-ff86-4608-9d19-767de01ac31b-images" seLinuxMountContext="" Dec 12 15:19:31 crc kubenswrapper[5123]: I1212 15:19:31.620861 5123 
reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a" volumeName="kubernetes.io/empty-dir/71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a-utilities" seLinuxMountContext="" Dec 12 15:19:31 crc kubenswrapper[5123]: I1212 15:19:31.620880 5123 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6edfcf45-925b-4eff-b940-95b6fc0b85d4" volumeName="kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-template-provider-selection" seLinuxMountContext="" Dec 12 15:19:31 crc kubenswrapper[5123]: I1212 15:19:31.620896 5123 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d565531a-ff86-4608-9d19-767de01ac31b" volumeName="kubernetes.io/configmap/d565531a-ff86-4608-9d19-767de01ac31b-auth-proxy-config" seLinuxMountContext="" Dec 12 15:19:31 crc kubenswrapper[5123]: I1212 15:19:31.620910 5123 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f559dfa3-3917-43a2-97f6-61ddfda10e93" volumeName="kubernetes.io/projected/f559dfa3-3917-43a2-97f6-61ddfda10e93-kube-api-access-hm9x7" seLinuxMountContext="" Dec 12 15:19:31 crc kubenswrapper[5123]: I1212 15:19:31.620927 5123 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca" volumeName="kubernetes.io/secret/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-console-oauth-config" seLinuxMountContext="" Dec 12 15:19:31 crc kubenswrapper[5123]: I1212 15:19:31.620940 5123 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6edfcf45-925b-4eff-b940-95b6fc0b85d4" volumeName="kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-idp-0-file-data" seLinuxMountContext="" Dec 12 15:19:31 crc kubenswrapper[5123]: I1212 15:19:31.620958 5123 reconstruct.go:130] "Volume is marked as 
uncertain and added into the actual state" pod="" podName="869851b9-7ffb-4af0-b166-1d8aa40a5f80" volumeName="kubernetes.io/configmap/869851b9-7ffb-4af0-b166-1d8aa40a5f80-whereabouts-flatfile-configmap" seLinuxMountContext="" Dec 12 15:19:31 crc kubenswrapper[5123]: I1212 15:19:31.620972 5123 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9" volumeName="kubernetes.io/configmap/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-trusted-ca-bundle" seLinuxMountContext="" Dec 12 15:19:31 crc kubenswrapper[5123]: I1212 15:19:31.620991 5123 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31fa8943-81cc-4750-a0b7-0fa9ab5af883" volumeName="kubernetes.io/empty-dir/31fa8943-81cc-4750-a0b7-0fa9ab5af883-utilities" seLinuxMountContext="" Dec 12 15:19:31 crc kubenswrapper[5123]: I1212 15:19:31.621006 5123 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7fcc6409-8a0f-44c3-89e7-5aecd7610f8a" volumeName="kubernetes.io/empty-dir/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-tmpfs" seLinuxMountContext="" Dec 12 15:19:31 crc kubenswrapper[5123]: I1212 15:19:31.621020 5123 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c491984c-7d4b-44aa-8c1e-d7974424fa47" volumeName="kubernetes.io/configmap/c491984c-7d4b-44aa-8c1e-d7974424fa47-images" seLinuxMountContext="" Dec 12 15:19:31 crc kubenswrapper[5123]: I1212 15:19:31.621038 5123 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a" volumeName="kubernetes.io/configmap/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-marketplace-trusted-ca" seLinuxMountContext="" Dec 12 15:19:31 crc kubenswrapper[5123]: I1212 15:19:31.621161 5123 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="f863fff9-286a-45fa-b8f0-8a86994b8440" volumeName="kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75" seLinuxMountContext="" Dec 12 15:19:31 crc kubenswrapper[5123]: I1212 15:19:31.621178 5123 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="94a6e063-3d1a-4d44-875d-185291448c31" volumeName="kubernetes.io/empty-dir/94a6e063-3d1a-4d44-875d-185291448c31-utilities" seLinuxMountContext="" Dec 12 15:19:31 crc kubenswrapper[5123]: I1212 15:19:31.621196 5123 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a" volumeName="kubernetes.io/empty-dir/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-tmp" seLinuxMountContext="" Dec 12 15:19:31 crc kubenswrapper[5123]: I1212 15:19:31.621209 5123 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f7e2c886-118e-43bb-bef1-c78134de392b" volumeName="kubernetes.io/projected/f7e2c886-118e-43bb-bef1-c78134de392b-kube-api-access-6g4lr" seLinuxMountContext="" Dec 12 15:19:31 crc kubenswrapper[5123]: I1212 15:19:31.621248 5123 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9e9b5059-1b3e-4067-a63d-2952cbe863af" volumeName="kubernetes.io/projected/9e9b5059-1b3e-4067-a63d-2952cbe863af-kube-api-access-ws8zz" seLinuxMountContext="" Dec 12 15:19:31 crc kubenswrapper[5123]: I1212 15:19:31.621287 5123 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ce090a97-9ab6-4c40-a719-64ff2acd9778" volumeName="kubernetes.io/configmap/ce090a97-9ab6-4c40-a719-64ff2acd9778-signing-cabundle" seLinuxMountContext="" Dec 12 15:19:31 crc kubenswrapper[5123]: I1212 15:19:31.621317 5123 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d7e8f42f-dc0e-424b-bb56-5ec849834888" 
volumeName="kubernetes.io/secret/d7e8f42f-dc0e-424b-bb56-5ec849834888-serving-cert" seLinuxMountContext="" Dec 12 15:19:31 crc kubenswrapper[5123]: I1212 15:19:31.621346 5123 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7599e0b6-bddf-4def-b7f2-0b32206e8651" volumeName="kubernetes.io/projected/7599e0b6-bddf-4def-b7f2-0b32206e8651-kube-api-access-ptkcf" seLinuxMountContext="" Dec 12 15:19:31 crc kubenswrapper[5123]: I1212 15:19:31.621361 5123 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="149b3c48-e17c-4a66-a835-d86dabf6ff13" volumeName="kubernetes.io/empty-dir/149b3c48-e17c-4a66-a835-d86dabf6ff13-catalog-content" seLinuxMountContext="" Dec 12 15:19:31 crc kubenswrapper[5123]: I1212 15:19:31.621378 5123 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="2325ffef-9d5b-447f-b00e-3efc429acefe" volumeName="kubernetes.io/configmap/2325ffef-9d5b-447f-b00e-3efc429acefe-config" seLinuxMountContext="" Dec 12 15:19:31 crc kubenswrapper[5123]: I1212 15:19:31.621394 5123 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5ebfebf6-3ecd-458e-943f-bb25b52e2718" volumeName="kubernetes.io/configmap/5ebfebf6-3ecd-458e-943f-bb25b52e2718-serviceca" seLinuxMountContext="" Dec 12 15:19:31 crc kubenswrapper[5123]: I1212 15:19:31.621416 5123 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6edfcf45-925b-4eff-b940-95b6fc0b85d4" volumeName="kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-template-error" seLinuxMountContext="" Dec 12 15:19:31 crc kubenswrapper[5123]: I1212 15:19:31.621435 5123 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7599e0b6-bddf-4def-b7f2-0b32206e8651" volumeName="kubernetes.io/configmap/7599e0b6-bddf-4def-b7f2-0b32206e8651-config" 
seLinuxMountContext="" Dec 12 15:19:31 crc kubenswrapper[5123]: I1212 15:19:31.621450 5123 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20ce4d18-fe25-4696-ad7c-1bd2d6200a3e" volumeName="kubernetes.io/configmap/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-trusted-ca" seLinuxMountContext="" Dec 12 15:19:31 crc kubenswrapper[5123]: I1212 15:19:31.621469 5123 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="42a11a02-47e1-488f-b270-2679d3298b0e" volumeName="kubernetes.io/secret/42a11a02-47e1-488f-b270-2679d3298b0e-control-plane-machine-set-operator-tls" seLinuxMountContext="" Dec 12 15:19:31 crc kubenswrapper[5123]: I1212 15:19:31.621485 5123 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d19cb085-0c5b-4810-b654-ce7923221d90" volumeName="kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-image-import-ca" seLinuxMountContext="" Dec 12 15:19:31 crc kubenswrapper[5123]: I1212 15:19:31.621502 5123 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fc8db2c7-859d-47b3-a900-2bd0c0b2973b" volumeName="kubernetes.io/configmap/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-config" seLinuxMountContext="" Dec 12 15:19:31 crc kubenswrapper[5123]: I1212 15:19:31.621515 5123 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="94a6e063-3d1a-4d44-875d-185291448c31" volumeName="kubernetes.io/projected/94a6e063-3d1a-4d44-875d-185291448c31-kube-api-access-4hb7m" seLinuxMountContext="" Dec 12 15:19:31 crc kubenswrapper[5123]: I1212 15:19:31.621529 5123 reconstruct.go:97] "Volume reconstruction finished" Dec 12 15:19:31 crc kubenswrapper[5123]: I1212 15:19:31.621540 5123 reconciler.go:26] "Reconciler: start to sync state" Dec 12 15:19:31 crc kubenswrapper[5123]: I1212 15:19:31.635120 5123 kubelet_network_linux.go:49] "Initialized iptables rules." 
protocol="IPv4" Dec 12 15:19:31 crc kubenswrapper[5123]: I1212 15:19:31.637940 5123 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Dec 12 15:19:31 crc kubenswrapper[5123]: I1212 15:19:31.638068 5123 status_manager.go:230] "Starting to sync pod status with apiserver" Dec 12 15:19:31 crc kubenswrapper[5123]: I1212 15:19:31.638163 5123 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Dec 12 15:19:31 crc kubenswrapper[5123]: I1212 15:19:31.638284 5123 kubelet.go:2451] "Starting kubelet main sync loop" Dec 12 15:19:31 crc kubenswrapper[5123]: E1212 15:19:31.638424 5123 kubelet.go:2475] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Dec 12 15:19:31 crc kubenswrapper[5123]: I1212 15:19:31.639410 5123 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 12 15:19:31 crc kubenswrapper[5123]: E1212 15:19:31.640715 5123 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.234:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Dec 12 15:19:31 crc kubenswrapper[5123]: I1212 15:19:31.641977 5123 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:19:31 crc kubenswrapper[5123]: I1212 15:19:31.642042 5123 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:19:31 crc kubenswrapper[5123]: I1212 15:19:31.642091 5123 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:19:31 crc kubenswrapper[5123]: I1212 15:19:31.646148 5123 
cpu_manager.go:222] "Starting CPU manager" policy="none" Dec 12 15:19:31 crc kubenswrapper[5123]: I1212 15:19:31.646174 5123 cpu_manager.go:223] "Reconciling" reconcilePeriod="10s" Dec 12 15:19:31 crc kubenswrapper[5123]: I1212 15:19:31.646202 5123 state_mem.go:36] "Initialized new in-memory state store" Dec 12 15:19:31 crc kubenswrapper[5123]: I1212 15:19:31.652346 5123 policy_none.go:49] "None policy: Start" Dec 12 15:19:31 crc kubenswrapper[5123]: I1212 15:19:31.652385 5123 memory_manager.go:186] "Starting memorymanager" policy="None" Dec 12 15:19:31 crc kubenswrapper[5123]: I1212 15:19:31.652401 5123 state_mem.go:35] "Initializing new in-memory state store" Dec 12 15:19:31 crc kubenswrapper[5123]: E1212 15:19:31.672678 5123 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 12 15:19:31 crc kubenswrapper[5123]: I1212 15:19:31.699672 5123 manager.go:341] "Starting Device Plugin manager" Dec 12 15:19:31 crc kubenswrapper[5123]: E1212 15:19:31.700000 5123 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Dec 12 15:19:31 crc kubenswrapper[5123]: I1212 15:19:31.700023 5123 server.go:85] "Starting device plugin registration server" Dec 12 15:19:31 crc kubenswrapper[5123]: I1212 15:19:31.700507 5123 eviction_manager.go:189] "Eviction manager: starting control loop" Dec 12 15:19:31 crc kubenswrapper[5123]: I1212 15:19:31.700526 5123 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Dec 12 15:19:31 crc kubenswrapper[5123]: I1212 15:19:31.700803 5123 plugin_watcher.go:51] "Plugin Watcher Start" path="/var/lib/kubelet/plugins_registry" Dec 12 15:19:31 crc kubenswrapper[5123]: I1212 15:19:31.701004 5123 plugin_manager.go:116] "The desired_state_of_world populator (plugin watcher) starts" Dec 12 15:19:31 crc kubenswrapper[5123]: I1212 15:19:31.701036 5123 plugin_manager.go:118] "Starting 
Kubelet Plugin Manager" Dec 12 15:19:31 crc kubenswrapper[5123]: E1212 15:19:31.709797 5123 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="non-existent label \"crio-containers\"" Dec 12 15:19:31 crc kubenswrapper[5123]: E1212 15:19:31.709990 5123 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Dec 12 15:19:31 crc kubenswrapper[5123]: I1212 15:19:31.738989 5123 kubelet.go:2537] "SyncLoop ADD" source="file" pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-crc","openshift-etcd/etcd-crc","openshift-kube-apiserver/kube-apiserver-crc","openshift-kube-controller-manager/kube-controller-manager-crc","openshift-kube-scheduler/openshift-kube-scheduler-crc"] Dec 12 15:19:31 crc kubenswrapper[5123]: I1212 15:19:31.739244 5123 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 12 15:19:31 crc kubenswrapper[5123]: I1212 15:19:31.740267 5123 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:19:31 crc kubenswrapper[5123]: I1212 15:19:31.740311 5123 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:19:31 crc kubenswrapper[5123]: I1212 15:19:31.740325 5123 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:19:31 crc kubenswrapper[5123]: I1212 15:19:31.741124 5123 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 12 15:19:31 crc kubenswrapper[5123]: I1212 15:19:31.741296 5123 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Dec 12 15:19:31 crc kubenswrapper[5123]: I1212 15:19:31.741350 5123 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 12 15:19:31 crc kubenswrapper[5123]: I1212 15:19:31.741830 5123 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:19:31 crc kubenswrapper[5123]: I1212 15:19:31.741855 5123 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:19:31 crc kubenswrapper[5123]: I1212 15:19:31.741864 5123 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:19:31 crc kubenswrapper[5123]: I1212 15:19:31.741934 5123 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:19:31 crc kubenswrapper[5123]: I1212 15:19:31.741988 5123 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:19:31 crc kubenswrapper[5123]: I1212 15:19:31.742000 5123 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:19:31 crc kubenswrapper[5123]: I1212 15:19:31.742897 5123 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 12 15:19:31 crc kubenswrapper[5123]: I1212 15:19:31.743010 5123 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd/etcd-crc" Dec 12 15:19:31 crc kubenswrapper[5123]: I1212 15:19:31.743049 5123 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 12 15:19:31 crc kubenswrapper[5123]: I1212 15:19:31.743942 5123 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:19:31 crc kubenswrapper[5123]: I1212 15:19:31.744053 5123 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:19:31 crc kubenswrapper[5123]: I1212 15:19:31.744136 5123 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:19:31 crc kubenswrapper[5123]: I1212 15:19:31.743984 5123 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:19:31 crc kubenswrapper[5123]: I1212 15:19:31.744329 5123 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:19:31 crc kubenswrapper[5123]: I1212 15:19:31.744348 5123 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:19:31 crc kubenswrapper[5123]: I1212 15:19:31.745104 5123 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 12 15:19:31 crc kubenswrapper[5123]: I1212 15:19:31.745137 5123 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 12 15:19:31 crc kubenswrapper[5123]: I1212 15:19:31.745267 5123 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 12 15:19:31 crc kubenswrapper[5123]: I1212 15:19:31.745699 5123 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:19:31 crc kubenswrapper[5123]: I1212 15:19:31.745744 5123 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:19:31 crc kubenswrapper[5123]: I1212 15:19:31.745746 5123 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:19:31 crc kubenswrapper[5123]: I1212 15:19:31.745774 5123 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:19:31 crc kubenswrapper[5123]: I1212 15:19:31.745786 5123 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:19:31 crc kubenswrapper[5123]: I1212 15:19:31.745755 5123 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:19:31 crc kubenswrapper[5123]: I1212 15:19:31.746461 5123 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 12 15:19:31 crc kubenswrapper[5123]: I1212 15:19:31.746506 5123 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 12 15:19:31 crc kubenswrapper[5123]: I1212 15:19:31.746529 5123 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 12 15:19:31 crc kubenswrapper[5123]: I1212 15:19:31.747152 5123 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:19:31 crc kubenswrapper[5123]: I1212 15:19:31.747179 5123 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:19:31 crc kubenswrapper[5123]: I1212 15:19:31.747183 5123 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:19:31 crc kubenswrapper[5123]: I1212 15:19:31.747210 5123 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:19:31 crc kubenswrapper[5123]: I1212 15:19:31.747191 5123 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:19:31 crc kubenswrapper[5123]: I1212 15:19:31.747237 5123 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:19:31 crc kubenswrapper[5123]: I1212 15:19:31.748028 5123 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Dec 12 15:19:31 crc kubenswrapper[5123]: I1212 15:19:31.748083 5123 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 12 15:19:31 crc kubenswrapper[5123]: I1212 15:19:31.748785 5123 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:19:31 crc kubenswrapper[5123]: I1212 15:19:31.748890 5123 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:19:31 crc kubenswrapper[5123]: I1212 15:19:31.748957 5123 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:19:31 crc kubenswrapper[5123]: E1212 15:19:31.775309 5123 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.234:6443: connect: connection refused" interval="400ms" Dec 12 15:19:31 crc kubenswrapper[5123]: E1212 15:19:31.795541 5123 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Dec 12 15:19:31 crc kubenswrapper[5123]: I1212 15:19:31.804204 5123 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 12 15:19:31 crc kubenswrapper[5123]: I1212 15:19:31.805581 5123 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:19:31 crc kubenswrapper[5123]: I1212 15:19:31.805646 5123 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:19:31 crc kubenswrapper[5123]: I1212 15:19:31.805659 5123 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:19:31 
crc kubenswrapper[5123]: I1212 15:19:31.805692 5123 kubelet_node_status.go:78] "Attempting to register node" node="crc"
Dec 12 15:19:31 crc kubenswrapper[5123]: E1212 15:19:31.806332 5123 kubelet_node_status.go:110] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.234:6443: connect: connection refused" node="crc"
Dec 12 15:19:31 crc kubenswrapper[5123]: E1212 15:19:31.810414 5123 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Dec 12 15:19:31 crc kubenswrapper[5123]: E1212 15:19:31.821618 5123 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Dec 12 15:19:31 crc kubenswrapper[5123]: I1212 15:19:31.824026 5123 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/4e08c320b1e9e2405e6e0107bdf7eeb4-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"4e08c320b1e9e2405e6e0107bdf7eeb4\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc"
Dec 12 15:19:31 crc kubenswrapper[5123]: I1212 15:19:31.824071 5123 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-static-pod-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc"
Dec 12 15:19:31 crc kubenswrapper[5123]: I1212 15:19:31.824100 5123 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-usr-local-bin\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc"
Dec 12 15:19:31 crc kubenswrapper[5123]: I1212 15:19:31.824377 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/20c5c5b4bed930554494851fe3cb2b2a-tmp-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc"
Dec 12 15:19:31 crc kubenswrapper[5123]: I1212 15:19:31.824406 5123 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Dec 12 15:19:31 crc kubenswrapper[5123]: I1212 15:19:31.824421 5123 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Dec 12 15:19:31 crc kubenswrapper[5123]: I1212 15:19:31.824441 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/9f0bc7fcb0822a2c13eb2d22cd8c0641-tmp-dir\") pod \"kube-controller-manager-crc\" (UID: \"9f0bc7fcb0822a2c13eb2d22cd8c0641\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Dec 12 15:19:31 crc kubenswrapper[5123]: I1212 15:19:31.824484 5123 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/0b638b8f4bb0070e40528db779baf6a2-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"0b638b8f4bb0070e40528db779baf6a2\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Dec 12 15:19:31 crc kubenswrapper[5123]: I1212 15:19:31.824505 5123 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/4e08c320b1e9e2405e6e0107bdf7eeb4-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"4e08c320b1e9e2405e6e0107bdf7eeb4\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc"
Dec 12 15:19:31 crc kubenswrapper[5123]: I1212 15:19:31.824551 5123 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-log-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc"
Dec 12 15:19:31 crc kubenswrapper[5123]: I1212 15:19:31.824710 5123 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Dec 12 15:19:31 crc kubenswrapper[5123]: I1212 15:19:31.824764 5123 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/9f0bc7fcb0822a2c13eb2d22cd8c0641-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"9f0bc7fcb0822a2c13eb2d22cd8c0641\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Dec 12 15:19:31 crc kubenswrapper[5123]: I1212 15:19:31.824778 5123 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/9f0bc7fcb0822a2c13eb2d22cd8c0641-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"9f0bc7fcb0822a2c13eb2d22cd8c0641\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Dec 12 15:19:31 crc kubenswrapper[5123]: I1212 15:19:31.824799 5123 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-resource-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc"
Dec 12 15:19:31 crc kubenswrapper[5123]: I1212 15:19:31.824813 5123 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-cert-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc"
Dec 12 15:19:31 crc kubenswrapper[5123]: I1212 15:19:31.824829 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/3a14caf222afb62aaabdc47808b6f944-tmp-dir\") pod \"kube-apiserver-crc\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Dec 12 15:19:31 crc kubenswrapper[5123]: I1212 15:19:31.824868 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ca-bundle-dir\" (UniqueName: \"kubernetes.io/empty-dir/3a14caf222afb62aaabdc47808b6f944-ca-bundle-dir\") pod \"kube-apiserver-crc\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Dec 12 15:19:31 crc kubenswrapper[5123]: I1212 15:19:31.824949 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ca-trust-dir\" (UniqueName: \"kubernetes.io/empty-dir/9f0bc7fcb0822a2c13eb2d22cd8c0641-ca-trust-dir\") pod \"kube-controller-manager-crc\" (UID: \"9f0bc7fcb0822a2c13eb2d22cd8c0641\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Dec 12 15:19:31 crc kubenswrapper[5123]: I1212 15:19:31.825348 5123 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/3a14caf222afb62aaabdc47808b6f944-tmp-dir\") pod \"kube-apiserver-crc\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Dec 12 15:19:31 crc kubenswrapper[5123]: I1212 15:19:31.825444 5123 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/9f0bc7fcb0822a2c13eb2d22cd8c0641-tmp-dir\") pod \"kube-controller-manager-crc\" (UID: \"9f0bc7fcb0822a2c13eb2d22cd8c0641\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Dec 12 15:19:31 crc kubenswrapper[5123]: I1212 15:19:31.825451 5123 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ca-bundle-dir\" (UniqueName: \"kubernetes.io/empty-dir/3a14caf222afb62aaabdc47808b6f944-ca-bundle-dir\") pod \"kube-apiserver-crc\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Dec 12 15:19:31 crc kubenswrapper[5123]: I1212 15:19:31.825434 5123 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ca-trust-dir\" (UniqueName: \"kubernetes.io/empty-dir/9f0bc7fcb0822a2c13eb2d22cd8c0641-ca-trust-dir\") pod \"kube-controller-manager-crc\" (UID: \"9f0bc7fcb0822a2c13eb2d22cd8c0641\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Dec 12 15:19:31 crc kubenswrapper[5123]: I1212 15:19:31.825751 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"var-run-kubernetes\" (UniqueName: \"kubernetes.io/empty-dir/9f0bc7fcb0822a2c13eb2d22cd8c0641-var-run-kubernetes\") pod \"kube-controller-manager-crc\" (UID: \"9f0bc7fcb0822a2c13eb2d22cd8c0641\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Dec 12 15:19:31 crc kubenswrapper[5123]: I1212 15:19:31.825910 5123 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"var-run-kubernetes\" (UniqueName: \"kubernetes.io/empty-dir/9f0bc7fcb0822a2c13eb2d22cd8c0641-var-run-kubernetes\") pod \"kube-controller-manager-crc\" (UID: \"9f0bc7fcb0822a2c13eb2d22cd8c0641\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Dec 12 15:19:31 crc kubenswrapper[5123]: I1212 15:19:31.826050 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/0b638b8f4bb0070e40528db779baf6a2-tmp\") pod \"openshift-kube-scheduler-crc\" (UID: \"0b638b8f4bb0070e40528db779baf6a2\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Dec 12 15:19:31 crc kubenswrapper[5123]: I1212 15:19:31.826169 5123 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/0b638b8f4bb0070e40528db779baf6a2-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"0b638b8f4bb0070e40528db779baf6a2\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Dec 12 15:19:31 crc kubenswrapper[5123]: I1212 15:19:31.826295 5123 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-data-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc"
Dec 12 15:19:31 crc kubenswrapper[5123]: I1212 15:19:31.826389 5123 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-auto-backup-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-etcd-auto-backup-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc"
Dec 12 15:19:31 crc kubenswrapper[5123]: I1212 15:19:31.826522 5123 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/20c5c5b4bed930554494851fe3cb2b2a-tmp-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc"
Dec 12 15:19:31 crc kubenswrapper[5123]: I1212 15:19:31.826543 5123 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/0b638b8f4bb0070e40528db779baf6a2-tmp\") pod \"openshift-kube-scheduler-crc\" (UID: \"0b638b8f4bb0070e40528db779baf6a2\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Dec 12 15:19:31 crc kubenswrapper[5123]: E1212 15:19:31.840662 5123 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Dec 12 15:19:31 crc kubenswrapper[5123]: E1212 15:19:31.845837 5123 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Dec 12 15:19:31 crc kubenswrapper[5123]: I1212 15:19:31.928698 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Dec 12 15:19:31 crc kubenswrapper[5123]: I1212 15:19:31.928766 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Dec 12 15:19:31 crc kubenswrapper[5123]: I1212 15:19:31.928788 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/0b638b8f4bb0070e40528db779baf6a2-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"0b638b8f4bb0070e40528db779baf6a2\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Dec 12 15:19:31 crc kubenswrapper[5123]: I1212 15:19:31.928809 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/4e08c320b1e9e2405e6e0107bdf7eeb4-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"4e08c320b1e9e2405e6e0107bdf7eeb4\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc"
Dec 12 15:19:31 crc kubenswrapper[5123]: I1212 15:19:31.928828 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-log-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc"
Dec 12 15:19:31 crc kubenswrapper[5123]: I1212 15:19:31.928944 5123 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Dec 12 15:19:31 crc kubenswrapper[5123]: I1212 15:19:31.929072 5123 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Dec 12 15:19:31 crc kubenswrapper[5123]: I1212 15:19:31.929120 5123 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/0b638b8f4bb0070e40528db779baf6a2-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"0b638b8f4bb0070e40528db779baf6a2\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Dec 12 15:19:31 crc kubenswrapper[5123]: I1212 15:19:31.929166 5123 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/4e08c320b1e9e2405e6e0107bdf7eeb4-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"4e08c320b1e9e2405e6e0107bdf7eeb4\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc"
Dec 12 15:19:31 crc kubenswrapper[5123]: I1212 15:19:31.929273 5123 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-log-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc"
Dec 12 15:19:31 crc kubenswrapper[5123]: I1212 15:19:31.933347 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Dec 12 15:19:31 crc kubenswrapper[5123]: I1212 15:19:31.933415 5123 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Dec 12 15:19:31 crc kubenswrapper[5123]: I1212 15:19:31.933425 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/9f0bc7fcb0822a2c13eb2d22cd8c0641-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"9f0bc7fcb0822a2c13eb2d22cd8c0641\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Dec 12 15:19:31 crc kubenswrapper[5123]: I1212 15:19:31.933493 5123 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/9f0bc7fcb0822a2c13eb2d22cd8c0641-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"9f0bc7fcb0822a2c13eb2d22cd8c0641\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Dec 12 15:19:31 crc kubenswrapper[5123]: I1212 15:19:31.933505 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/9f0bc7fcb0822a2c13eb2d22cd8c0641-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"9f0bc7fcb0822a2c13eb2d22cd8c0641\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Dec 12 15:19:31 crc kubenswrapper[5123]: I1212 15:19:31.933538 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-resource-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc"
Dec 12 15:19:31 crc kubenswrapper[5123]: I1212 15:19:31.933549 5123 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/9f0bc7fcb0822a2c13eb2d22cd8c0641-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"9f0bc7fcb0822a2c13eb2d22cd8c0641\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Dec 12 15:19:31 crc kubenswrapper[5123]: I1212 15:19:31.933559 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-cert-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc"
Dec 12 15:19:31 crc kubenswrapper[5123]: I1212 15:19:31.933616 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/0b638b8f4bb0070e40528db779baf6a2-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"0b638b8f4bb0070e40528db779baf6a2\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Dec 12 15:19:31 crc kubenswrapper[5123]: I1212 15:19:31.933595 5123 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-resource-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc"
Dec 12 15:19:31 crc kubenswrapper[5123]: I1212 15:19:31.933645 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-data-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc"
Dec 12 15:19:31 crc kubenswrapper[5123]: I1212 15:19:31.933668 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etcd-auto-backup-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-etcd-auto-backup-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc"
Dec 12 15:19:31 crc kubenswrapper[5123]: I1212 15:19:31.933670 5123 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-cert-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc"
Dec 12 15:19:31 crc kubenswrapper[5123]: I1212 15:19:31.933690 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/4e08c320b1e9e2405e6e0107bdf7eeb4-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"4e08c320b1e9e2405e6e0107bdf7eeb4\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc"
Dec 12 15:19:31 crc kubenswrapper[5123]: I1212 15:19:31.933708 5123 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-data-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc"
Dec 12 15:19:31 crc kubenswrapper[5123]: I1212 15:19:31.933747 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-static-pod-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc"
Dec 12 15:19:31 crc kubenswrapper[5123]: I1212 15:19:31.933755 5123 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/0b638b8f4bb0070e40528db779baf6a2-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"0b638b8f4bb0070e40528db779baf6a2\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Dec 12 15:19:31 crc kubenswrapper[5123]: I1212 15:19:31.933772 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-usr-local-bin\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc"
Dec 12 15:19:31 crc kubenswrapper[5123]: I1212 15:19:31.933817 5123 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/4e08c320b1e9e2405e6e0107bdf7eeb4-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"4e08c320b1e9e2405e6e0107bdf7eeb4\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc"
Dec 12 15:19:31 crc kubenswrapper[5123]: I1212 15:19:31.933878 5123 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etcd-auto-backup-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-etcd-auto-backup-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc"
Dec 12 15:19:31 crc kubenswrapper[5123]: I1212 15:19:31.933915 5123 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-static-pod-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc"
Dec 12 15:19:31 crc kubenswrapper[5123]: I1212 15:19:31.933954 5123 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-usr-local-bin\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc"
Dec 12 15:19:32 crc kubenswrapper[5123]: I1212 15:19:32.007429 5123 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Dec 12 15:19:32 crc kubenswrapper[5123]: I1212 15:19:32.009108 5123 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 12 15:19:32 crc kubenswrapper[5123]: I1212 15:19:32.009173 5123 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 12 15:19:32 crc kubenswrapper[5123]: I1212 15:19:32.009187 5123 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 12 15:19:32 crc kubenswrapper[5123]: I1212 15:19:32.009239 5123 kubelet_node_status.go:78] "Attempting to register node" node="crc"
Dec 12 15:19:32 crc kubenswrapper[5123]: E1212 15:19:32.009957 5123 kubelet_node_status.go:110] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.234:6443: connect: connection refused" node="crc"
Dec 12 15:19:32 crc kubenswrapper[5123]: I1212 15:19:32.097096 5123 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc"
Dec 12 15:19:32 crc kubenswrapper[5123]: I1212 15:19:32.112027 5123 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-crc"
Dec 12 15:19:32 crc kubenswrapper[5123]: I1212 15:19:32.122745 5123 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc"
Dec 12 15:19:32 crc kubenswrapper[5123]: W1212 15:19:32.137805 5123 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4e08c320b1e9e2405e6e0107bdf7eeb4.slice/crio-6db56af842971d3b78ae2d04588db2cf11b4a094e3ebf0c621f83be400c92e26 WatchSource:0}: Error finding container 6db56af842971d3b78ae2d04588db2cf11b4a094e3ebf0c621f83be400c92e26: Status 404 returned error can't find the container with id 6db56af842971d3b78ae2d04588db2cf11b4a094e3ebf0c621f83be400c92e26
Dec 12 15:19:32 crc kubenswrapper[5123]: I1212 15:19:32.141051 5123 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Dec 12 15:19:32 crc kubenswrapper[5123]: W1212 15:19:32.141504 5123 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod20c5c5b4bed930554494851fe3cb2b2a.slice/crio-6609b4a7ab162471cd4b10b9f4af26f28ad7e1e73ede71a57c0bf8e673c178e2 WatchSource:0}: Error finding container 6609b4a7ab162471cd4b10b9f4af26f28ad7e1e73ede71a57c0bf8e673c178e2: Status 404 returned error can't find the container with id 6609b4a7ab162471cd4b10b9f4af26f28ad7e1e73ede71a57c0bf8e673c178e2
Dec 12 15:19:32 crc kubenswrapper[5123]: W1212 15:19:32.144560 5123 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3a14caf222afb62aaabdc47808b6f944.slice/crio-216043a053b1b91da3c105bd291d44fa4d1d7697b825733638746bce009bf230 WatchSource:0}: Error finding container 216043a053b1b91da3c105bd291d44fa4d1d7697b825733638746bce009bf230: Status 404 returned error can't find the container with id 216043a053b1b91da3c105bd291d44fa4d1d7697b825733638746bce009bf230
Dec 12 15:19:32 crc kubenswrapper[5123]: I1212 15:19:32.144712 5123 provider.go:93] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Dec 12 15:19:32 crc kubenswrapper[5123]: I1212 15:19:32.146005 5123 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Dec 12 15:19:32 crc kubenswrapper[5123]: E1212 15:19:32.176599 5123 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.234:6443: connect: connection refused" interval="800ms"
Dec 12 15:19:32 crc kubenswrapper[5123]: E1212 15:19:32.343722 5123 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.234:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
Dec 12 15:19:32 crc kubenswrapper[5123]: I1212 15:19:32.454418 5123 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Dec 12 15:19:32 crc kubenswrapper[5123]: E1212 15:19:32.456377 5123 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.234:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
Dec 12 15:19:32 crc kubenswrapper[5123]: E1212 15:19:32.457737 5123 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.234:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
Dec 12 15:19:32 crc kubenswrapper[5123]: I1212 15:19:32.479410 5123 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 12 15:19:32 crc kubenswrapper[5123]: I1212 15:19:32.479475 5123 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 12 15:19:32 crc kubenswrapper[5123]: I1212 15:19:32.479489 5123 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 12 15:19:32 crc kubenswrapper[5123]: I1212 15:19:32.479521 5123 kubelet_node_status.go:78] "Attempting to register node" node="crc"
Dec 12 15:19:32 crc kubenswrapper[5123]: E1212 15:19:32.480106 5123 kubelet_node_status.go:110] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.234:6443: connect: connection refused" node="crc"
Dec 12 15:19:32 crc kubenswrapper[5123]: I1212 15:19:32.547731 5123 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.234:6443: connect: connection refused
Dec 12 15:19:32 crc kubenswrapper[5123]: I1212 15:19:32.647789 5123 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"0b638b8f4bb0070e40528db779baf6a2","Type":"ContainerStarted","Data":"430edad23c1a8bf84cbe0df6847378b9698e777c6a0524e883e990dc0bc9d9f6"}
Dec 12 15:19:32 crc kubenswrapper[5123]: I1212 15:19:32.648948 5123 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"9f0bc7fcb0822a2c13eb2d22cd8c0641","Type":"ContainerStarted","Data":"459f589881472dc16d2acc68b9bb56cff29dd1367242d690c34101dcceb735f8"}
Dec 12 15:19:32 crc kubenswrapper[5123]: I1212 15:19:32.650568 5123 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"3a14caf222afb62aaabdc47808b6f944","Type":"ContainerStarted","Data":"216043a053b1b91da3c105bd291d44fa4d1d7697b825733638746bce009bf230"}
Dec 12 15:19:32 crc kubenswrapper[5123]: I1212 15:19:32.651859 5123 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"20c5c5b4bed930554494851fe3cb2b2a","Type":"ContainerStarted","Data":"6609b4a7ab162471cd4b10b9f4af26f28ad7e1e73ede71a57c0bf8e673c178e2"}
Dec 12 15:19:32 crc kubenswrapper[5123]: I1212 15:19:32.652904 5123 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"4e08c320b1e9e2405e6e0107bdf7eeb4","Type":"ContainerStarted","Data":"6db56af842971d3b78ae2d04588db2cf11b4a094e3ebf0c621f83be400c92e26"}
Dec 12 15:19:32 crc kubenswrapper[5123]: E1212 15:19:32.887673 5123 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.234:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass"
Dec 12 15:19:32 crc kubenswrapper[5123]: E1212 15:19:32.977883 5123 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.234:6443: connect: connection refused" interval="1.6s"
Dec 12 15:19:33 crc kubenswrapper[5123]: I1212 15:19:33.280979 5123 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Dec 12 15:19:33 crc kubenswrapper[5123]: I1212 15:19:33.283774 5123 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 12 15:19:33 crc kubenswrapper[5123]: I1212 15:19:33.283859 5123 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 12 15:19:33 crc kubenswrapper[5123]: I1212 15:19:33.283873 5123 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 12 15:19:33 crc kubenswrapper[5123]: I1212 15:19:33.283904 5123 kubelet_node_status.go:78] "Attempting to register node" node="crc"
Dec 12 15:19:33 crc kubenswrapper[5123]: E1212 15:19:33.284538 5123 kubelet_node_status.go:110] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.234:6443: connect: connection refused" node="crc"
Dec 12 15:19:33 crc kubenswrapper[5123]: I1212 15:19:33.540109 5123 certificate_manager.go:566] "Rotating certificates" logger="kubernetes.io/kube-apiserver-client-kubelet"
Dec 12 15:19:33 crc kubenswrapper[5123]: E1212 15:19:33.541800 5123 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 38.102.83.234:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError"
Dec 12 15:19:33 crc kubenswrapper[5123]: I1212 15:19:33.547878 5123 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.234:6443: connect: connection refused
Dec 12 15:19:33 crc kubenswrapper[5123]: I1212 15:19:33.856460 5123 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"9f0bc7fcb0822a2c13eb2d22cd8c0641","Type":"ContainerStarted","Data":"90a36cde8f0155fd7e784fe62e8b6855d9e6067713b30d29b277dd7bc9506b03"}
Dec 12 15:19:33 crc kubenswrapper[5123]: I1212 15:19:33.858247 5123 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"3a14caf222afb62aaabdc47808b6f944","Type":"ContainerStarted","Data":"40beb854e478d0f51e4640f477258c6ef53c632e71391168c8c052a47bc2a0c6"}
Dec 12 15:19:33 crc kubenswrapper[5123]: I1212 15:19:33.858466 5123 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Dec 12 15:19:33 crc kubenswrapper[5123]: I1212 15:19:33.859159 5123 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 12 15:19:33 crc kubenswrapper[5123]: I1212 15:19:33.859206 5123 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 12 15:19:33 crc kubenswrapper[5123]: I1212 15:19:33.859241 5123 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 12 15:19:33 crc kubenswrapper[5123]: E1212 15:19:33.859526 5123 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Dec 12 15:19:33 crc kubenswrapper[5123]: I1212 15:19:33.861690 5123 generic.go:358] "Generic (PLEG): container finished" podID="20c5c5b4bed930554494851fe3cb2b2a" containerID="1fd0a88b9a42b5c1894a2293d709a598f4e23c1aacedf07ee3a9ece8074d29ea" exitCode=0
Dec 12 15:19:33 crc kubenswrapper[5123]: I1212 15:19:33.861895 5123 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Dec 12 15:19:33 crc kubenswrapper[5123]: I1212 15:19:33.862114 5123 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"20c5c5b4bed930554494851fe3cb2b2a","Type":"ContainerDied","Data":"1fd0a88b9a42b5c1894a2293d709a598f4e23c1aacedf07ee3a9ece8074d29ea"}
Dec 12 15:19:33 crc kubenswrapper[5123]: I1212 15:19:33.863614 5123 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 12 15:19:33 crc kubenswrapper[5123]: I1212 15:19:33.863645 5123 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 12 15:19:33 crc kubenswrapper[5123]: I1212 15:19:33.863656 5123 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 12 15:19:33 crc kubenswrapper[5123]: E1212 15:19:33.863842 5123 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Dec 12 15:19:33 crc kubenswrapper[5123]: I1212 15:19:33.866260 5123 generic.go:358] "Generic (PLEG): container finished" podID="4e08c320b1e9e2405e6e0107bdf7eeb4" containerID="13ec8c878aa1edd7f7ea3a6bb1a6895c7ad6b6675171bab7edf369eb5dd7a266" exitCode=0
Dec 12 15:19:33 crc kubenswrapper[5123]: I1212 15:19:33.866303 5123 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"4e08c320b1e9e2405e6e0107bdf7eeb4","Type":"ContainerDied","Data":"13ec8c878aa1edd7f7ea3a6bb1a6895c7ad6b6675171bab7edf369eb5dd7a266"}
Dec 12 15:19:33 crc kubenswrapper[5123]: I1212 15:19:33.866361 5123 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Dec 12 15:19:33 crc kubenswrapper[5123]: I1212 15:19:33.866848 5123 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 12 15:19:33 crc kubenswrapper[5123]: I1212 15:19:33.866872 5123 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 12 15:19:33 crc kubenswrapper[5123]: I1212 15:19:33.866882 5123 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 12 15:19:33 crc kubenswrapper[5123]: E1212 15:19:33.867053 5123 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Dec 12 15:19:33 crc kubenswrapper[5123]: I1212 15:19:33.868998 5123 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"0b638b8f4bb0070e40528db779baf6a2","Type":"ContainerStarted","Data":"d81558377f7693d3a49de48f1988e688884e346bef47364c2844d0f9ad8fd5eb"}
Dec 12 15:19:33 crc kubenswrapper[5123]: I1212 15:19:33.869183 5123 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Dec 12 15:19:33 crc kubenswrapper[5123]: I1212 15:19:33.870021 5123 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 12 15:19:33 crc kubenswrapper[5123]: I1212 15:19:33.870066 5123 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 12 15:19:33 crc kubenswrapper[5123]: I1212 15:19:33.870078 5123 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 12 15:19:33 crc kubenswrapper[5123]: E1212 15:19:33.870280 5123 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Dec 12 15:19:34 crc kubenswrapper[5123]: E1212 15:19:34.247562 5123 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.234:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
Dec 12 15:19:34 crc kubenswrapper[5123]: E1212 15:19:34.328242 5123 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.234:6443: connect: connection refused" logger="UnhandledError"
reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Dec 12 15:19:34 crc kubenswrapper[5123]: I1212 15:19:34.547003 5123 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.234:6443: connect: connection refused Dec 12 15:19:34 crc kubenswrapper[5123]: E1212 15:19:34.579734 5123 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.234:6443: connect: connection refused" interval="3.2s" Dec 12 15:19:34 crc kubenswrapper[5123]: E1212 15:19:34.615798 5123 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.234:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Dec 12 15:19:34 crc kubenswrapper[5123]: I1212 15:19:34.874995 5123 generic.go:358] "Generic (PLEG): container finished" podID="0b638b8f4bb0070e40528db779baf6a2" containerID="d81558377f7693d3a49de48f1988e688884e346bef47364c2844d0f9ad8fd5eb" exitCode=0 Dec 12 15:19:34 crc kubenswrapper[5123]: I1212 15:19:34.875089 5123 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"0b638b8f4bb0070e40528db779baf6a2","Type":"ContainerDied","Data":"d81558377f7693d3a49de48f1988e688884e346bef47364c2844d0f9ad8fd5eb"} Dec 12 15:19:34 crc kubenswrapper[5123]: I1212 15:19:34.875168 5123 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" 
event={"ID":"0b638b8f4bb0070e40528db779baf6a2","Type":"ContainerStarted","Data":"fec0b545071ff72387a726c65b878b3cc3c54114436f814409a530a7b30c28c0"} Dec 12 15:19:34 crc kubenswrapper[5123]: I1212 15:19:34.877613 5123 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"9f0bc7fcb0822a2c13eb2d22cd8c0641","Type":"ContainerStarted","Data":"73dfa58c2e8aff8d0309a8fb1e6d250887820cc494e9dea56b738621d5b92ce1"} Dec 12 15:19:34 crc kubenswrapper[5123]: I1212 15:19:34.879485 5123 generic.go:358] "Generic (PLEG): container finished" podID="3a14caf222afb62aaabdc47808b6f944" containerID="40beb854e478d0f51e4640f477258c6ef53c632e71391168c8c052a47bc2a0c6" exitCode=0 Dec 12 15:19:34 crc kubenswrapper[5123]: I1212 15:19:34.879803 5123 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 12 15:19:34 crc kubenswrapper[5123]: I1212 15:19:34.880093 5123 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"3a14caf222afb62aaabdc47808b6f944","Type":"ContainerDied","Data":"40beb854e478d0f51e4640f477258c6ef53c632e71391168c8c052a47bc2a0c6"} Dec 12 15:19:34 crc kubenswrapper[5123]: I1212 15:19:34.880642 5123 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:19:34 crc kubenswrapper[5123]: I1212 15:19:34.880695 5123 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:19:34 crc kubenswrapper[5123]: I1212 15:19:34.880709 5123 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:19:34 crc kubenswrapper[5123]: E1212 15:19:34.881133 5123 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Dec 12 15:19:34 crc kubenswrapper[5123]: I1212 15:19:34.882681 
5123 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 12 15:19:34 crc kubenswrapper[5123]: I1212 15:19:34.883297 5123 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:19:34 crc kubenswrapper[5123]: I1212 15:19:34.883324 5123 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:19:34 crc kubenswrapper[5123]: I1212 15:19:34.883334 5123 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:19:34 crc kubenswrapper[5123]: I1212 15:19:34.883834 5123 generic.go:358] "Generic (PLEG): container finished" podID="20c5c5b4bed930554494851fe3cb2b2a" containerID="5cca787d9fdc34bc1120be4a21fb6165c1108799e292f659ed9a30c36238056e" exitCode=0 Dec 12 15:19:34 crc kubenswrapper[5123]: I1212 15:19:34.883911 5123 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"20c5c5b4bed930554494851fe3cb2b2a","Type":"ContainerDied","Data":"5cca787d9fdc34bc1120be4a21fb6165c1108799e292f659ed9a30c36238056e"} Dec 12 15:19:34 crc kubenswrapper[5123]: I1212 15:19:34.884097 5123 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 12 15:19:34 crc kubenswrapper[5123]: I1212 15:19:34.885745 5123 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:19:34 crc kubenswrapper[5123]: I1212 15:19:34.885774 5123 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:19:34 crc kubenswrapper[5123]: I1212 15:19:34.885784 5123 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:19:34 crc kubenswrapper[5123]: E1212 15:19:34.885968 5123 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from 
the cluster" err="node \"crc\" not found" node="crc" Dec 12 15:19:34 crc kubenswrapper[5123]: I1212 15:19:34.888434 5123 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 12 15:19:35 crc kubenswrapper[5123]: I1212 15:19:34.890200 5123 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:19:35 crc kubenswrapper[5123]: I1212 15:19:34.890282 5123 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:19:35 crc kubenswrapper[5123]: I1212 15:19:34.890295 5123 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:19:35 crc kubenswrapper[5123]: I1212 15:19:34.890327 5123 kubelet_node_status.go:78] "Attempting to register node" node="crc" Dec 12 15:19:35 crc kubenswrapper[5123]: E1212 15:19:34.891008 5123 kubelet_node_status.go:110] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.234:6443: connect: connection refused" node="crc" Dec 12 15:19:35 crc kubenswrapper[5123]: I1212 15:19:34.893535 5123 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"4e08c320b1e9e2405e6e0107bdf7eeb4","Type":"ContainerStarted","Data":"1ef017f0eef1c51fa90d6de39f73c6270effb87f5deed367566d0ad9421d880f"} Dec 12 15:19:35 crc kubenswrapper[5123]: I1212 15:19:34.893721 5123 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 12 15:19:35 crc kubenswrapper[5123]: I1212 15:19:34.894371 5123 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:19:35 crc kubenswrapper[5123]: I1212 15:19:34.894408 5123 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:19:35 crc 
kubenswrapper[5123]: I1212 15:19:34.894419 5123 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:19:35 crc kubenswrapper[5123]: E1212 15:19:34.894656 5123 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Dec 12 15:19:35 crc kubenswrapper[5123]: E1212 15:19:35.278410 5123 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.234:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Dec 12 15:19:35 crc kubenswrapper[5123]: I1212 15:19:35.662162 5123 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.234:6443: connect: connection refused Dec 12 15:19:35 crc kubenswrapper[5123]: I1212 15:19:35.896744 5123 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 12 15:19:35 crc kubenswrapper[5123]: I1212 15:19:35.898133 5123 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:19:35 crc kubenswrapper[5123]: I1212 15:19:35.898174 5123 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:19:35 crc kubenswrapper[5123]: I1212 15:19:35.898184 5123 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:19:35 crc kubenswrapper[5123]: E1212 15:19:35.898739 5123 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Dec 12 15:19:36 crc kubenswrapper[5123]: 
I1212 15:19:36.607360 5123 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.234:6443: connect: connection refused Dec 12 15:19:37 crc kubenswrapper[5123]: I1212 15:19:37.547001 5123 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.234:6443: connect: connection refused Dec 12 15:19:37 crc kubenswrapper[5123]: I1212 15:19:37.682338 5123 certificate_manager.go:566] "Rotating certificates" logger="kubernetes.io/kube-apiserver-client-kubelet" Dec 12 15:19:37 crc kubenswrapper[5123]: E1212 15:19:37.684028 5123 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 38.102.83.234:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Dec 12 15:19:37 crc kubenswrapper[5123]: E1212 15:19:37.781885 5123 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.234:6443: connect: connection refused" interval="6.4s" Dec 12 15:19:37 crc kubenswrapper[5123]: I1212 15:19:37.948730 5123 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"20c5c5b4bed930554494851fe3cb2b2a","Type":"ContainerStarted","Data":"194c064c4d0651f052c31b61cc496928c496f8605e7e9b9dbc7dfbc29498fe94"} Dec 12 15:19:37 crc kubenswrapper[5123]: I1212 15:19:37.948866 5123 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 12 15:19:37 crc 
kubenswrapper[5123]: I1212 15:19:37.949678 5123 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:19:37 crc kubenswrapper[5123]: I1212 15:19:37.949717 5123 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:19:37 crc kubenswrapper[5123]: I1212 15:19:37.949729 5123 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:19:37 crc kubenswrapper[5123]: E1212 15:19:37.949977 5123 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Dec 12 15:19:37 crc kubenswrapper[5123]: I1212 15:19:37.953905 5123 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"0b638b8f4bb0070e40528db779baf6a2","Type":"ContainerStarted","Data":"154fb3e3ce5de4d560b7ff2ad3ca84b8f7fa282b7af47effe0ee3b23ad996e4f"} Dec 12 15:19:37 crc kubenswrapper[5123]: I1212 15:19:37.957296 5123 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"3a14caf222afb62aaabdc47808b6f944","Type":"ContainerStarted","Data":"6e07f3e2617540c10ad02b1eb35775776e35852dc555f67a34e81beeab3e64fc"} Dec 12 15:19:38 crc kubenswrapper[5123]: I1212 15:19:38.262123 5123 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 12 15:19:38 crc kubenswrapper[5123]: I1212 15:19:38.263975 5123 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:19:38 crc kubenswrapper[5123]: I1212 15:19:38.264200 5123 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:19:38 crc kubenswrapper[5123]: I1212 15:19:38.264218 5123 kubelet_node_status.go:736] "Recording event message for node" node="crc" 
event="NodeHasSufficientPID" Dec 12 15:19:38 crc kubenswrapper[5123]: I1212 15:19:38.264377 5123 kubelet_node_status.go:78] "Attempting to register node" node="crc" Dec 12 15:19:38 crc kubenswrapper[5123]: E1212 15:19:38.265331 5123 kubelet_node_status.go:110] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.234:6443: connect: connection refused" node="crc" Dec 12 15:19:38 crc kubenswrapper[5123]: E1212 15:19:38.290422 5123 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.234:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Dec 12 15:19:38 crc kubenswrapper[5123]: I1212 15:19:38.639336 5123 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.234:6443: connect: connection refused Dec 12 15:19:39 crc kubenswrapper[5123]: I1212 15:19:39.257941 5123 generic.go:358] "Generic (PLEG): container finished" podID="20c5c5b4bed930554494851fe3cb2b2a" containerID="194c064c4d0651f052c31b61cc496928c496f8605e7e9b9dbc7dfbc29498fe94" exitCode=0 Dec 12 15:19:39 crc kubenswrapper[5123]: I1212 15:19:39.258137 5123 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"20c5c5b4bed930554494851fe3cb2b2a","Type":"ContainerDied","Data":"194c064c4d0651f052c31b61cc496928c496f8605e7e9b9dbc7dfbc29498fe94"} Dec 12 15:19:39 crc kubenswrapper[5123]: E1212 15:19:39.285305 5123 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.234:6443: connect: connection 
refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Dec 12 15:19:39 crc kubenswrapper[5123]: I1212 15:19:39.548132 5123 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.234:6443: connect: connection refused Dec 12 15:19:39 crc kubenswrapper[5123]: E1212 15:19:39.809738 5123 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.234:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Dec 12 15:19:40 crc kubenswrapper[5123]: I1212 15:19:40.369059 5123 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"20c5c5b4bed930554494851fe3cb2b2a","Type":"ContainerStarted","Data":"02831f45290db1a1d2fe96203679aee8039426e4470ec48bfcf087e7d34e454f"} Dec 12 15:19:40 crc kubenswrapper[5123]: I1212 15:19:40.372440 5123 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"0b638b8f4bb0070e40528db779baf6a2","Type":"ContainerStarted","Data":"e5f5ca37a436009ecf8073fbd361e0a3bc762b5ecf0fb16faf92ba09c336922a"} Dec 12 15:19:40 crc kubenswrapper[5123]: I1212 15:19:40.372862 5123 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 12 15:19:40 crc kubenswrapper[5123]: I1212 15:19:40.373898 5123 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:19:40 crc kubenswrapper[5123]: I1212 15:19:40.373951 5123 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:19:40 crc kubenswrapper[5123]: I1212 
15:19:40.373964 5123 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:19:40 crc kubenswrapper[5123]: E1212 15:19:40.374267 5123 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Dec 12 15:19:40 crc kubenswrapper[5123]: I1212 15:19:40.700331 5123 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.234:6443: connect: connection refused Dec 12 15:19:40 crc kubenswrapper[5123]: E1212 15:19:40.939014 5123 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": dial tcp 38.102.83.234:6443: connect: connection refused" event="&Event{ObjectMeta:{crc.188080e43538041e default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-12 15:19:31.562996766 +0000 UTC m=+0.372949277,LastTimestamp:2025-12-12 15:19:31.562996766 +0000 UTC m=+0.372949277,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 12 15:19:41 crc kubenswrapper[5123]: I1212 15:19:41.494182 5123 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"9f0bc7fcb0822a2c13eb2d22cd8c0641","Type":"ContainerStarted","Data":"3ab77a37e33bb3c89c48009a92c9ec8d9b3251462d2094bc09d36304905f2864"} Dec 12 15:19:41 crc kubenswrapper[5123]: E1212 15:19:41.733156 5123 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node 
info: node \"crc\" not found" Dec 12 15:19:41 crc kubenswrapper[5123]: E1212 15:19:41.733549 5123 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.234:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Dec 12 15:19:41 crc kubenswrapper[5123]: I1212 15:19:41.733570 5123 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.234:6443: connect: connection refused Dec 12 15:19:41 crc kubenswrapper[5123]: I1212 15:19:41.735771 5123 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"3a14caf222afb62aaabdc47808b6f944","Type":"ContainerStarted","Data":"b9095a46d0255140f02bb4949f61fc5120a0d62ccb27ed3e9cb8ce5f430498d9"} Dec 12 15:19:41 crc kubenswrapper[5123]: I1212 15:19:41.741051 5123 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 12 15:19:41 crc kubenswrapper[5123]: I1212 15:19:41.741484 5123 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"20c5c5b4bed930554494851fe3cb2b2a","Type":"ContainerStarted","Data":"85810fde5314851f472de729604032c4393454ab56c99f7d0c8f68db47a2ce2b"} Dec 12 15:19:41 crc kubenswrapper[5123]: I1212 15:19:41.741538 5123 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Dec 12 15:19:41 crc kubenswrapper[5123]: I1212 15:19:41.743473 5123 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:19:41 crc kubenswrapper[5123]: I1212 15:19:41.743503 5123 kubelet_node_status.go:736] "Recording 
event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:19:41 crc kubenswrapper[5123]: I1212 15:19:41.743538 5123 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:19:41 crc kubenswrapper[5123]: E1212 15:19:41.744039 5123 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Dec 12 15:19:42 crc kubenswrapper[5123]: I1212 15:19:42.547149 5123 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.234:6443: connect: connection refused Dec 12 15:19:42 crc kubenswrapper[5123]: I1212 15:19:42.751762 5123 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"9f0bc7fcb0822a2c13eb2d22cd8c0641","Type":"ContainerStarted","Data":"f211c54a19b58b5792cfc535e48c1fc788e339590c93985f56907a6ef3218bce"} Dec 12 15:19:42 crc kubenswrapper[5123]: I1212 15:19:42.751907 5123 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 12 15:19:42 crc kubenswrapper[5123]: I1212 15:19:42.752516 5123 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:19:42 crc kubenswrapper[5123]: I1212 15:19:42.752584 5123 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:19:42 crc kubenswrapper[5123]: I1212 15:19:42.752594 5123 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:19:42 crc kubenswrapper[5123]: E1212 15:19:42.752854 5123 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Dec 12 15:19:42 crc 
kubenswrapper[5123]: I1212 15:19:42.753767 5123 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"3a14caf222afb62aaabdc47808b6f944","Type":"ContainerStarted","Data":"34bae6bd30c1db17488802318dfdb214ad97b12fba2bd2724522387be66bed83"} Dec 12 15:19:42 crc kubenswrapper[5123]: I1212 15:19:42.757761 5123 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"20c5c5b4bed930554494851fe3cb2b2a","Type":"ContainerStarted","Data":"623d61420ea353df65a0492fd9ca49b279feb02a781281bd1668e1f04db68b54"} Dec 12 15:19:42 crc kubenswrapper[5123]: I1212 15:19:42.757866 5123 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 12 15:19:42 crc kubenswrapper[5123]: I1212 15:19:42.758688 5123 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:19:42 crc kubenswrapper[5123]: I1212 15:19:42.758714 5123 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:19:42 crc kubenswrapper[5123]: I1212 15:19:42.758724 5123 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:19:42 crc kubenswrapper[5123]: E1212 15:19:42.759018 5123 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Dec 12 15:19:43 crc kubenswrapper[5123]: I1212 15:19:43.629680 5123 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 12 15:19:43 crc kubenswrapper[5123]: I1212 15:19:43.879058 5123 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" 
event={"ID":"3a14caf222afb62aaabdc47808b6f944","Type":"ContainerStarted","Data":"d331b50c6c609096973278d778919c9c6ac4e46695aa2e4779ca6f4805332301"} Dec 12 15:19:43 crc kubenswrapper[5123]: I1212 15:19:43.881835 5123 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"20c5c5b4bed930554494851fe3cb2b2a","Type":"ContainerStarted","Data":"1a34d581eecc9547b181c185a5046352353babf0efa3327710bceac6d88f2f5c"} Dec 12 15:19:43 crc kubenswrapper[5123]: I1212 15:19:43.882027 5123 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 12 15:19:43 crc kubenswrapper[5123]: I1212 15:19:43.882808 5123 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:19:43 crc kubenswrapper[5123]: I1212 15:19:43.882839 5123 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:19:43 crc kubenswrapper[5123]: I1212 15:19:43.882849 5123 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:19:43 crc kubenswrapper[5123]: E1212 15:19:43.883173 5123 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Dec 12 15:19:44 crc kubenswrapper[5123]: I1212 15:19:44.666312 5123 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 12 15:19:44 crc kubenswrapper[5123]: I1212 15:19:44.667418 5123 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:19:44 crc kubenswrapper[5123]: I1212 15:19:44.667473 5123 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:19:44 crc kubenswrapper[5123]: I1212 15:19:44.667488 5123 kubelet_node_status.go:736] "Recording event message for node" node="crc" 
event="NodeHasSufficientPID"
Dec 12 15:19:44 crc kubenswrapper[5123]: I1212 15:19:44.667519 5123 kubelet_node_status.go:78] "Attempting to register node" node="crc"
Dec 12 15:19:44 crc kubenswrapper[5123]: I1212 15:19:44.888419 5123 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Dec 12 15:19:44 crc kubenswrapper[5123]: I1212 15:19:44.888703 5123 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"3a14caf222afb62aaabdc47808b6f944","Type":"ContainerStarted","Data":"7bc134fd3c197c3c40519d9c2a110bdf4ebc73ddb6e4aa259f0bfe308ced31dc"}
Dec 12 15:19:44 crc kubenswrapper[5123]: I1212 15:19:44.889032 5123 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 12 15:19:44 crc kubenswrapper[5123]: I1212 15:19:44.889073 5123 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 12 15:19:44 crc kubenswrapper[5123]: I1212 15:19:44.889083 5123 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 12 15:19:44 crc kubenswrapper[5123]: E1212 15:19:44.889370 5123 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Dec 12 15:19:44 crc kubenswrapper[5123]: I1212 15:19:44.893510 5123 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"20c5c5b4bed930554494851fe3cb2b2a","Type":"ContainerStarted","Data":"84a6190600d909f264afa90eaf73f0475b5fd2c8cfd699f98afe86f0dad15b60"}
Dec 12 15:19:44 crc kubenswrapper[5123]: I1212 15:19:44.893623 5123 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Dec 12 15:19:44 crc kubenswrapper[5123]: I1212 15:19:44.893669 5123 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Dec 12 15:19:44 crc kubenswrapper[5123]: I1212 15:19:44.894349 5123 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 12 15:19:44 crc kubenswrapper[5123]: I1212 15:19:44.894395 5123 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 12 15:19:44 crc kubenswrapper[5123]: I1212 15:19:44.894409 5123 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 12 15:19:44 crc kubenswrapper[5123]: I1212 15:19:44.894381 5123 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 12 15:19:44 crc kubenswrapper[5123]: I1212 15:19:44.894523 5123 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 12 15:19:44 crc kubenswrapper[5123]: I1212 15:19:44.894549 5123 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 12 15:19:44 crc kubenswrapper[5123]: E1212 15:19:44.894714 5123 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Dec 12 15:19:44 crc kubenswrapper[5123]: E1212 15:19:44.895016 5123 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Dec 12 15:19:45 crc kubenswrapper[5123]: I1212 15:19:45.896642 5123 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Dec 12 15:19:45 crc kubenswrapper[5123]: I1212 15:19:45.896764 5123 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Dec 12 15:19:45 crc kubenswrapper[5123]: I1212 15:19:45.896777 5123 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-kube-apiserver/kube-apiserver-crc"
Dec 12 15:19:45 crc kubenswrapper[5123]: I1212 15:19:45.897983 5123 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 12 15:19:45 crc kubenswrapper[5123]: I1212 15:19:45.898052 5123 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 12 15:19:45 crc kubenswrapper[5123]: I1212 15:19:45.898065 5123 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 12 15:19:45 crc kubenswrapper[5123]: I1212 15:19:45.898020 5123 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 12 15:19:45 crc kubenswrapper[5123]: I1212 15:19:45.898138 5123 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 12 15:19:45 crc kubenswrapper[5123]: I1212 15:19:45.898153 5123 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 12 15:19:45 crc kubenswrapper[5123]: E1212 15:19:45.898643 5123 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Dec 12 15:19:45 crc kubenswrapper[5123]: E1212 15:19:45.899081 5123 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Dec 12 15:19:46 crc kubenswrapper[5123]: I1212 15:19:46.005449 5123 certificate_manager.go:566] "Rotating certificates" logger="kubernetes.io/kube-apiserver-client-kubelet"
Dec 12 15:19:46 crc kubenswrapper[5123]: I1212 15:19:46.598122 5123 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-kube-apiserver/kube-apiserver-crc"
Dec 12 15:19:46 crc kubenswrapper[5123]: I1212 15:19:46.607362 5123 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-etcd/etcd-crc"
Dec 12 15:19:46 crc kubenswrapper[5123]: I1212 15:19:46.899031 5123 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Dec 12 15:19:46 crc kubenswrapper[5123]: I1212 15:19:46.899160 5123 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Dec 12 15:19:46 crc kubenswrapper[5123]: I1212 15:19:46.899850 5123 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 12 15:19:46 crc kubenswrapper[5123]: I1212 15:19:46.899880 5123 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 12 15:19:46 crc kubenswrapper[5123]: I1212 15:19:46.899891 5123 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 12 15:19:46 crc kubenswrapper[5123]: I1212 15:19:46.899910 5123 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 12 15:19:46 crc kubenswrapper[5123]: I1212 15:19:46.899936 5123 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 12 15:19:46 crc kubenswrapper[5123]: I1212 15:19:46.899945 5123 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 12 15:19:46 crc kubenswrapper[5123]: E1212 15:19:46.900382 5123 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Dec 12 15:19:46 crc kubenswrapper[5123]: E1212 15:19:46.900670 5123 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Dec 12 15:19:47 crc kubenswrapper[5123]: I1212 15:19:47.376287 5123 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc"
Dec 12 15:19:47 crc kubenswrapper[5123]: I1212 15:19:47.754520 5123 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Dec 12 15:19:47 crc kubenswrapper[5123]: I1212 15:19:47.754804 5123 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Dec 12 15:19:47 crc kubenswrapper[5123]: I1212 15:19:47.755751 5123 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 12 15:19:47 crc kubenswrapper[5123]: I1212 15:19:47.755794 5123 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 12 15:19:47 crc kubenswrapper[5123]: I1212 15:19:47.755809 5123 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 12 15:19:47 crc kubenswrapper[5123]: E1212 15:19:47.756122 5123 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Dec 12 15:19:47 crc kubenswrapper[5123]: I1212 15:19:47.901456 5123 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Dec 12 15:19:47 crc kubenswrapper[5123]: I1212 15:19:47.902131 5123 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 12 15:19:47 crc kubenswrapper[5123]: I1212 15:19:47.902191 5123 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 12 15:19:47 crc kubenswrapper[5123]: I1212 15:19:47.902252 5123 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 12 15:19:47 crc kubenswrapper[5123]: E1212 15:19:47.902684 5123 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Dec 12 15:19:48 crc kubenswrapper[5123]: I1212 15:19:48.334961 5123 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Dec 12 15:19:48 crc kubenswrapper[5123]: I1212 15:19:48.335330 5123 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Dec 12 15:19:48 crc kubenswrapper[5123]: I1212 15:19:48.336542 5123 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 12 15:19:48 crc kubenswrapper[5123]: I1212 15:19:48.336596 5123 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 12 15:19:48 crc kubenswrapper[5123]: I1212 15:19:48.336609 5123 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 12 15:19:48 crc kubenswrapper[5123]: E1212 15:19:48.337055 5123 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Dec 12 15:19:48 crc kubenswrapper[5123]: I1212 15:19:48.341971 5123 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Dec 12 15:19:48 crc kubenswrapper[5123]: I1212 15:19:48.904710 5123 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Dec 12 15:19:48 crc kubenswrapper[5123]: I1212 15:19:48.904710 5123 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Dec 12 15:19:48 crc kubenswrapper[5123]: I1212 15:19:48.904999 5123 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Dec 12 15:19:48 crc kubenswrapper[5123]: I1212 15:19:48.905517 5123 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 12 15:19:48 crc kubenswrapper[5123]: I1212 15:19:48.905565 5123 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 12 15:19:48 crc kubenswrapper[5123]: I1212 15:19:48.905593 5123 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 12 15:19:48 crc kubenswrapper[5123]: E1212 15:19:48.905990 5123 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Dec 12 15:19:48 crc kubenswrapper[5123]: I1212 15:19:48.906131 5123 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 12 15:19:48 crc kubenswrapper[5123]: I1212 15:19:48.906195 5123 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 12 15:19:48 crc kubenswrapper[5123]: I1212 15:19:48.906210 5123 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 12 15:19:48 crc kubenswrapper[5123]: E1212 15:19:48.907071 5123 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Dec 12 15:19:48 crc kubenswrapper[5123]: I1212 15:19:48.909393 5123 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Dec 12 15:19:49 crc kubenswrapper[5123]: I1212 15:19:49.907988 5123 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Dec 12 15:19:49 crc kubenswrapper[5123]: I1212 15:19:49.908656 5123 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 12 15:19:49 crc kubenswrapper[5123]: I1212 15:19:49.908695 5123 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 12 15:19:49 crc kubenswrapper[5123]: I1212 15:19:49.908708 5123 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 12 15:19:49 crc kubenswrapper[5123]: E1212 15:19:49.909117 5123 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Dec 12 15:19:49 crc kubenswrapper[5123]: I1212 15:19:49.960363 5123 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-etcd/etcd-crc"
Dec 12 15:19:49 crc kubenswrapper[5123]: I1212 15:19:49.960703 5123 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Dec 12 15:19:49 crc kubenswrapper[5123]: I1212 15:19:49.961872 5123 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 12 15:19:49 crc kubenswrapper[5123]: I1212 15:19:49.961922 5123 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 12 15:19:49 crc kubenswrapper[5123]: I1212 15:19:49.961932 5123 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 12 15:19:49 crc kubenswrapper[5123]: E1212 15:19:49.962428 5123 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Dec 12 15:19:50 crc kubenswrapper[5123]: I1212 15:19:50.770879 5123 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://localhost:10357/healthz\": context deadline exceeded" start-of-body=
Dec 12 15:19:50 crc kubenswrapper[5123]: I1212 15:19:50.771136 5123 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="9f0bc7fcb0822a2c13eb2d22cd8c0641" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://localhost:10357/healthz\": context deadline exceeded"
Dec 12 15:19:50 crc kubenswrapper[5123]: I1212 15:19:50.910915 5123 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Dec 12 15:19:50 crc kubenswrapper[5123]: I1212 15:19:50.912171 5123 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 12 15:19:50 crc kubenswrapper[5123]: I1212 15:19:50.912239 5123 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 12 15:19:50 crc kubenswrapper[5123]: I1212 15:19:50.912257 5123 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 12 15:19:51 crc kubenswrapper[5123]: E1212 15:19:51.019889 5123 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Dec 12 15:19:51 crc kubenswrapper[5123]: E1212 15:19:51.733616 5123 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found"
Dec 12 15:19:53 crc kubenswrapper[5123]: I1212 15:19:53.779327 5123 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": net/http: TLS handshake timeout
Dec 12 15:19:54 crc kubenswrapper[5123]: E1212 15:19:54.183995 5123 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" interval="7s"
Dec 12 15:19:54 crc kubenswrapper[5123]: E1212 15:19:54.669543 5123 kubelet_node_status.go:110] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": net/http: TLS handshake timeout" node="crc"
Dec 12 15:19:56 crc kubenswrapper[5123]: E1212 15:19:56.007954 5123 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": net/http: TLS handshake timeout" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError"
Dec 12 15:19:56 crc kubenswrapper[5123]: I1212 15:19:56.245387 5123 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403}
Dec 12 15:19:56 crc kubenswrapper[5123]: I1212 15:19:56.245667 5123 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403"
Dec 12 15:19:56 crc kubenswrapper[5123]: I1212 15:19:56.267823 5123 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403}
Dec 12 15:19:56 crc kubenswrapper[5123]: I1212 15:19:56.267951 5123 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403"
Dec 12 15:19:57 crc kubenswrapper[5123]: I1212 15:19:57.469493 5123 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok
Dec 12 15:19:57 crc kubenswrapper[5123]: [+]log ok
Dec 12 15:19:57 crc kubenswrapper[5123]: [+]etcd ok
Dec 12 15:19:57 crc kubenswrapper[5123]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok
Dec 12 15:19:57 crc kubenswrapper[5123]: [+]poststarthook/openshift.io-api-request-count-filter ok
Dec 12 15:19:57 crc kubenswrapper[5123]: [+]poststarthook/openshift.io-startkubeinformers ok
Dec 12 15:19:57 crc kubenswrapper[5123]: [+]poststarthook/openshift.io-openshift-apiserver-reachable ok
Dec 12 15:19:57 crc kubenswrapper[5123]: [+]poststarthook/openshift.io-oauth-apiserver-reachable ok
Dec 12 15:19:57 crc kubenswrapper[5123]: [+]poststarthook/start-apiserver-admission-initializer ok
Dec 12 15:19:57 crc kubenswrapper[5123]: [+]poststarthook/generic-apiserver-start-informers ok
Dec 12 15:19:57 crc kubenswrapper[5123]: [+]poststarthook/priority-and-fairness-config-consumer ok
Dec 12 15:19:57 crc kubenswrapper[5123]: [+]poststarthook/priority-and-fairness-filter ok
Dec 12 15:19:57 crc kubenswrapper[5123]: [+]poststarthook/storage-object-count-tracker-hook ok
Dec 12 15:19:57 crc kubenswrapper[5123]: [+]poststarthook/start-apiextensions-informers ok
Dec 12 15:19:57 crc kubenswrapper[5123]: [+]poststarthook/start-apiextensions-controllers ok
Dec 12 15:19:57 crc kubenswrapper[5123]: [+]poststarthook/crd-informer-synced ok
Dec 12 15:19:57 crc kubenswrapper[5123]: [+]poststarthook/start-system-namespaces-controller ok
Dec 12 15:19:57 crc kubenswrapper[5123]: [+]poststarthook/start-cluster-authentication-info-controller ok
Dec 12 15:19:57 crc kubenswrapper[5123]: [+]poststarthook/start-kube-apiserver-identity-lease-controller ok
Dec 12 15:19:57 crc kubenswrapper[5123]: [+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
Dec 12 15:19:57 crc kubenswrapper[5123]: [+]poststarthook/start-legacy-token-tracking-controller ok
Dec 12 15:19:57 crc kubenswrapper[5123]: [+]poststarthook/start-service-ip-repair-controllers ok
Dec 12 15:19:57 crc kubenswrapper[5123]: [-]poststarthook/rbac/bootstrap-roles failed: reason withheld
Dec 12 15:19:57 crc kubenswrapper[5123]: [+]poststarthook/scheduling/bootstrap-system-priority-classes ok
Dec 12 15:19:57 crc kubenswrapper[5123]: [+]poststarthook/priority-and-fairness-config-producer ok
Dec 12 15:19:57 crc kubenswrapper[5123]: [+]poststarthook/bootstrap-controller ok
Dec 12 15:19:57 crc kubenswrapper[5123]: [+]poststarthook/start-kubernetes-service-cidr-controller ok
Dec 12 15:19:57 crc kubenswrapper[5123]: [+]poststarthook/aggregator-reload-proxy-client-cert ok
Dec 12 15:19:57 crc kubenswrapper[5123]: [+]poststarthook/start-kube-aggregator-informers ok
Dec 12 15:19:57 crc kubenswrapper[5123]: [+]poststarthook/apiservice-status-local-available-controller ok
Dec 12 15:19:57 crc kubenswrapper[5123]: [+]poststarthook/apiservice-status-remote-available-controller ok
Dec 12 15:19:57 crc kubenswrapper[5123]: [+]poststarthook/apiservice-registration-controller ok
Dec 12 15:19:57 crc kubenswrapper[5123]: [+]poststarthook/apiservice-wait-for-first-sync ok
Dec 12 15:19:57 crc kubenswrapper[5123]: [+]poststarthook/apiservice-discovery-controller ok
Dec 12 15:19:57 crc kubenswrapper[5123]: [+]poststarthook/kube-apiserver-autoregistration ok
Dec 12 15:19:57 crc kubenswrapper[5123]: [+]autoregister-completion ok
Dec 12 15:19:57 crc kubenswrapper[5123]: [+]poststarthook/apiservice-openapi-controller ok
Dec 12 15:19:57 crc kubenswrapper[5123]: [+]poststarthook/apiservice-openapiv3-controller ok
Dec 12 15:19:57 crc kubenswrapper[5123]: livez check failed
Dec 12 15:19:57 crc kubenswrapper[5123]: I1212 15:19:57.471957 5123 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Dec 12 15:20:00 crc kubenswrapper[5123]: I1212 15:20:00.755037 5123 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://localhost:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Dec 12 15:20:00 crc kubenswrapper[5123]: I1212 15:20:00.755167 5123 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="9f0bc7fcb0822a2c13eb2d22cd8c0641" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://localhost:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Dec 12 15:20:00 crc kubenswrapper[5123]: I1212 15:20:00.916623 5123 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-etcd/etcd-crc"
Dec 12 15:20:00 crc kubenswrapper[5123]: I1212 15:20:00.917384 5123 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Dec 12 15:20:00 crc kubenswrapper[5123]: I1212 15:20:00.919488 5123 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 12 15:20:00 crc kubenswrapper[5123]: I1212 15:20:00.919806 5123 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 12 15:20:00 crc kubenswrapper[5123]: I1212 15:20:00.920095 5123 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 12 15:20:00 crc kubenswrapper[5123]: E1212 15:20:00.921395 5123 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Dec 12 15:20:00 crc kubenswrapper[5123]: I1212 15:20:00.933127 5123 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-etcd/etcd-crc"
Dec 12 15:20:01 crc kubenswrapper[5123]: E1212 15:20:01.187026 5123 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="7s"
Dec 12 15:20:01 crc kubenswrapper[5123]: I1212 15:20:01.230895 5123 trace.go:236] Trace[1972914433]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (12-Dec-2025 15:19:47.485) (total time: 13744ms):
Dec 12 15:20:01 crc kubenswrapper[5123]: Trace[1972914433]: ---"Objects listed" error:csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope 13744ms (15:20:01.230)
Dec 12 15:20:01 crc kubenswrapper[5123]: Trace[1972914433]: [13.744716737s] [13.744716737s] END
Dec 12 15:20:01 crc kubenswrapper[5123]: E1212 15:20:01.230975 5123 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:anonymous\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
Dec 12 15:20:01 crc kubenswrapper[5123]: I1212 15:20:01.236124 5123 trace.go:236] Trace[2140306024]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (12-Dec-2025 15:19:49.810) (total time: 11425ms):
Dec 12 15:20:01 crc kubenswrapper[5123]: Trace[2140306024]: ---"Objects listed" error:services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope 11425ms (15:20:01.236)
Dec 12 15:20:01 crc kubenswrapper[5123]: Trace[2140306024]: [11.425328252s] [11.425328252s] END
Dec 12 15:20:01 crc kubenswrapper[5123]: E1212 15:20:01.236165 5123 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:anonymous\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
Dec 12 15:20:01 crc kubenswrapper[5123]: E1212 15:20:01.236310 5123 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User \"system:anonymous\" cannot list resource \"runtimeclasses\" in API group \"node.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass"
Dec 12 15:20:01 crc kubenswrapper[5123]: I1212 15:20:01.236353 5123 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Dec 12 15:20:01 crc kubenswrapper[5123]: I1212 15:20:01.236506 5123 trace.go:236] Trace[728873266]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (12-Dec-2025 15:19:51.094) (total time: 10142ms):
Dec 12 15:20:01 crc kubenswrapper[5123]: Trace[728873266]: ---"Objects listed" error:nodes "crc" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope 10142ms (15:20:01.236)
Dec 12 15:20:01 crc kubenswrapper[5123]: Trace[728873266]: [10.142263882s] [10.142263882s] END
Dec 12 15:20:01 crc kubenswrapper[5123]: E1212 15:20:01.236378 5123 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.188080e43538041e default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-12 15:19:31.562996766 +0000 UTC m=+0.372949277,LastTimestamp:2025-12-12 15:19:31.562996766 +0000 UTC m=+0.372949277,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Dec 12 15:20:01 crc kubenswrapper[5123]: E1212 15:20:01.236549 5123 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: nodes \"crc\" is forbidden: User \"system:anonymous\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
Dec 12 15:20:01 crc kubenswrapper[5123]: E1212 15:20:01.240863 5123 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.188080e439edc978 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-12 15:19:31.642018168 +0000 UTC m=+0.451970679,LastTimestamp:2025-12-12 15:19:31.642018168 +0000 UTC m=+0.451970679,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Dec 12 15:20:01 crc kubenswrapper[5123]: E1212 15:20:01.245765 5123 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.188080e439ee5147 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-12 15:19:31.642052935 +0000 UTC m=+0.452005446,LastTimestamp:2025-12-12 15:19:31.642052935 +0000 UTC m=+0.452005446,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Dec 12 15:20:01 crc kubenswrapper[5123]: E1212 15:20:01.251056 5123 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.188080e439ef0773 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node crc status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-12 15:19:31.642099571 +0000 UTC m=+0.452052082,LastTimestamp:2025-12-12 15:19:31.642099571 +0000 UTC m=+0.452052082,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Dec 12 15:20:01 crc kubenswrapper[5123]: E1212 15:20:01.256160 5123 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.188080e43d909d03 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeAllocatableEnforced,Message:Updated Node Allocatable limit across pods,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-12 15:19:31.703020803 +0000 UTC m=+0.512973314,LastTimestamp:2025-12-12 15:19:31.703020803 +0000 UTC m=+0.512973314,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Dec 12 15:20:01 crc kubenswrapper[5123]: E1212 15:20:01.260758 5123 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.188080e439edc978\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.188080e439edc978 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-12 15:19:31.642018168 +0000 UTC m=+0.451970679,LastTimestamp:2025-12-12 15:19:31.740292169 +0000 UTC m=+0.550244680,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Dec 12 15:20:01 crc kubenswrapper[5123]: E1212 15:20:01.276024 5123 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.188080e439ee5147\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.188080e439ee5147 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-12 15:19:31.642052935 +0000 UTC m=+0.452005446,LastTimestamp:2025-12-12 15:19:31.740319127 +0000 UTC m=+0.550271638,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Dec 12 15:20:01 crc kubenswrapper[5123]: E1212 15:20:01.291089 5123 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.188080e439ef0773\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.188080e439ef0773 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node crc status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-12 15:19:31.642099571 +0000 UTC m=+0.452052082,LastTimestamp:2025-12-12 15:19:31.740331726 +0000 UTC m=+0.550284237,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Dec 12 15:20:01 crc kubenswrapper[5123]: E1212 15:20:01.298090 5123 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.188080e439edc978\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.188080e439edc978 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status
is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-12 15:19:31.642018168 +0000 UTC m=+0.451970679,LastTimestamp:2025-12-12 15:19:31.741843758 +0000 UTC m=+0.551796269,Count:3,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 12 15:20:01 crc kubenswrapper[5123]: E1212 15:20:01.304355 5123 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.188080e439ee5147\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.188080e439ee5147 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-12 15:19:31.642052935 +0000 UTC m=+0.452005446,LastTimestamp:2025-12-12 15:19:31.741860206 +0000 UTC m=+0.551812717,Count:3,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 12 15:20:01 crc kubenswrapper[5123]: E1212 15:20:01.309590 5123 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.188080e439ef0773\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.188080e439ef0773 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node crc status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-12 15:19:31.642099571 +0000 UTC 
m=+0.452052082,LastTimestamp:2025-12-12 15:19:31.741868726 +0000 UTC m=+0.551821237,Count:3,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 12 15:20:01 crc kubenswrapper[5123]: I1212 15:20:01.311037 5123 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Liveness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:44702->192.168.126.11:17697: read: connection reset by peer" start-of-body= Dec 12 15:20:01 crc kubenswrapper[5123]: I1212 15:20:01.311066 5123 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:44698->192.168.126.11:17697: read: connection reset by peer" start-of-body= Dec 12 15:20:01 crc kubenswrapper[5123]: I1212 15:20:01.311114 5123 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:44702->192.168.126.11:17697: read: connection reset by peer" Dec 12 15:20:01 crc kubenswrapper[5123]: I1212 15:20:01.311171 5123 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:44698->192.168.126.11:17697: read: connection reset by peer" Dec 12 15:20:01 crc kubenswrapper[5123]: E1212 15:20:01.316549 5123 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.188080e439edc978\" is 
forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.188080e439edc978 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-12 15:19:31.642018168 +0000 UTC m=+0.451970679,LastTimestamp:2025-12-12 15:19:31.741962598 +0000 UTC m=+0.551915109,Count:4,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 12 15:20:01 crc kubenswrapper[5123]: E1212 15:20:01.323019 5123 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.188080e439ee5147\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.188080e439ee5147 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-12 15:19:31.642052935 +0000 UTC m=+0.452005446,LastTimestamp:2025-12-12 15:19:31.741994186 +0000 UTC m=+0.551946687,Count:4,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 12 15:20:01 crc kubenswrapper[5123]: E1212 15:20:01.329594 5123 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.188080e439ef0773\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" 
event="&Event{ObjectMeta:{crc.188080e439ef0773 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node crc status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-12 15:19:31.642099571 +0000 UTC m=+0.452052082,LastTimestamp:2025-12-12 15:19:31.742005235 +0000 UTC m=+0.551957746,Count:4,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 12 15:20:01 crc kubenswrapper[5123]: E1212 15:20:01.335522 5123 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.188080e439edc978\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.188080e439edc978 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-12 15:19:31.642018168 +0000 UTC m=+0.451970679,LastTimestamp:2025-12-12 15:19:31.744043756 +0000 UTC m=+0.553996267,Count:5,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 12 15:20:01 crc kubenswrapper[5123]: E1212 15:20:01.336844 5123 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.188080e439ee5147\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.188080e439ee5147 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-12 15:19:31.642052935 +0000 UTC m=+0.452005446,LastTimestamp:2025-12-12 15:19:31.74412623 +0000 UTC m=+0.554078741,Count:5,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 12 15:20:01 crc kubenswrapper[5123]: E1212 15:20:01.342837 5123 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.188080e439ef0773\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.188080e439ef0773 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node crc status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-12 15:19:31.642099571 +0000 UTC m=+0.452052082,LastTimestamp:2025-12-12 15:19:31.744233971 +0000 UTC m=+0.554186492,Count:5,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 12 15:20:01 crc kubenswrapper[5123]: E1212 15:20:01.348505 5123 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.188080e439edc978\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.188080e439edc978 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status 
is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-12 15:19:31.642018168 +0000 UTC m=+0.451970679,LastTimestamp:2025-12-12 15:19:31.744312895 +0000 UTC m=+0.554265406,Count:6,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 12 15:20:01 crc kubenswrapper[5123]: E1212 15:20:01.353199 5123 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.188080e439ee5147\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.188080e439ee5147 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-12 15:19:31.642052935 +0000 UTC m=+0.452005446,LastTimestamp:2025-12-12 15:19:31.744338473 +0000 UTC m=+0.554290984,Count:6,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 12 15:20:01 crc kubenswrapper[5123]: E1212 15:20:01.358816 5123 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.188080e439ef0773\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.188080e439ef0773 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node crc status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-12 15:19:31.642099571 +0000 UTC 
m=+0.452052082,LastTimestamp:2025-12-12 15:19:31.744355602 +0000 UTC m=+0.554308123,Count:6,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 12 15:20:01 crc kubenswrapper[5123]: E1212 15:20:01.364439 5123 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.188080e439edc978\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.188080e439edc978 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-12 15:19:31.642018168 +0000 UTC m=+0.451970679,LastTimestamp:2025-12-12 15:19:31.745724986 +0000 UTC m=+0.555677497,Count:7,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 12 15:20:01 crc kubenswrapper[5123]: E1212 15:20:01.369929 5123 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.188080e439ee5147\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.188080e439ee5147 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-12 15:19:31.642052935 +0000 UTC m=+0.452005446,LastTimestamp:2025-12-12 15:19:31.745750464 +0000 UTC m=+0.555702975,Count:7,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 12 15:20:01 crc kubenswrapper[5123]: E1212 15:20:01.375127 5123 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.188080e439edc978\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.188080e439edc978 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-12 15:19:31.642018168 +0000 UTC m=+0.451970679,LastTimestamp:2025-12-12 15:19:31.745761543 +0000 UTC m=+0.555714054,Count:8,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 12 15:20:01 crc kubenswrapper[5123]: E1212 15:20:01.380640 5123 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.188080e439ee5147\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.188080e439ee5147 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-12 15:19:31.642052935 +0000 UTC m=+0.452005446,LastTimestamp:2025-12-12 15:19:31.745781821 +0000 UTC m=+0.555734332,Count:8,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 12 15:20:01 crc kubenswrapper[5123]: E1212 15:20:01.386776 5123 
event.go:359] "Server rejected event (will not retry!)" err="events \"crc.188080e439ef0773\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.188080e439ef0773 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node crc status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-12 15:19:31.642099571 +0000 UTC m=+0.452052082,LastTimestamp:2025-12-12 15:19:31.74579154 +0000 UTC m=+0.555744051,Count:7,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 12 15:20:01 crc kubenswrapper[5123]: E1212 15:20:01.392740 5123 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-crc.188080e457e9d36d openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-crc,UID:4e08c320b1e9e2405e6e0107bdf7eeb4,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-12 15:19:32.145075053 +0000 UTC m=+0.955027564,LastTimestamp:2025-12-12 15:19:32.145075053 +0000 UTC m=+0.955027564,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 12 15:20:01 crc kubenswrapper[5123]: E1212 15:20:01.397732 5123 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188080e4583b7450 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-12 15:19:32.150424656 +0000 UTC m=+0.960377167,LastTimestamp:2025-12-12 15:19:32.150424656 +0000 UTC m=+0.960377167,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 12 15:20:01 crc kubenswrapper[5123]: E1212 15:20:01.402056 5123 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.188080e458447f68 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\" already present on 
machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-12 15:19:32.15101732 +0000 UTC m=+0.960969831,LastTimestamp:2025-12-12 15:19:32.15101732 +0000 UTC m=+0.960969831,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 12 15:20:01 crc kubenswrapper[5123]: E1212 15:20:01.406121 5123 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.188080e459585637 openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:0b638b8f4bb0070e40528db779baf6a2,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{wait-for-host-port},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-12 15:19:32.169094711 +0000 UTC m=+0.979047212,LastTimestamp:2025-12-12 15:19:32.169094711 +0000 UTC m=+0.979047212,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 12 15:20:01 crc kubenswrapper[5123]: E1212 15:20:01.410026 5123 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.188080e459588282 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:9f0bc7fcb0822a2c13eb2d22cd8c0641,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-12 15:19:32.16910605 +0000 UTC m=+0.979058561,LastTimestamp:2025-12-12 15:19:32.16910605 +0000 UTC m=+0.979058561,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 12 15:20:01 crc kubenswrapper[5123]: E1212 15:20:01.415264 5123 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.188080e4a9628c44 openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:0b638b8f4bb0070e40528db779baf6a2,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{wait-for-host-port},},Reason:Created,Message:Created container: wait-for-host-port,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-12 15:19:33.511941188 +0000 UTC m=+2.321893699,LastTimestamp:2025-12-12 15:19:33.511941188 +0000 UTC m=+2.321893699,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 12 15:20:01 crc kubenswrapper[5123]: E1212 15:20:01.420145 5123 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create 
resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.188080e4a9775176 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Created,Message:Created container: setup,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-12 15:19:33.51330239 +0000 UTC m=+2.323254891,LastTimestamp:2025-12-12 15:19:33.51330239 +0000 UTC m=+2.323254891,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 12 15:20:01 crc kubenswrapper[5123]: E1212 15:20:01.428437 5123 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188080e4a979ee0b openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Created,Message:Created container: setup,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-12 15:19:33.513473547 +0000 UTC m=+2.323426058,LastTimestamp:2025-12-12 15:19:33.513473547 +0000 UTC m=+2.323426058,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 12 15:20:01 crc kubenswrapper[5123]: E1212 15:20:01.433720 5123 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the 
namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.188080e4a980f375 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:9f0bc7fcb0822a2c13eb2d22cd8c0641,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager},},Reason:Created,Message:Created container: kube-controller-manager,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-12 15:19:33.513933685 +0000 UTC m=+2.323886196,LastTimestamp:2025-12-12 15:19:33.513933685 +0000 UTC m=+2.323886196,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 12 15:20:01 crc kubenswrapper[5123]: E1212 15:20:01.438759 5123 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-crc.188080e4a98a95ce openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-crc,UID:4e08c320b1e9e2405e6e0107bdf7eeb4,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Created,Message:Created container: setup,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-12 15:19:33.51456507 +0000 UTC m=+2.324517581,LastTimestamp:2025-12-12 15:19:33.51456507 +0000 UTC m=+2.324517581,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 12 15:20:01 crc kubenswrapper[5123]: E1212 15:20:01.443469 5123 event.go:359] "Server rejected event (will not 
retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.188080e4aabbd82e openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:0b638b8f4bb0070e40528db779baf6a2,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{wait-for-host-port},},Reason:Started,Message:Started container wait-for-host-port,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-12 15:19:33.534570542 +0000 UTC m=+2.344523053,LastTimestamp:2025-12-12 15:19:33.534570542 +0000 UTC m=+2.344523053,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 12 15:20:01 crc kubenswrapper[5123]: E1212 15:20:01.448195 5123 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.188080e4aadba19a openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Started,Message:Started container setup,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-12 15:19:33.536653722 +0000 UTC m=+2.346606233,LastTimestamp:2025-12-12 15:19:33.536653722 +0000 UTC m=+2.346606233,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 12 15:20:01 crc kubenswrapper[5123]: E1212 15:20:01.453114 5123 event.go:359] "Server rejected event (will not 
retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-crc.188080e4aadc8cba openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-crc,UID:4e08c320b1e9e2405e6e0107bdf7eeb4,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Started,Message:Started container setup,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-12 15:19:33.536713914 +0000 UTC m=+2.346666425,LastTimestamp:2025-12-12 15:19:33.536713914 +0000 UTC m=+2.346666425,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 12 15:20:01 crc kubenswrapper[5123]: E1212 15:20:01.458244 5123 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.188080e4aadd91fd openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:9f0bc7fcb0822a2c13eb2d22cd8c0641,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager},},Reason:Started,Message:Started container kube-controller-manager,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-12 15:19:33.536780797 +0000 UTC m=+2.346733308,LastTimestamp:2025-12-12 15:19:33.536780797 +0000 UTC m=+2.346733308,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 12 15:20:01 crc kubenswrapper[5123]: E1212 15:20:01.462431 5123 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188080e4aae399fe openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Started,Message:Started container setup,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-12 15:19:33.537176062 +0000 UTC m=+2.347128573,LastTimestamp:2025-12-12 15:19:33.537176062 +0000 UTC m=+2.347128573,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 12 15:20:01 crc kubenswrapper[5123]: E1212 15:20:01.466946 5123 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.188080e4aafebae9 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:9f0bc7fcb0822a2c13eb2d22cd8c0641,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\" already present on 
machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-12 15:19:33.538953961 +0000 UTC m=+2.348906472,LastTimestamp:2025-12-12 15:19:33.538953961 +0000 UTC m=+2.348906472,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 12 15:20:01 crc kubenswrapper[5123]: E1212 15:20:01.471690 5123 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.188080e4be6ef711 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{etcd-ensure-env-vars},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-12 15:19:33.865076497 +0000 UTC m=+2.675029008,LastTimestamp:2025-12-12 15:19:33.865076497 +0000 UTC m=+2.675029008,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 12 15:20:01 crc kubenswrapper[5123]: E1212 15:20:01.476695 5123 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-crc.188080e4bea0d3fa openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-crc,UID:4e08c320b1e9e2405e6e0107bdf7eeb4,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-12 15:19:33.868344314 +0000 UTC m=+2.678296825,LastTimestamp:2025-12-12 15:19:33.868344314 +0000 UTC m=+2.678296825,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 12 15:20:01 crc kubenswrapper[5123]: E1212 15:20:01.482164 5123 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.188080e4beca365e openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:0b638b8f4bb0070e40528db779baf6a2,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-12 15:19:33.871056478 +0000 UTC m=+2.681008989,LastTimestamp:2025-12-12 15:19:33.871056478 +0000 UTC m=+2.681008989,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 12 15:20:01 crc kubenswrapper[5123]: E1212 15:20:01.486255 5123 
event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-crc.188080e4f36defbc openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-crc,UID:4e08c320b1e9e2405e6e0107bdf7eeb4,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:Created,Message:Created container: kube-rbac-proxy-crio,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-12 15:19:34.754201532 +0000 UTC m=+3.564154043,LastTimestamp:2025-12-12 15:19:34.754201532 +0000 UTC m=+3.564154043,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 12 15:20:01 crc kubenswrapper[5123]: E1212 15:20:01.491798 5123 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.188080e4f3a1c906 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{etcd-ensure-env-vars},},Reason:Created,Message:Created container: etcd-ensure-env-vars,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-12 15:19:34.757599494 +0000 UTC m=+3.567552005,LastTimestamp:2025-12-12 15:19:34.757599494 +0000 UTC m=+3.567552005,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 12 15:20:01 crc 
kubenswrapper[5123]: E1212 15:20:01.496701 5123 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.188080e4f3a33d1b openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:0b638b8f4bb0070e40528db779baf6a2,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler},},Reason:Created,Message:Created container: kube-scheduler,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-12 15:19:34.757694747 +0000 UTC m=+3.567647258,LastTimestamp:2025-12-12 15:19:34.757694747 +0000 UTC m=+3.567647258,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 12 15:20:01 crc kubenswrapper[5123]: E1212 15:20:01.501227 5123 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.188080e4f3a36c42 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:9f0bc7fcb0822a2c13eb2d22cd8c0641,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Created,Message:Created container: cluster-policy-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-12 15:19:34.757706818 +0000 UTC m=+3.567659329,LastTimestamp:2025-12-12 15:19:34.757706818 +0000 UTC m=+3.567659329,Count:1,Type:Normal,EventTime:0001-01-01 
00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 12 15:20:01 crc kubenswrapper[5123]: E1212 15:20:01.506361 5123 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.188080e4f46d019b openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:9f0bc7fcb0822a2c13eb2d22cd8c0641,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Started,Message:Started container cluster-policy-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-12 15:19:34.770917787 +0000 UTC m=+3.580870298,LastTimestamp:2025-12-12 15:19:34.770917787 +0000 UTC m=+3.580870298,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 12 15:20:01 crc kubenswrapper[5123]: E1212 15:20:01.510593 5123 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-crc.188080e4f48252a8 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-crc,UID:4e08c320b1e9e2405e6e0107bdf7eeb4,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:Started,Message:Started container kube-rbac-proxy-crio,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-12 
15:19:34.772314792 +0000 UTC m=+3.582267303,LastTimestamp:2025-12-12 15:19:34.772314792 +0000 UTC m=+3.582267303,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 12 15:20:01 crc kubenswrapper[5123]: E1212 15:20:01.515790 5123 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.188080e4f4848659 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:9f0bc7fcb0822a2c13eb2d22cd8c0641,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager-cert-syncer},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-12 15:19:34.772459097 +0000 UTC m=+3.582411608,LastTimestamp:2025-12-12 15:19:34.772459097 +0000 UTC m=+3.582411608,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 12 15:20:01 crc kubenswrapper[5123]: E1212 15:20:01.520200 5123 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.188080e4f56ed7d4 openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:0b638b8f4bb0070e40528db779baf6a2,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler},},Reason:Started,Message:Started container kube-scheduler,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-12 15:19:34.78781538 +0000 UTC m=+3.597767891,LastTimestamp:2025-12-12 15:19:34.78781538 +0000 UTC m=+3.597767891,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 12 15:20:01 crc kubenswrapper[5123]: E1212 15:20:01.524932 5123 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.188080e4f636f4c1 openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:0b638b8f4bb0070e40528db779baf6a2,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler-cert-syncer},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-12 15:19:34.800929985 +0000 UTC m=+3.610882496,LastTimestamp:2025-12-12 15:19:34.800929985 +0000 UTC m=+3.610882496,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 12 15:20:01 crc kubenswrapper[5123]: E1212 15:20:01.529399 5123 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" 
in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.188080e4f91f0cd0 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{etcd-ensure-env-vars},},Reason:Started,Message:Started container etcd-ensure-env-vars,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-12 15:19:34.849694928 +0000 UTC m=+3.659647439,LastTimestamp:2025-12-12 15:19:34.849694928 +0000 UTC m=+3.659647439,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 12 15:20:01 crc kubenswrapper[5123]: E1212 15:20:01.535747 5123 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188080e4fb1396e2 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-12 15:19:34.882498274 +0000 UTC m=+3.692450785,LastTimestamp:2025-12-12 15:19:34.882498274 +0000 UTC m=+3.692450785,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 12 15:20:01 crc kubenswrapper[5123]: E1212 15:20:01.542032 5123 event.go:359] 
"Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.188080e51261ebe7 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{etcd-resources-copy},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-12 15:19:35.273507815 +0000 UTC m=+4.083460316,LastTimestamp:2025-12-12 15:19:35.273507815 +0000 UTC m=+4.083460316,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 12 15:20:01 crc kubenswrapper[5123]: E1212 15:20:01.544358 5123 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.188080e59c1e4746 openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:0b638b8f4bb0070e40528db779baf6a2,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler-cert-syncer},},Reason:Created,Message:Created container: kube-scheduler-cert-syncer,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-12 15:19:37.584330566 +0000 UTC m=+6.394283077,LastTimestamp:2025-12-12 15:19:37.584330566 +0000 UTC m=+6.394283077,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 12 15:20:01 crc kubenswrapper[5123]: E1212 15:20:01.546831 5123 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.188080e59c1ec8d9 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{etcd-resources-copy},},Reason:Created,Message:Created container: etcd-resources-copy,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-12 15:19:37.584363737 +0000 UTC m=+6.394316258,LastTimestamp:2025-12-12 15:19:37.584363737 +0000 UTC m=+6.394316258,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 12 15:20:01 crc kubenswrapper[5123]: I1212 15:20:01.548189 5123 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Dec 12 15:20:01 crc kubenswrapper[5123]: E1212 15:20:01.549573 5123 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188080e59c28cd9e openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Created,Message:Created container: kube-apiserver,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-12 15:19:37.585020318 +0000 UTC m=+6.394972829,LastTimestamp:2025-12-12 15:19:37.585020318 +0000 UTC m=+6.394972829,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 12 15:20:01 crc kubenswrapper[5123]: E1212 15:20:01.554854 5123 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.188080e5a4914192 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{etcd-resources-copy},},Reason:Started,Message:Started container etcd-resources-copy,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-12 15:19:37.726083474 +0000 UTC m=+6.536035985,LastTimestamp:2025-12-12 15:19:37.726083474 +0000 UTC m=+6.536035985,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 12 15:20:01 crc kubenswrapper[5123]: E1212 15:20:01.559435 5123 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.188080e5a4e3e78b openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] 
[] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:0b638b8f4bb0070e40528db779baf6a2,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler-cert-syncer},},Reason:Started,Message:Started container kube-scheduler-cert-syncer,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-12 15:19:37.731499915 +0000 UTC m=+6.541452436,LastTimestamp:2025-12-12 15:19:37.731499915 +0000 UTC m=+6.541452436,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 12 15:20:01 crc kubenswrapper[5123]: E1212 15:20:01.563509 5123 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.188080e5a4fd87ac openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:0b638b8f4bb0070e40528db779baf6a2,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler-recovery-controller},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-12 15:19:37.733179308 +0000 UTC m=+6.543131819,LastTimestamp:2025-12-12 15:19:37.733179308 +0000 UTC m=+6.543131819,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 12 15:20:01 crc kubenswrapper[5123]: E1212 15:20:01.567504 5123 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User 
\"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188080e5a680ae61 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Started,Message:Started container kube-apiserver,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-12 15:19:37.758551649 +0000 UTC m=+6.568504160,LastTimestamp:2025-12-12 15:19:37.758551649 +0000 UTC m=+6.568504160,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 12 15:20:01 crc kubenswrapper[5123]: E1212 15:20:01.572523 5123 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188080e5a6b76581 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-cert-syncer},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-12 15:19:37.762137473 +0000 UTC m=+6.572089984,LastTimestamp:2025-12-12 15:19:37.762137473 +0000 UTC m=+6.572089984,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 12 15:20:01 crc kubenswrapper[5123]: E1212 15:20:01.576966 5123 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.188080e5b20158d2 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcdctl},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-12 15:19:37.951533266 +0000 UTC m=+6.761485777,LastTimestamp:2025-12-12 15:19:37.951533266 +0000 UTC m=+6.761485777,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 12 15:20:01 crc kubenswrapper[5123]: E1212 15:20:01.581275 5123 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.188080e600d956ac openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:0b638b8f4bb0070e40528db779baf6a2,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler-recovery-controller},},Reason:Created,Message:Created container: kube-scheduler-recovery-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-12 
15:19:39.27431134 +0000 UTC m=+8.084263871,LastTimestamp:2025-12-12 15:19:39.27431134 +0000 UTC m=+8.084263871,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 12 15:20:01 crc kubenswrapper[5123]: E1212 15:20:01.585569 5123 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.188080e602a706a5 openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:0b638b8f4bb0070e40528db779baf6a2,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler-recovery-controller},},Reason:Started,Message:Started container kube-scheduler-recovery-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-12 15:19:39.304568485 +0000 UTC m=+8.114520996,LastTimestamp:2025-12-12 15:19:39.304568485 +0000 UTC m=+8.114520996,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 12 15:20:01 crc kubenswrapper[5123]: E1212 15:20:01.590425 5123 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.188080e60ecd76d4 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcdctl},},Reason:Created,Message:Created container: 
etcdctl,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-12 15:19:39.508414164 +0000 UTC m=+8.318366675,LastTimestamp:2025-12-12 15:19:39.508414164 +0000 UTC m=+8.318366675,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 12 15:20:01 crc kubenswrapper[5123]: E1212 15:20:01.594970 5123 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.188080e619e9c69c openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcdctl},},Reason:Started,Message:Started container etcdctl,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-12 15:19:39.694818972 +0000 UTC m=+8.504771493,LastTimestamp:2025-12-12 15:19:39.694818972 +0000 UTC m=+8.504771493,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 12 15:20:01 crc kubenswrapper[5123]: E1212 15:20:01.599797 5123 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.188080e61a24af8e openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd},},Reason:Pulled,Message:Container image 
\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-12 15:19:39.698679694 +0000 UTC m=+8.508632195,LastTimestamp:2025-12-12 15:19:39.698679694 +0000 UTC m=+8.508632195,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 12 15:20:01 crc kubenswrapper[5123]: E1212 15:20:01.604283 5123 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.188080e6649bc542 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:9f0bc7fcb0822a2c13eb2d22cd8c0641,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager-cert-syncer},},Reason:Created,Message:Created container: kube-controller-manager-cert-syncer,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-12 15:19:40.947998018 +0000 UTC m=+9.757950529,LastTimestamp:2025-12-12 15:19:40.947998018 +0000 UTC m=+9.757950529,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 12 15:20:01 crc kubenswrapper[5123]: E1212 15:20:01.608839 5123 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188080e664b167ac openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-cert-syncer},},Reason:Created,Message:Created container: kube-apiserver-cert-syncer,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-12 15:19:40.949415852 +0000 UTC m=+9.759368363,LastTimestamp:2025-12-12 15:19:40.949415852 +0000 UTC m=+9.759368363,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 12 15:20:01 crc kubenswrapper[5123]: E1212 15:20:01.613760 5123 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.188080e665d4359e openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:9f0bc7fcb0822a2c13eb2d22cd8c0641,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager-cert-syncer},},Reason:Started,Message:Started container kube-controller-manager-cert-syncer,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-12 15:19:40.968474014 +0000 UTC m=+9.778426545,LastTimestamp:2025-12-12 15:19:40.968474014 +0000 UTC m=+9.778426545,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 12 15:20:01 crc kubenswrapper[5123]: E1212 15:20:01.617758 5123 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace 
\"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188080e665e085fc openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-cert-syncer},},Reason:Started,Message:Started container kube-apiserver-cert-syncer,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-12 15:19:40.96928102 +0000 UTC m=+9.779233531,LastTimestamp:2025-12-12 15:19:40.96928102 +0000 UTC m=+9.779233531,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 12 15:20:01 crc kubenswrapper[5123]: E1212 15:20:01.622491 5123 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.188080e665efd04a openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:9f0bc7fcb0822a2c13eb2d22cd8c0641,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager-recovery-controller},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-12 15:19:40.970283082 +0000 UTC m=+9.780235613,LastTimestamp:2025-12-12 15:19:40.970283082 +0000 UTC m=+9.780235613,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 12 15:20:01 crc kubenswrapper[5123]: E1212 15:20:01.626951 5123 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188080e665fdcbb2 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-cert-regeneration-controller},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-12 15:19:40.97119941 +0000 UTC m=+9.781151931,LastTimestamp:2025-12-12 15:19:40.97119941 +0000 UTC m=+9.781151931,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 12 15:20:01 crc kubenswrapper[5123]: E1212 15:20:01.630690 5123 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.188080e66838e603 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd},},Reason:Created,Message:Created container: etcd,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-12 15:19:41.008627203 +0000 UTC 
m=+9.818579714,LastTimestamp:2025-12-12 15:19:41.008627203 +0000 UTC m=+9.818579714,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 12 15:20:01 crc kubenswrapper[5123]: E1212 15:20:01.635955 5123 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.188080e66b7fffb9 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd},},Reason:Started,Message:Started container etcd,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-12 15:19:41.063618489 +0000 UTC m=+9.873571000,LastTimestamp:2025-12-12 15:19:41.063618489 +0000 UTC m=+9.873571000,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 12 15:20:01 crc kubenswrapper[5123]: E1212 15:20:01.640573 5123 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.188080e66c05486a openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-metrics},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\" already present on 
machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-12 15:19:41.072353386 +0000 UTC m=+9.882305927,LastTimestamp:2025-12-12 15:19:41.072353386 +0000 UTC m=+9.882305927,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 12 15:20:01 crc kubenswrapper[5123]: E1212 15:20:01.646698 5123 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188080e6b26f4626 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-cert-regeneration-controller},},Reason:Created,Message:Created container: kube-apiserver-cert-regeneration-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-12 15:19:42.253704742 +0000 UTC m=+11.063657253,LastTimestamp:2025-12-12 15:19:42.253704742 +0000 UTC m=+11.063657253,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 12 15:20:01 crc kubenswrapper[5123]: E1212 15:20:01.648728 5123 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188080e6b4375678 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-cert-regeneration-controller},},Reason:Started,Message:Started container kube-apiserver-cert-regeneration-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-12 15:19:42.283593336 +0000 UTC m=+11.093545857,LastTimestamp:2025-12-12 15:19:42.283593336 +0000 UTC m=+11.093545857,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 12 15:20:01 crc kubenswrapper[5123]: E1212 15:20:01.652709 5123 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188080e6b450a396 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-insecure-readyz},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-12 15:19:42.285251478 +0000 UTC m=+11.095203989,LastTimestamp:2025-12-12 15:19:42.285251478 +0000 UTC m=+11.095203989,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 12 15:20:01 crc kubenswrapper[5123]: E1212 15:20:01.656749 5123 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User 
\"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.188080e6b450d37b openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:9f0bc7fcb0822a2c13eb2d22cd8c0641,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager-recovery-controller},},Reason:Created,Message:Created container: kube-controller-manager-recovery-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-12 15:19:42.285263739 +0000 UTC m=+11.095216250,LastTimestamp:2025-12-12 15:19:42.285263739 +0000 UTC m=+11.095216250,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 12 15:20:01 crc kubenswrapper[5123]: E1212 15:20:01.661185 5123 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.188080e6b71b9b30 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-metrics},},Reason:Created,Message:Created container: etcd-metrics,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-12 15:19:42.332107568 +0000 UTC m=+11.142060079,LastTimestamp:2025-12-12 15:19:42.332107568 +0000 UTC m=+11.142060079,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 12 15:20:01 crc kubenswrapper[5123]: E1212 15:20:01.665600 5123 
event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.188080e6ba4f1031 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:9f0bc7fcb0822a2c13eb2d22cd8c0641,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager-recovery-controller},},Reason:Started,Message:Started container kube-controller-manager-recovery-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-12 15:19:42.385811505 +0000 UTC m=+11.195764016,LastTimestamp:2025-12-12 15:19:42.385811505 +0000 UTC m=+11.195764016,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 12 15:20:01 crc kubenswrapper[5123]: I1212 15:20:01.670267 5123 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 12 15:20:01 crc kubenswrapper[5123]: E1212 15:20:01.670426 5123 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.188080e6ba81bc98 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-metrics},},Reason:Started,Message:Started container etcd-metrics,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-12 15:19:42.38913244 +0000 UTC m=+11.199084951,LastTimestamp:2025-12-12 
15:19:42.38913244 +0000 UTC m=+11.199084951,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 12 15:20:01 crc kubenswrapper[5123]: I1212 15:20:01.671828 5123 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:20:01 crc kubenswrapper[5123]: I1212 15:20:01.671875 5123 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:20:01 crc kubenswrapper[5123]: I1212 15:20:01.671885 5123 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:20:01 crc kubenswrapper[5123]: I1212 15:20:01.671915 5123 kubelet_node_status.go:78] "Attempting to register node" node="crc" Dec 12 15:20:01 crc kubenswrapper[5123]: E1212 15:20:01.674820 5123 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.188080e6ba993632 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-readyz},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-12 15:19:42.390670898 +0000 UTC m=+11.200623419,LastTimestamp:2025-12-12 15:19:42.390670898 +0000 UTC m=+11.200623419,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 12 15:20:01 crc kubenswrapper[5123]: E1212 15:20:01.676269 
5123 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.188080e6d79a4160 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-readyz},},Reason:Created,Message:Created container: etcd-readyz,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-12 15:19:42.87727856 +0000 UTC m=+11.687231081,LastTimestamp:2025-12-12 15:19:42.87727856 +0000 UTC m=+11.687231081,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 12 15:20:01 crc kubenswrapper[5123]: E1212 15:20:01.676464 5123 kubelet_node_status.go:116] "Unable to register node with API server, error getting existing node" err="nodes \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"nodes\" in API group \"\" at the cluster scope" node="crc" Dec 12 15:20:01 crc kubenswrapper[5123]: E1212 15:20:01.678988 5123 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188080e6d819110f openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-insecure-readyz},},Reason:Created,Message:Created container: kube-apiserver-insecure-readyz,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-12 
15:19:42.885589263 +0000 UTC m=+11.695541774,LastTimestamp:2025-12-12 15:19:42.885589263 +0000 UTC m=+11.695541774,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 12 15:20:01 crc kubenswrapper[5123]: E1212 15:20:01.680276 5123 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.188080e6d9f21624 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-readyz},},Reason:Started,Message:Started container etcd-readyz,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-12 15:19:42.916589092 +0000 UTC m=+11.726541603,LastTimestamp:2025-12-12 15:19:42.916589092 +0000 UTC m=+11.726541603,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 12 15:20:01 crc kubenswrapper[5123]: E1212 15:20:01.681348 5123 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188080e6da1c537d openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-insecure-readyz},},Reason:Started,Message:Started container 
kube-apiserver-insecure-readyz,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-12 15:19:42.919357309 +0000 UTC m=+11.729309810,LastTimestamp:2025-12-12 15:19:42.919357309 +0000 UTC m=+11.729309810,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 12 15:20:01 crc kubenswrapper[5123]: E1212 15:20:01.683253 5123 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.188080e6da54413e openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-rev},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-12 15:19:42.923022654 +0000 UTC m=+11.732975175,LastTimestamp:2025-12-12 15:19:42.923022654 +0000 UTC m=+11.732975175,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 12 15:20:01 crc kubenswrapper[5123]: E1212 15:20:01.685761 5123 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188080e6da649575 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-12 15:19:42.924092789 +0000 UTC m=+11.734045310,LastTimestamp:2025-12-12 15:19:42.924092789 +0000 UTC m=+11.734045310,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 12 15:20:01 crc kubenswrapper[5123]: E1212 15:20:01.686916 5123 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188080e70f6f71bd openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Created,Message:Created container: kube-apiserver-check-endpoints,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-12 15:19:43.813996989 +0000 UTC m=+12.623949500,LastTimestamp:2025-12-12 15:19:43.813996989 +0000 UTC m=+12.623949500,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 12 15:20:01 crc kubenswrapper[5123]: E1212 15:20:01.691153 5123 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot 
create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188080e71495f0b2 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Started,Message:Started container kube-apiserver-check-endpoints,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-12 15:19:43.900405938 +0000 UTC m=+12.710358449,LastTimestamp:2025-12-12 15:19:43.900405938 +0000 UTC m=+12.710358449,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 12 15:20:01 crc kubenswrapper[5123]: E1212 15:20:01.695609 5123 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.188080e7150bb328 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-rev},},Reason:Created,Message:Created container: etcd-rev,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-12 15:19:43.908123432 +0000 UTC m=+12.718075953,LastTimestamp:2025-12-12 15:19:43.908123432 +0000 UTC m=+12.718075953,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 12 15:20:01 crc kubenswrapper[5123]: E1212 15:20:01.699476 5123 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" 
cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.188080e71776f78c openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-rev},},Reason:Started,Message:Started container etcd-rev,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-12 15:19:43.948707724 +0000 UTC m=+12.758660235,LastTimestamp:2025-12-12 15:19:43.948707724 +0000 UTC m=+12.758660235,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 12 15:20:01 crc kubenswrapper[5123]: E1212 15:20:01.705809 5123 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event=< Dec 12 15:20:01 crc kubenswrapper[5123]: &Event{ObjectMeta:{kube-controller-manager-crc.188080e8ae1c3705 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:9f0bc7fcb0822a2c13eb2d22cd8c0641,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:ProbeError,Message:Startup probe error: Get "https://localhost:10357/healthz": context deadline exceeded Dec 12 15:20:01 crc kubenswrapper[5123]: body: Dec 12 15:20:01 crc kubenswrapper[5123]: ,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-12 15:19:50.771087109 +0000 UTC m=+19.581039620,LastTimestamp:2025-12-12 15:19:50.771087109 +0000 UTC m=+19.581039620,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}
Dec 12 15:20:01 crc kubenswrapper[5123]: >
Dec 12 15:20:01 crc kubenswrapper[5123]: E1212 15:20:01.709938 5123 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.188080e8ae1fd966 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:9f0bc7fcb0822a2c13eb2d22cd8c0641,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Unhealthy,Message:Startup probe failed: Get \"https://localhost:10357/healthz\": context deadline exceeded,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-12 15:19:50.771325286 +0000 UTC m=+19.581277797,LastTimestamp:2025-12-12 15:19:50.771325286 +0000 UTC m=+19.581277797,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Dec 12 15:20:01 crc kubenswrapper[5123]: E1212 15:20:01.714457 5123 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event=<
Dec 12 15:20:01 crc kubenswrapper[5123]: &Event{ObjectMeta:{kube-apiserver-crc.188080e9f469bbb5 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:ProbeError,Message:Startup probe error: HTTP probe failed with statuscode: 403
Dec 12 15:20:01 crc kubenswrapper[5123]: body: {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403}
Dec 12 15:20:01 crc kubenswrapper[5123]: 
Dec 12 15:20:01 crc kubenswrapper[5123]: ,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-12 15:19:56.245539765 +0000 UTC m=+25.055492306,LastTimestamp:2025-12-12 15:19:56.245539765 +0000 UTC m=+25.055492306,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}
Dec 12 15:20:01 crc kubenswrapper[5123]: >
Dec 12 15:20:01 crc kubenswrapper[5123]: E1212 15:20:01.718109 5123 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188080e9f46cb5c3 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Unhealthy,Message:Startup probe failed: HTTP probe failed with statuscode: 403,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-12 15:19:56.245734851 +0000 UTC m=+25.055687362,LastTimestamp:2025-12-12 15:19:56.245734851 +0000 UTC m=+25.055687362,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Dec 12 15:20:01 crc kubenswrapper[5123]: E1212 15:20:01.722529 5123 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.188080e9f469bbb5\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event=<
Dec 12 15:20:01 crc kubenswrapper[5123]: &Event{ObjectMeta:{kube-apiserver-crc.188080e9f469bbb5 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:ProbeError,Message:Startup probe error: HTTP probe failed with statuscode: 403
Dec 12 15:20:01 crc kubenswrapper[5123]: body: {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403}
Dec 12 15:20:01 crc kubenswrapper[5123]: 
Dec 12 15:20:01 crc kubenswrapper[5123]: ,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-12 15:19:56.245539765 +0000 UTC m=+25.055492306,LastTimestamp:2025-12-12 15:19:56.267909671 +0000 UTC m=+25.077862182,Count:2,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}
Dec 12 15:20:01 crc kubenswrapper[5123]: >
Dec 12 15:20:01 crc kubenswrapper[5123]: E1212 15:20:01.726710 5123 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.188080e9f46cb5c3\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188080e9f46cb5c3 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Unhealthy,Message:Startup probe failed: HTTP probe failed with statuscode: 403,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-12 15:19:56.245734851 +0000 UTC m=+25.055687362,LastTimestamp:2025-12-12 15:19:56.267991664 +0000 UTC m=+25.077944165,Count:2,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Dec 12 15:20:01 crc kubenswrapper[5123]: E1212 15:20:01.731185 5123 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event=<
Dec 12 15:20:01 crc kubenswrapper[5123]: &Event{ObjectMeta:{kube-apiserver-crc.188080ea3d82ca51 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:ProbeError,Message:Startup probe error: HTTP probe failed with statuscode: 500
Dec 12 15:20:01 crc kubenswrapper[5123]: body: [+]ping ok
Dec 12 15:20:01 crc kubenswrapper[5123]: [+]log ok
Dec 12 15:20:01 crc kubenswrapper[5123]: [+]etcd ok
Dec 12 15:20:01 crc kubenswrapper[5123]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok
Dec 12 15:20:01 crc kubenswrapper[5123]: [+]poststarthook/openshift.io-api-request-count-filter ok
Dec 12 15:20:01 crc kubenswrapper[5123]: [+]poststarthook/openshift.io-startkubeinformers ok
Dec 12 15:20:01 crc kubenswrapper[5123]: [+]poststarthook/openshift.io-openshift-apiserver-reachable ok
Dec 12 15:20:01 crc kubenswrapper[5123]: [+]poststarthook/openshift.io-oauth-apiserver-reachable ok
Dec 12 15:20:01 crc kubenswrapper[5123]: [+]poststarthook/start-apiserver-admission-initializer ok
Dec 12 15:20:01 crc kubenswrapper[5123]: [+]poststarthook/generic-apiserver-start-informers ok
Dec 12 15:20:01 crc kubenswrapper[5123]: [+]poststarthook/priority-and-fairness-config-consumer ok
Dec 12 15:20:01 crc kubenswrapper[5123]: [+]poststarthook/priority-and-fairness-filter ok
Dec 12 15:20:01 crc kubenswrapper[5123]: [+]poststarthook/storage-object-count-tracker-hook ok
Dec 12 15:20:01 crc kubenswrapper[5123]: [+]poststarthook/start-apiextensions-informers ok
Dec 12 15:20:01 crc kubenswrapper[5123]: [+]poststarthook/start-apiextensions-controllers ok
Dec 12 15:20:01 crc kubenswrapper[5123]: [+]poststarthook/crd-informer-synced ok
Dec 12 15:20:01 crc kubenswrapper[5123]: [+]poststarthook/start-system-namespaces-controller ok
Dec 12 15:20:01 crc kubenswrapper[5123]: [+]poststarthook/start-cluster-authentication-info-controller ok
Dec 12 15:20:01 crc kubenswrapper[5123]: [+]poststarthook/start-kube-apiserver-identity-lease-controller ok
Dec 12 15:20:01 crc kubenswrapper[5123]: [+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
Dec 12 15:20:01 crc kubenswrapper[5123]: [+]poststarthook/start-legacy-token-tracking-controller ok
Dec 12 15:20:01 crc kubenswrapper[5123]: [+]poststarthook/start-service-ip-repair-controllers ok
Dec 12 15:20:01 crc kubenswrapper[5123]: [-]poststarthook/rbac/bootstrap-roles failed: reason withheld
Dec 12 15:20:01 crc kubenswrapper[5123]: [+]poststarthook/scheduling/bootstrap-system-priority-classes ok
Dec 12 15:20:01 crc kubenswrapper[5123]: [+]poststarthook/priority-and-fairness-config-producer ok
Dec 12 15:20:01 crc kubenswrapper[5123]: [+]poststarthook/bootstrap-controller ok
Dec 12 15:20:01 crc kubenswrapper[5123]: [+]poststarthook/start-kubernetes-service-cidr-controller ok
Dec 12 15:20:01 crc kubenswrapper[5123]: [+]poststarthook/aggregator-reload-proxy-client-cert ok
Dec 12 15:20:01 crc kubenswrapper[5123]: [+]poststarthook/start-kube-aggregator-informers ok
Dec 12 15:20:01 crc kubenswrapper[5123]: [+]poststarthook/apiservice-status-local-available-controller ok
Dec 12 15:20:01 crc kubenswrapper[5123]: [+]poststarthook/apiservice-status-remote-available-controller ok
Dec 12 15:20:01 crc kubenswrapper[5123]: [+]poststarthook/apiservice-registration-controller ok
Dec 12 15:20:01 crc kubenswrapper[5123]: [+]poststarthook/apiservice-wait-for-first-sync ok
Dec 12 15:20:01 crc kubenswrapper[5123]: [+]poststarthook/apiservice-discovery-controller ok
Dec 12 15:20:01 crc kubenswrapper[5123]: [+]poststarthook/kube-apiserver-autoregistration ok
Dec 12 15:20:01 crc kubenswrapper[5123]: [+]autoregister-completion ok
Dec 12 15:20:01 crc kubenswrapper[5123]: [+]poststarthook/apiservice-openapi-controller ok
Dec 12 15:20:01 crc kubenswrapper[5123]: [+]poststarthook/apiservice-openapiv3-controller ok
Dec 12 15:20:01 crc kubenswrapper[5123]: livez check failed
Dec 12 15:20:01 crc kubenswrapper[5123]: 
Dec 12 15:20:01 crc kubenswrapper[5123]: ,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-12 15:19:57.471918673 +0000 UTC m=+26.281871194,LastTimestamp:2025-12-12 15:19:57.471918673 +0000 UTC m=+26.281871194,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}
Dec 12 15:20:01 crc kubenswrapper[5123]: >
Dec 12 15:20:01 crc kubenswrapper[5123]: E1212 15:20:01.733940 5123 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found"
Dec 12 15:20:01 crc kubenswrapper[5123]: E1212 15:20:01.735845 5123 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188080ea3d851e46 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Unhealthy,Message:Startup probe failed: HTTP probe failed with statuscode: 500,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-12 15:19:57.472071238 +0000 UTC m=+26.282023759,LastTimestamp:2025-12-12 15:19:57.472071238 +0000 UTC m=+26.282023759,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Dec 12 15:20:01 crc kubenswrapper[5123]: E1212 15:20:01.740684 5123 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event=<
Dec 12 15:20:01 crc kubenswrapper[5123]: &Event{ObjectMeta:{kube-controller-manager-crc.188080eb01349ffc openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:9f0bc7fcb0822a2c13eb2d22cd8c0641,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:ProbeError,Message:Startup probe error: Get "https://localhost:10357/healthz": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
Dec 12 15:20:01 crc kubenswrapper[5123]: body: 
Dec 12 15:20:01 crc kubenswrapper[5123]: ,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-12 15:20:00.755130364 +0000 UTC m=+29.565082875,LastTimestamp:2025-12-12 15:20:00.755130364 +0000 UTC m=+29.565082875,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}
Dec 12 15:20:01 crc kubenswrapper[5123]: >
Dec 12 15:20:01 crc kubenswrapper[5123]: I1212 15:20:01.743947 5123 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-check-endpoints/0.log"
Dec 12 15:20:01 crc kubenswrapper[5123]: I1212 15:20:01.745446 5123 generic.go:358] "Generic (PLEG): container finished" podID="3a14caf222afb62aaabdc47808b6f944" containerID="7bc134fd3c197c3c40519d9c2a110bdf4ebc73ddb6e4aa259f0bfe308ced31dc" exitCode=255
Dec 12 15:20:01 crc kubenswrapper[5123]: I1212 15:20:01.745569 5123 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"3a14caf222afb62aaabdc47808b6f944","Type":"ContainerDied","Data":"7bc134fd3c197c3c40519d9c2a110bdf4ebc73ddb6e4aa259f0bfe308ced31dc"}
Dec 12 15:20:01 crc kubenswrapper[5123]: I1212 15:20:01.745748 5123 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Dec 12 15:20:01 crc kubenswrapper[5123]: I1212 15:20:01.745751 5123 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Dec 12 15:20:01 crc kubenswrapper[5123]: E1212 15:20:01.745957 5123 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.188080eb0135b2fe openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:9f0bc7fcb0822a2c13eb2d22cd8c0641,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Unhealthy,Message:Startup probe failed: Get \"https://localhost:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers),Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-12 15:20:00.755200766 +0000 UTC m=+29.565153287,LastTimestamp:2025-12-12 15:20:00.755200766 +0000 UTC m=+29.565153287,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Dec 12 15:20:01 crc kubenswrapper[5123]: I1212 15:20:01.746345 5123 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 12 15:20:01 crc kubenswrapper[5123]: I1212 15:20:01.746392 5123 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 12 15:20:01 crc kubenswrapper[5123]: I1212 15:20:01.746415 5123 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 12 15:20:01 crc kubenswrapper[5123]: I1212 15:20:01.746661 5123 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 12 15:20:01 crc kubenswrapper[5123]: I1212 15:20:01.746679 5123 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 12 15:20:01 crc kubenswrapper[5123]: I1212 15:20:01.746687 5123 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 12 15:20:01 crc kubenswrapper[5123]: E1212 15:20:01.746867 5123 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Dec 12 15:20:01 crc kubenswrapper[5123]: E1212 15:20:01.746922 5123 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Dec 12 15:20:01 crc kubenswrapper[5123]: I1212 15:20:01.747174 5123 scope.go:117] "RemoveContainer" containerID="7bc134fd3c197c3c40519d9c2a110bdf4ebc73ddb6e4aa259f0bfe308ced31dc"
Dec 12 15:20:01 crc kubenswrapper[5123]: E1212 15:20:01.750902 5123 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event=<
Dec 12 15:20:01 crc kubenswrapper[5123]: &Event{ObjectMeta:{kube-apiserver-crc.188080eb2257b535 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:ProbeError,Message:Liveness probe error: Get "https://192.168.126.11:17697/healthz": read tcp 192.168.126.11:44702->192.168.126.11:17697: read: connection reset by peer
Dec 12 15:20:01 crc kubenswrapper[5123]: body: 
Dec 12 15:20:01 crc kubenswrapper[5123]: ,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-12 15:20:01.311077685 +0000 UTC m=+30.121030196,LastTimestamp:2025-12-12 15:20:01.311077685 +0000 UTC m=+30.121030196,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}
Dec 12 15:20:01 crc kubenswrapper[5123]: >
Dec 12 15:20:01 crc kubenswrapper[5123]: E1212 15:20:01.757708 5123 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event=<
Dec 12 15:20:01 crc kubenswrapper[5123]: &Event{ObjectMeta:{kube-apiserver-crc.188080eb22587a66 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:ProbeError,Message:Readiness probe error: Get "https://192.168.126.11:17697/healthz": read tcp 192.168.126.11:44698->192.168.126.11:17697: read: connection reset by peer
Dec 12 15:20:01 crc kubenswrapper[5123]: body: 
Dec 12 15:20:01 crc kubenswrapper[5123]: ,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-12 15:20:01.311128166 +0000 UTC m=+30.121080677,LastTimestamp:2025-12-12 15:20:01.311128166 +0000 UTC m=+30.121080677,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}
Dec 12 15:20:01 crc kubenswrapper[5123]: >
Dec 12 15:20:01 crc kubenswrapper[5123]: E1212 15:20:01.774004 5123 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188080eb2258d3f9 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Unhealthy,Message:Liveness probe failed: Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:44702->192.168.126.11:17697: read: connection reset by peer,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-12 15:20:01.311151097 +0000 UTC m=+30.121103628,LastTimestamp:2025-12-12 15:20:01.311151097 +0000 UTC m=+30.121103628,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Dec 12 15:20:01 crc kubenswrapper[5123]: E1212 15:20:01.782500 5123 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188080eb225a46ce openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Unhealthy,Message:Readiness probe failed: Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:44698->192.168.126.11:17697: read: connection reset by peer,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-12 15:20:01.31124603 +0000 UTC m=+30.121198561,LastTimestamp:2025-12-12 15:20:01.31124603 +0000 UTC m=+30.121198561,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Dec 12 15:20:01 crc kubenswrapper[5123]: E1212 15:20:01.792392 5123 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.188080e6da649575\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188080e6da649575 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-12 15:19:42.924092789 +0000 UTC m=+11.734045310,LastTimestamp:2025-12-12 15:20:01.748550384 +0000 UTC m=+30.558502895,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Dec 12 15:20:02 crc kubenswrapper[5123]: E1212 15:20:02.033432 5123 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.188080e70f6f71bd\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188080e70f6f71bd openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Created,Message:Created container: kube-apiserver-check-endpoints,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-12 15:19:43.813996989 +0000 UTC m=+12.623949500,LastTimestamp:2025-12-12 15:20:02.02860958 +0000 UTC m=+30.838562091,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Dec 12 15:20:02 crc kubenswrapper[5123]: E1212 15:20:02.044189 5123 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.188080e71495f0b2\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188080e71495f0b2 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Started,Message:Started container kube-apiserver-check-endpoints,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-12 15:19:43.900405938 +0000 UTC m=+12.710358449,LastTimestamp:2025-12-12 15:20:02.039477594 +0000 UTC m=+30.849430105,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Dec 12 15:20:02 crc kubenswrapper[5123]: I1212 15:20:02.382528 5123 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc"
Dec 12 15:20:02 crc kubenswrapper[5123]: I1212 15:20:02.555476 5123 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Dec 12 15:20:02 crc kubenswrapper[5123]: I1212 15:20:02.761904 5123 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-check-endpoints/0.log"
Dec 12 15:20:02 crc kubenswrapper[5123]: I1212 15:20:02.763956 5123 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"3a14caf222afb62aaabdc47808b6f944","Type":"ContainerStarted","Data":"407205ad265de833748fefcb93c28e7bf80318be1627d09839bed4e757a9dbdb"}
Dec 12 15:20:02 crc kubenswrapper[5123]: I1212 15:20:02.764261 5123 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Dec 12 15:20:02 crc kubenswrapper[5123]: I1212 15:20:02.765254 5123 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 12 15:20:02 crc kubenswrapper[5123]: I1212 15:20:02.765294 5123 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 12 15:20:02 crc kubenswrapper[5123]: I1212 15:20:02.765304 5123 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 12 15:20:02 crc kubenswrapper[5123]: E1212 15:20:02.765802 5123 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Dec 12 15:20:02 crc kubenswrapper[5123]: I1212 15:20:02.768740 5123 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc"
Dec 12 15:20:03 crc kubenswrapper[5123]: I1212 15:20:03.551782 5123 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Dec 12 15:20:03 crc kubenswrapper[5123]: I1212 15:20:03.770629 5123 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-check-endpoints/1.log"
Dec 12 15:20:03 crc kubenswrapper[5123]: I1212 15:20:03.771880 5123 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-check-endpoints/0.log"
Dec 12 15:20:03 crc kubenswrapper[5123]: I1212 15:20:03.773816 5123 generic.go:358] "Generic (PLEG): container finished" podID="3a14caf222afb62aaabdc47808b6f944" containerID="407205ad265de833748fefcb93c28e7bf80318be1627d09839bed4e757a9dbdb" exitCode=255
Dec 12 15:20:03 crc kubenswrapper[5123]: I1212 15:20:03.773912 5123 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"3a14caf222afb62aaabdc47808b6f944","Type":"ContainerDied","Data":"407205ad265de833748fefcb93c28e7bf80318be1627d09839bed4e757a9dbdb"}
Dec 12 15:20:03 crc kubenswrapper[5123]: I1212 15:20:03.773998 5123 scope.go:117] "RemoveContainer" containerID="7bc134fd3c197c3c40519d9c2a110bdf4ebc73ddb6e4aa259f0bfe308ced31dc"
Dec 12 15:20:03 crc kubenswrapper[5123]: I1212 15:20:03.774298 5123 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Dec 12 15:20:03 crc kubenswrapper[5123]: I1212 15:20:03.775245 5123 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 12 15:20:03 crc kubenswrapper[5123]: I1212 15:20:03.775283 5123 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 12 15:20:03 crc kubenswrapper[5123]: I1212 15:20:03.775298 5123 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 12 15:20:03 crc kubenswrapper[5123]: E1212 15:20:03.775674 5123 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Dec 12 15:20:03 crc kubenswrapper[5123]: I1212 15:20:03.776040 5123 scope.go:117] "RemoveContainer" containerID="407205ad265de833748fefcb93c28e7bf80318be1627d09839bed4e757a9dbdb"
Dec 12 15:20:03 crc kubenswrapper[5123]: E1212 15:20:03.776817 5123 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944"
Dec 12 15:20:03 crc kubenswrapper[5123]: E1212 15:20:03.782328 5123 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188080ebb54ed1cf openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:BackOff,Message:Back-off restarting failed container kube-apiserver-check-endpoints in pod kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944),Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-12 15:20:03.776745935 +0000 UTC m=+32.586698446,LastTimestamp:2025-12-12 15:20:03.776745935 +0000 UTC m=+32.586698446,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Dec 12 15:20:04 crc kubenswrapper[5123]: I1212 15:20:04.675809 5123 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Dec 12 15:20:04 crc kubenswrapper[5123]: I1212 15:20:04.779522 5123 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-check-endpoints/1.log"
Dec 12 15:20:04 crc kubenswrapper[5123]: I1212 15:20:04.788836 5123 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Dec 12 15:20:04 crc kubenswrapper[5123]: I1212 15:20:04.790139 5123 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 12 15:20:04 crc kubenswrapper[5123]: I1212 15:20:04.790318 5123 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 12 15:20:04 crc kubenswrapper[5123]: I1212 15:20:04.790412 5123 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 12 15:20:04 crc kubenswrapper[5123]: E1212 15:20:04.790875 5123 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Dec 12 15:20:04 crc kubenswrapper[5123]: I1212 15:20:04.791304 5123 scope.go:117] "RemoveContainer" containerID="407205ad265de833748fefcb93c28e7bf80318be1627d09839bed4e757a9dbdb"
Dec 12 15:20:04 crc kubenswrapper[5123]: E1212 15:20:04.791654 5123 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944"
Dec 12 15:20:04 crc kubenswrapper[5123]: E1212 15:20:04.796833 5123 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.188080ebb54ed1cf\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188080ebb54ed1cf openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:BackOff,Message:Back-off restarting failed container kube-apiserver-check-endpoints in pod kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944),Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-12 15:20:03.776745935 +0000 UTC m=+32.586698446,LastTimestamp:2025-12-12 15:20:04.791620765 +0000 UTC m=+33.601573276,Count:2,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Dec 12 15:20:05 crc kubenswrapper[5123]: I1212 15:20:05.630812 5123 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Dec 12 15:20:07 crc kubenswrapper[5123]: I1212 15:20:07.188112 5123 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Dec 12 15:20:07 crc kubenswrapper[5123]: I1212 15:20:07.563159 5123 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Dec 12 15:20:07 crc kubenswrapper[5123]: I1212 15:20:07.763933 5123 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Dec 12 15:20:07 crc kubenswrapper[5123]: I1212 15:20:07.764336 5123 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Dec 12 15:20:07 crc kubenswrapper[5123]: I1212 15:20:07.766120 5123 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 12 15:20:07 crc kubenswrapper[5123]: I1212 15:20:07.766258 5123 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 12 15:20:07 crc kubenswrapper[5123]: I1212 15:20:07.766284 5123 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 12 15:20:07 crc kubenswrapper[5123]: E1212 15:20:07.766960 5123 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Dec 12 15:20:07 crc kubenswrapper[5123]: I1212 15:20:07.771771 5123 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Dec 12 15:20:08 crc kubenswrapper[5123]: E1212 15:20:08.194470 5123 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="7s"
Dec 12 15:20:08 crc kubenswrapper[5123]: I1212 15:20:08.306027 5123 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Dec 12 15:20:08 crc kubenswrapper[5123]: I1212 15:20:08.306858 5123 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 12 15:20:08 crc kubenswrapper[5123]: I1212 15:20:08.306914 5123 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 12 15:20:08 crc kubenswrapper[5123]: I1212 15:20:08.306943 5123 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 12 15:20:08 crc kubenswrapper[5123]: E1212 15:20:08.307474 5123 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Dec 12 15:20:08 crc kubenswrapper[5123]: I1212 15:20:08.656976 5123 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing:
csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Dec 12 15:20:08 crc kubenswrapper[5123]: I1212 15:20:08.677521 5123 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 12 15:20:08 crc kubenswrapper[5123]: I1212 15:20:08.678808 5123 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:20:08 crc kubenswrapper[5123]: I1212 15:20:08.678872 5123 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:20:08 crc kubenswrapper[5123]: I1212 15:20:08.678884 5123 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:20:08 crc kubenswrapper[5123]: I1212 15:20:08.678912 5123 kubelet_node_status.go:78] "Attempting to register node" node="crc" Dec 12 15:20:08 crc kubenswrapper[5123]: E1212 15:20:08.693285 5123 kubelet_node_status.go:116] "Unable to register node with API server, error getting existing node" err="nodes \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"nodes\" in API group \"\" at the cluster scope" node="crc" Dec 12 15:20:09 crc kubenswrapper[5123]: I1212 15:20:09.550716 5123 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Dec 12 15:20:10 crc kubenswrapper[5123]: I1212 15:20:10.550745 5123 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Dec 12 15:20:11 crc kubenswrapper[5123]: I1212 15:20:11.230095 5123 kubelet.go:2658] "SyncLoop (probe)" 
probe="liveness" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 12 15:20:11 crc kubenswrapper[5123]: I1212 15:20:11.230490 5123 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 12 15:20:11 crc kubenswrapper[5123]: I1212 15:20:11.231819 5123 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:20:11 crc kubenswrapper[5123]: I1212 15:20:11.231868 5123 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:20:11 crc kubenswrapper[5123]: I1212 15:20:11.231892 5123 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:20:11 crc kubenswrapper[5123]: E1212 15:20:11.232387 5123 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Dec 12 15:20:11 crc kubenswrapper[5123]: I1212 15:20:11.232760 5123 scope.go:117] "RemoveContainer" containerID="407205ad265de833748fefcb93c28e7bf80318be1627d09839bed4e757a9dbdb" Dec 12 15:20:11 crc kubenswrapper[5123]: E1212 15:20:11.233019 5123 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" Dec 12 15:20:11 crc kubenswrapper[5123]: E1212 15:20:11.237635 5123 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.188080ebb54ed1cf\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188080ebb54ed1cf 
openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:BackOff,Message:Back-off restarting failed container kube-apiserver-check-endpoints in pod kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944),Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-12 15:20:03.776745935 +0000 UTC m=+32.586698446,LastTimestamp:2025-12-12 15:20:11.232987849 +0000 UTC m=+40.042940360,Count:3,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 12 15:20:11 crc kubenswrapper[5123]: I1212 15:20:11.552456 5123 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Dec 12 15:20:11 crc kubenswrapper[5123]: E1212 15:20:11.830409 5123 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Dec 12 15:20:12 crc kubenswrapper[5123]: I1212 15:20:12.496887 5123 certificate_manager.go:566] "Rotating certificates" logger="kubernetes.io/kube-apiserver-client-kubelet" Dec 12 15:20:12 crc kubenswrapper[5123]: I1212 15:20:12.512977 5123 reflector.go:430] "Caches populated" logger="kubernetes.io/kube-apiserver-client-kubelet" type="*v1.CertificateSigningRequest" reflector="k8s.io/client-go/tools/watch/informerwatcher.go:162" Dec 12 15:20:12 crc kubenswrapper[5123]: I1212 15:20:12.551992 5123 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" 
in API group "storage.k8s.io" at the cluster scope Dec 12 15:20:12 crc kubenswrapper[5123]: I1212 15:20:12.765346 5123 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 12 15:20:12 crc kubenswrapper[5123]: I1212 15:20:12.765742 5123 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 12 15:20:12 crc kubenswrapper[5123]: I1212 15:20:12.767337 5123 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:20:12 crc kubenswrapper[5123]: I1212 15:20:12.767381 5123 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:20:12 crc kubenswrapper[5123]: I1212 15:20:12.767394 5123 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:20:12 crc kubenswrapper[5123]: E1212 15:20:12.767781 5123 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Dec 12 15:20:12 crc kubenswrapper[5123]: I1212 15:20:12.768120 5123 scope.go:117] "RemoveContainer" containerID="407205ad265de833748fefcb93c28e7bf80318be1627d09839bed4e757a9dbdb" Dec 12 15:20:12 crc kubenswrapper[5123]: E1212 15:20:12.768418 5123 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" Dec 12 15:20:12 crc kubenswrapper[5123]: E1212 15:20:12.777066 5123 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.188080ebb54ed1cf\" is forbidden: User \"system:anonymous\" cannot 
patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188080ebb54ed1cf openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:BackOff,Message:Back-off restarting failed container kube-apiserver-check-endpoints in pod kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944),Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-12 15:20:03.776745935 +0000 UTC m=+32.586698446,LastTimestamp:2025-12-12 15:20:12.768372067 +0000 UTC m=+41.578324588,Count:4,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 12 15:20:13 crc kubenswrapper[5123]: I1212 15:20:13.552284 5123 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Dec 12 15:20:14 crc kubenswrapper[5123]: I1212 15:20:14.650527 5123 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Dec 12 15:20:15 crc kubenswrapper[5123]: E1212 15:20:15.201329 5123 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="7s" Dec 12 15:20:15 crc kubenswrapper[5123]: I1212 15:20:15.552479 5123 csi_plugin.go:988] Failed 
to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Dec 12 15:20:15 crc kubenswrapper[5123]: I1212 15:20:15.693765 5123 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 12 15:20:15 crc kubenswrapper[5123]: I1212 15:20:15.695190 5123 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:20:15 crc kubenswrapper[5123]: I1212 15:20:15.695260 5123 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:20:15 crc kubenswrapper[5123]: I1212 15:20:15.695273 5123 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:20:15 crc kubenswrapper[5123]: I1212 15:20:15.695309 5123 kubelet_node_status.go:78] "Attempting to register node" node="crc" Dec 12 15:20:15 crc kubenswrapper[5123]: E1212 15:20:15.705879 5123 kubelet_node_status.go:116] "Unable to register node with API server, error getting existing node" err="nodes \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"nodes\" in API group \"\" at the cluster scope" node="crc" Dec 12 15:20:16 crc kubenswrapper[5123]: I1212 15:20:16.552730 5123 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Dec 12 15:20:17 crc kubenswrapper[5123]: I1212 15:20:17.552847 5123 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Dec 12 15:20:18 crc kubenswrapper[5123]: I1212 
15:20:18.553072 5123 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Dec 12 15:20:19 crc kubenswrapper[5123]: I1212 15:20:19.558327 5123 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Dec 12 15:20:20 crc kubenswrapper[5123]: I1212 15:20:20.552001 5123 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Dec 12 15:20:21 crc kubenswrapper[5123]: E1212 15:20:21.241868 5123 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:anonymous\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Dec 12 15:20:21 crc kubenswrapper[5123]: I1212 15:20:21.571587 5123 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Dec 12 15:20:21 crc kubenswrapper[5123]: E1212 15:20:21.831316 5123 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Dec 12 15:20:22 crc kubenswrapper[5123]: E1212 15:20:22.022058 5123 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:anonymous\" cannot list resource \"services\" in API group \"\" at the 
cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Dec 12 15:20:22 crc kubenswrapper[5123]: E1212 15:20:22.022690 5123 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: nodes \"crc\" is forbidden: User \"system:anonymous\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Dec 12 15:20:22 crc kubenswrapper[5123]: E1212 15:20:22.206954 5123 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="7s" Dec 12 15:20:22 crc kubenswrapper[5123]: I1212 15:20:22.551979 5123 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Dec 12 15:20:22 crc kubenswrapper[5123]: I1212 15:20:22.706334 5123 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 12 15:20:22 crc kubenswrapper[5123]: I1212 15:20:22.707541 5123 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:20:22 crc kubenswrapper[5123]: I1212 15:20:22.707587 5123 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:20:22 crc kubenswrapper[5123]: I1212 15:20:22.707599 5123 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:20:22 crc kubenswrapper[5123]: I1212 15:20:22.707628 5123 kubelet_node_status.go:78] "Attempting to register node" node="crc" Dec 12 15:20:22 crc kubenswrapper[5123]: E1212 15:20:22.717626 5123 
kubelet_node_status.go:116] "Unable to register node with API server, error getting existing node" err="nodes \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"nodes\" in API group \"\" at the cluster scope" node="crc" Dec 12 15:20:22 crc kubenswrapper[5123]: I1212 15:20:22.766017 5123 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Dec 12 15:20:22 crc kubenswrapper[5123]: I1212 15:20:22.766470 5123 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 12 15:20:22 crc kubenswrapper[5123]: I1212 15:20:22.767436 5123 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:20:22 crc kubenswrapper[5123]: I1212 15:20:22.767474 5123 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:20:22 crc kubenswrapper[5123]: I1212 15:20:22.767488 5123 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:20:22 crc kubenswrapper[5123]: E1212 15:20:22.767824 5123 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Dec 12 15:20:23 crc kubenswrapper[5123]: I1212 15:20:23.551511 5123 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Dec 12 15:20:23 crc kubenswrapper[5123]: I1212 15:20:23.639727 5123 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 12 15:20:23 crc kubenswrapper[5123]: I1212 15:20:23.640844 5123 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:20:23 crc 
kubenswrapper[5123]: I1212 15:20:23.640892 5123 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:20:23 crc kubenswrapper[5123]: I1212 15:20:23.640904 5123 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:20:23 crc kubenswrapper[5123]: E1212 15:20:23.641383 5123 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Dec 12 15:20:23 crc kubenswrapper[5123]: I1212 15:20:23.641715 5123 scope.go:117] "RemoveContainer" containerID="407205ad265de833748fefcb93c28e7bf80318be1627d09839bed4e757a9dbdb" Dec 12 15:20:23 crc kubenswrapper[5123]: E1212 15:20:23.652340 5123 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.188080e6da649575\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188080e6da649575 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-12 15:19:42.924092789 +0000 UTC m=+11.734045310,LastTimestamp:2025-12-12 15:20:23.644646324 +0000 UTC m=+52.454598835,Count:3,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 12 15:20:23 crc kubenswrapper[5123]: E1212 15:20:23.915043 5123 
event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.188080e70f6f71bd\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188080e70f6f71bd openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Created,Message:Created container: kube-apiserver-check-endpoints,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-12 15:19:43.813996989 +0000 UTC m=+12.623949500,LastTimestamp:2025-12-12 15:20:23.908829885 +0000 UTC m=+52.718782396,Count:3,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 12 15:20:23 crc kubenswrapper[5123]: E1212 15:20:23.929296 5123 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.188080e71495f0b2\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188080e71495f0b2 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Started,Message:Started container kube-apiserver-check-endpoints,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-12 15:19:43.900405938 +0000 UTC m=+12.710358449,LastTimestamp:2025-12-12 15:20:23.922891744 +0000 UTC 
m=+52.732844255,Count:3,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 12 15:20:24 crc kubenswrapper[5123]: E1212 15:20:24.118440 5123 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User \"system:anonymous\" cannot list resource \"runtimeclasses\" in API group \"node.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Dec 12 15:20:24 crc kubenswrapper[5123]: I1212 15:20:24.552092 5123 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Dec 12 15:20:24 crc kubenswrapper[5123]: I1212 15:20:24.678873 5123 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-check-endpoints/1.log" Dec 12 15:20:24 crc kubenswrapper[5123]: I1212 15:20:24.681493 5123 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"3a14caf222afb62aaabdc47808b6f944","Type":"ContainerStarted","Data":"66cb36c6db4a4cc176139a7fd83683b8939f05d2d177a0bb231d70fc115b6b19"} Dec 12 15:20:24 crc kubenswrapper[5123]: I1212 15:20:24.681780 5123 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 12 15:20:24 crc kubenswrapper[5123]: I1212 15:20:24.682598 5123 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:20:24 crc kubenswrapper[5123]: I1212 15:20:24.682681 5123 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:20:24 crc kubenswrapper[5123]: I1212 15:20:24.682697 
5123 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:20:24 crc kubenswrapper[5123]: E1212 15:20:24.683153 5123 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Dec 12 15:20:25 crc kubenswrapper[5123]: I1212 15:20:25.551862 5123 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Dec 12 15:20:25 crc kubenswrapper[5123]: I1212 15:20:25.686135 5123 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-check-endpoints/2.log" Dec 12 15:20:25 crc kubenswrapper[5123]: I1212 15:20:25.686819 5123 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-check-endpoints/1.log" Dec 12 15:20:25 crc kubenswrapper[5123]: I1212 15:20:25.688978 5123 generic.go:358] "Generic (PLEG): container finished" podID="3a14caf222afb62aaabdc47808b6f944" containerID="66cb36c6db4a4cc176139a7fd83683b8939f05d2d177a0bb231d70fc115b6b19" exitCode=255 Dec 12 15:20:25 crc kubenswrapper[5123]: I1212 15:20:25.689058 5123 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"3a14caf222afb62aaabdc47808b6f944","Type":"ContainerDied","Data":"66cb36c6db4a4cc176139a7fd83683b8939f05d2d177a0bb231d70fc115b6b19"} Dec 12 15:20:25 crc kubenswrapper[5123]: I1212 15:20:25.689103 5123 scope.go:117] "RemoveContainer" containerID="407205ad265de833748fefcb93c28e7bf80318be1627d09839bed4e757a9dbdb" Dec 12 15:20:25 crc kubenswrapper[5123]: I1212 15:20:25.689385 5123 kubelet_node_status.go:413] "Setting node annotation to enable volume controller 
attach/detach" Dec 12 15:20:25 crc kubenswrapper[5123]: I1212 15:20:25.698047 5123 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:20:25 crc kubenswrapper[5123]: I1212 15:20:25.698123 5123 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:20:25 crc kubenswrapper[5123]: I1212 15:20:25.698139 5123 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:20:25 crc kubenswrapper[5123]: E1212 15:20:25.699011 5123 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Dec 12 15:20:25 crc kubenswrapper[5123]: I1212 15:20:25.699743 5123 scope.go:117] "RemoveContainer" containerID="66cb36c6db4a4cc176139a7fd83683b8939f05d2d177a0bb231d70fc115b6b19" Dec 12 15:20:25 crc kubenswrapper[5123]: E1212 15:20:25.700152 5123 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" Dec 12 15:20:25 crc kubenswrapper[5123]: E1212 15:20:25.706378 5123 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.188080ebb54ed1cf\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188080ebb54ed1cf openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:BackOff,Message:Back-off restarting failed container kube-apiserver-check-endpoints in pod kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944),Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-12 15:20:03.776745935 +0000 UTC m=+32.586698446,LastTimestamp:2025-12-12 15:20:25.700105264 +0000 UTC m=+54.510057775,Count:5,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 12 15:20:26 crc kubenswrapper[5123]: I1212 15:20:26.552896 5123 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Dec 12 15:20:26 crc kubenswrapper[5123]: I1212 15:20:26.693952 5123 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-check-endpoints/2.log" Dec 12 15:20:27 crc kubenswrapper[5123]: I1212 15:20:27.552269 5123 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Dec 12 15:20:28 crc kubenswrapper[5123]: I1212 15:20:28.552176 5123 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Dec 12 15:20:29 crc kubenswrapper[5123]: E1212 15:20:29.209968 5123 controller.go:145] "Failed to ensure 
lease exists, will retry" err="leases.coordination.k8s.io \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="7s" Dec 12 15:20:29 crc kubenswrapper[5123]: I1212 15:20:29.552680 5123 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Dec 12 15:20:29 crc kubenswrapper[5123]: I1212 15:20:29.718146 5123 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 12 15:20:29 crc kubenswrapper[5123]: I1212 15:20:29.719561 5123 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:20:29 crc kubenswrapper[5123]: I1212 15:20:29.719615 5123 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:20:29 crc kubenswrapper[5123]: I1212 15:20:29.719630 5123 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:20:29 crc kubenswrapper[5123]: I1212 15:20:29.719660 5123 kubelet_node_status.go:78] "Attempting to register node" node="crc" Dec 12 15:20:29 crc kubenswrapper[5123]: E1212 15:20:29.730634 5123 kubelet_node_status.go:116] "Unable to register node with API server, error getting existing node" err="nodes \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"nodes\" in API group \"\" at the cluster scope" node="crc" Dec 12 15:20:30 crc kubenswrapper[5123]: I1212 15:20:30.554354 5123 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Dec 12 15:20:31 crc kubenswrapper[5123]: 
I1212 15:20:31.229734 5123 kubelet.go:2658] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 12 15:20:31 crc kubenswrapper[5123]: I1212 15:20:31.230137 5123 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 12 15:20:31 crc kubenswrapper[5123]: I1212 15:20:31.231329 5123 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:20:31 crc kubenswrapper[5123]: I1212 15:20:31.231471 5123 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:20:31 crc kubenswrapper[5123]: I1212 15:20:31.231550 5123 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:20:31 crc kubenswrapper[5123]: E1212 15:20:31.232041 5123 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Dec 12 15:20:31 crc kubenswrapper[5123]: I1212 15:20:31.232503 5123 scope.go:117] "RemoveContainer" containerID="66cb36c6db4a4cc176139a7fd83683b8939f05d2d177a0bb231d70fc115b6b19" Dec 12 15:20:31 crc kubenswrapper[5123]: E1212 15:20:31.232812 5123 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" Dec 12 15:20:31 crc kubenswrapper[5123]: E1212 15:20:31.235672 5123 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.188080ebb54ed1cf\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" 
event="&Event{ObjectMeta:{kube-apiserver-crc.188080ebb54ed1cf openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:BackOff,Message:Back-off restarting failed container kube-apiserver-check-endpoints in pod kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944),Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-12 15:20:03.776745935 +0000 UTC m=+32.586698446,LastTimestamp:2025-12-12 15:20:31.232779285 +0000 UTC m=+60.042731796,Count:6,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 12 15:20:31 crc kubenswrapper[5123]: I1212 15:20:31.555504 5123 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Dec 12 15:20:31 crc kubenswrapper[5123]: E1212 15:20:31.831652 5123 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Dec 12 15:20:32 crc kubenswrapper[5123]: I1212 15:20:32.552193 5123 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Dec 12 15:20:33 crc kubenswrapper[5123]: I1212 15:20:33.555293 5123 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Dec 12 15:20:34 
crc kubenswrapper[5123]: I1212 15:20:34.552935 5123 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Dec 12 15:20:34 crc kubenswrapper[5123]: I1212 15:20:34.682631 5123 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 12 15:20:34 crc kubenswrapper[5123]: I1212 15:20:34.682997 5123 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 12 15:20:34 crc kubenswrapper[5123]: I1212 15:20:34.684182 5123 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:20:34 crc kubenswrapper[5123]: I1212 15:20:34.684263 5123 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:20:34 crc kubenswrapper[5123]: I1212 15:20:34.684280 5123 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:20:34 crc kubenswrapper[5123]: E1212 15:20:34.684819 5123 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Dec 12 15:20:34 crc kubenswrapper[5123]: I1212 15:20:34.685291 5123 scope.go:117] "RemoveContainer" containerID="66cb36c6db4a4cc176139a7fd83683b8939f05d2d177a0bb231d70fc115b6b19" Dec 12 15:20:34 crc kubenswrapper[5123]: E1212 15:20:34.685586 5123 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" 
podUID="3a14caf222afb62aaabdc47808b6f944" Dec 12 15:20:34 crc kubenswrapper[5123]: E1212 15:20:34.691833 5123 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.188080ebb54ed1cf\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188080ebb54ed1cf openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:BackOff,Message:Back-off restarting failed container kube-apiserver-check-endpoints in pod kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944),Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-12 15:20:03.776745935 +0000 UTC m=+32.586698446,LastTimestamp:2025-12-12 15:20:34.685540922 +0000 UTC m=+63.495493443,Count:7,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 12 15:20:35 crc kubenswrapper[5123]: I1212 15:20:35.552468 5123 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Dec 12 15:20:36 crc kubenswrapper[5123]: E1212 15:20:36.217172 5123 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="7s" Dec 12 15:20:36 crc kubenswrapper[5123]: I1212 15:20:36.555023 5123 csi_plugin.go:988] Failed to contact API server when waiting 
for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Dec 12 15:20:36 crc kubenswrapper[5123]: I1212 15:20:36.731842 5123 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 12 15:20:36 crc kubenswrapper[5123]: I1212 15:20:36.732823 5123 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:20:36 crc kubenswrapper[5123]: I1212 15:20:36.732885 5123 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:20:36 crc kubenswrapper[5123]: I1212 15:20:36.732901 5123 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:20:36 crc kubenswrapper[5123]: I1212 15:20:36.732930 5123 kubelet_node_status.go:78] "Attempting to register node" node="crc" Dec 12 15:20:36 crc kubenswrapper[5123]: E1212 15:20:36.742367 5123 kubelet_node_status.go:116] "Unable to register node with API server, error getting existing node" err="nodes \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"nodes\" in API group \"\" at the cluster scope" node="crc" Dec 12 15:20:37 crc kubenswrapper[5123]: I1212 15:20:37.552680 5123 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Dec 12 15:20:38 crc kubenswrapper[5123]: I1212 15:20:38.549250 5123 csr.go:274] "Certificate signing request is approved, waiting to be issued" logger="kubernetes.io/kube-apiserver-client-kubelet" csr="csr-4fpg9" Dec 12 15:20:38 crc kubenswrapper[5123]: I1212 15:20:38.551142 5123 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is 
forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Dec 12 15:20:38 crc kubenswrapper[5123]: I1212 15:20:38.558612 5123 csr.go:270] "Certificate signing request is issued" logger="kubernetes.io/kube-apiserver-client-kubelet" csr="csr-4fpg9" Dec 12 15:20:38 crc kubenswrapper[5123]: I1212 15:20:38.653450 5123 reconstruct.go:205] "DevicePaths of reconstructed volumes updated" Dec 12 15:20:39 crc kubenswrapper[5123]: I1212 15:20:39.453406 5123 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials" Dec 12 15:20:39 crc kubenswrapper[5123]: I1212 15:20:39.563605 5123 certificate_manager.go:715] "Certificate rotation deadline determined" logger="kubernetes.io/kube-apiserver-client-kubelet" expiration="2026-01-11 15:15:38 +0000 UTC" deadline="2026-01-04 23:04:57.701088163 +0000 UTC" Dec 12 15:20:39 crc kubenswrapper[5123]: I1212 15:20:39.563733 5123 certificate_manager.go:431] "Waiting for next certificate rotation" logger="kubernetes.io/kube-apiserver-client-kubelet" sleep="559h44m18.137362153s" Dec 12 15:20:41 crc kubenswrapper[5123]: E1212 15:20:41.832485 5123 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Dec 12 15:20:43 crc kubenswrapper[5123]: I1212 15:20:43.743292 5123 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 12 15:20:43 crc kubenswrapper[5123]: I1212 15:20:43.745331 5123 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:20:43 crc kubenswrapper[5123]: I1212 15:20:43.745417 5123 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:20:43 crc kubenswrapper[5123]: I1212 15:20:43.745432 5123 kubelet_node_status.go:736] "Recording event message for node" node="crc" 
event="NodeHasSufficientPID" Dec 12 15:20:43 crc kubenswrapper[5123]: I1212 15:20:43.745645 5123 kubelet_node_status.go:78] "Attempting to register node" node="crc" Dec 12 15:20:43 crc kubenswrapper[5123]: I1212 15:20:43.761979 5123 kubelet_node_status.go:127] "Node was previously registered" node="crc" Dec 12 15:20:43 crc kubenswrapper[5123]: I1212 15:20:43.762455 5123 kubelet_node_status.go:81] "Successfully registered node" node="crc" Dec 12 15:20:43 crc kubenswrapper[5123]: E1212 15:20:43.762485 5123 kubelet_node_status.go:597] "Error updating node status, will retry" err="error getting node \"crc\": node \"crc\" not found" Dec 12 15:20:43 crc kubenswrapper[5123]: I1212 15:20:43.768715 5123 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:20:43 crc kubenswrapper[5123]: I1212 15:20:43.768809 5123 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:20:43 crc kubenswrapper[5123]: I1212 15:20:43.768825 5123 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:20:43 crc kubenswrapper[5123]: I1212 15:20:43.768847 5123 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:20:43 crc kubenswrapper[5123]: I1212 15:20:43.768862 5123 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:20:43Z","lastTransitionTime":"2025-12-12T15:20:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 15:20:43 crc kubenswrapper[5123]: E1212 15:20:43.787700 5123 kubelet_node_status.go:597] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32400460Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32861260Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-12T15:20:43Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-12T15:20:43Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-12T15:20:43Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-12T15:20:43Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-12T15:20:43Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-12T15:20:43Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-12T15:20:43Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-12T15:20:43Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c8a088031661d94022418e93fb63744c38e1c4cff93ea3b95c096a290c2b7a3\\\"],\\\"sizeBytes\\\":2981840865},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\"],\\\"sizeBytes\\\":1641503854},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:286bb0beab328954b0a86b7f066fd5a843b462d6acb2812df7ec788015cd32d4\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:be02784ed82978c399102be1c6c9f2ca441be4d984e0fd7100c155dd4417ebbf\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1597684406},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\"],\\\"sizeBytes\\\":1261384762},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:0d50962980a5aeecae2d99c98913fb0f46940164e41de0af2ba0e3dafe0d9017\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:8d607fb6cc75ca36bca1e0a9c5bea5d1919b75db20733df69c64c8a10ee8083d\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1224304325},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:541db5b20a3d2199602b3b5ac80f09ea31498034e9ae3841238b03a39150f0d7\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:a4c5df55584cba56f00004a090923a5c6de2071add5eb1672a5e20aa646aad8c\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1126957757},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:b1c859067d6b7b785ab4977ed7137c5b3bb257234f7d7737a1d2836cef1576b5\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:df08
951924aa23b2333436a1d04b2dba56c366bb4f09d39ae3aedb980e4fb909\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.20\\\"],\\\"sizeBytes\\\":1079537324},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\"],\\\"sizeBytes\\\":1052707833},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8d1a1e4abe0326c3af89e9eaa4b7449dd2d5b6f9403c677e19b00b24947b1df9\\\"],\\\"sizeBytes\\\":989392005},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b2b1fc3d5bb4944cbd5b23b87566d7ba24b1b66f5a0465f76bcc05023191cc47\\\"],\\\"sizeBytes\\\":971668163},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\"],\\\"sizeBytes\\\":969078739},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3d6c8802ae53d6aecf38aa7b560d7892193806bdeb3d7c1637fac77c47fd1f\\\"],\\\"sizeBytes\\\":876488654},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\"],\\\"sizeBytes\\\":847332502},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:36c4867005702f0c4cbfcfa33f18a98596a6c9b1340b633c85ccef84a0c4f889\\\"],\\\"sizeBytes\\\":769516783},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\"],\\\"sizeBytes\\\":721591926},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\"],\\\"sizeBytes\\\":646867625},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\"],\\\"sizeBytes\\\":638910445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae245c97fc463e876c3024efb806fa8f4e
fb13b3f06f1bdd3e7e1447f5a5dce4\\\"],\\\"sizeBytes\\\":617699779},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4926e304011637ca9df370a193896d685f0f3ffabbec234ec827abdbeb083f9\\\"],\\\"sizeBytes\\\":607756695},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\"],\\\"sizeBytes\\\":584721741},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\"],\\\"sizeBytes\\\":545674969},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:574d49b89604b8e8103abf57feee77812fe8cf441eafc17fdff95d57ca80645e\\\"],\\\"sizeBytes\\\":542463064},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\"],\\\"sizeBytes\\\":539380592},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9506bdcf97d5200cf2cf4cdf110aebafdd141a24f6589bf1e1cfe27bb7fc1ed2\\\"],\\\"sizeBytes\\\":533027808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\"],\\\"sizeBytes\\\":528200501},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\"],\\\"sizeBytes\\\":527774342},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e5e8108294b086fdb797365e5a46badba9b3d866bdcddc8460a51e05a253753d\\\"],\\\"sizeBytes\\\":526632426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5827f6ae3beb4853192e02cc18890467bd251b33070f36f9a105991e7e6d3c9b\\\"],\\\"sizeBytes\\\":522490210},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:66c8fe5d45ff249643dae75185dd2787ea1b0ae87d5699a8222149c07689557c\\\"],\\\"sizeBytes\\\":520141
094},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:baf975b6944f2844860c440636e0d4b80b2fdc473d30f32ae7d6989f2fc2b135\\\"],\\\"sizeBytes\\\":519815758},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:765cf9c3ebf4df049ebc022beaaf52f52852cf89fb802034536ad91dd45db807\\\"],\\\"sizeBytes\\\":519539350},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52e442bc8198ac925caff87ddd35b3107b7375d5afc9c2eb041ca4e79db72c6f\\\"],\\\"sizeBytes\\\":518690683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:43b0e0b7e1955ee905e48799a62f50b8a8df553190415ce1f5550375c2507ca5\\\"],\\\"sizeBytes\\\":518251952},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:977a316fa3598eb575a4477dafc09bbf06fad21c4ec2867052225d74f2a9f366\\\"],\\\"sizeBytes\\\":511136541},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\"],\\\"sizeBytes\\\":510122097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dbd8603d717c26901bcf9731b1e0392ae4bc08a270ed1eeb45839e44bed9607d\\\"],\\\"sizeBytes\\\":508941917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\"],\\\"sizeBytes\\\":508318343},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7a726c68cebc9b08edd734a8bae5150ae5950f7734fe9b9c2a6e0d06f21cc095\\\"],\\\"sizeBytes\\\":498380948},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:82501261b9c63012ba3b83fe4d6703c0af5eb9c9151670eb90ae480b9507d761\\\"],\\\"sizeBytes\\\":497232440},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:4e4239621caed0b0d9132d167403631e9af86be9a395977f013e201ead281bb4\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:c0b1bec73fdb6853eb3bd9e9733aee2d760ca09a33cfd94adf9ab7b706e8
3fa9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":491224335},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b0f7abf2f97afd1127d9245d764338c6047bac1711b2cee43112570a85946360\\\"],\\\"sizeBytes\\\":490381192},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:21b12ff0c81c1d535e7c31aff3a73b1e9ca763e5f88037f59ade0dfab6ed8946\\\"],\\\"sizeBytes\\\":482632652},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:036ed6efe4cb5f5b90ee7f9ef5297c8591b8d67aa36b3c58b4fc5417622a140c\\\"],\\\"sizeBytes\\\":477561861},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0fe5a041a2b99d736e82f1b4a6cd9792c5e23ded475e9f0742cd19234070f989\\\"],\\\"sizeBytes\\\":475327956},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\"],\\\"sizeBytes\\\":475137830},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2599f32933f5fea6066ede54ad8f6150adb7bd9067892f251d5913121d5c630d\\\"],\\\"sizeBytes\\\":472771950},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:651bbe9d418f49c2c889d731df67cf5d88dff59dc03f5a1b5d4c8bb3ae001f1a\\\"],\\\"sizeBytes\\\":469976318},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fe612a1572df462d6a4b664a10bc2e6cad239648acbf8c0303f8fca5d2596c0\\\"],\\\"sizeBytes\\\":468393024},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a5bb05344dd2296077f5066e908ede0eea23f5a12fb78ef86a9513c88d3faaca\\\"],\\\"sizeBytes\\\":464375011},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\"],\\\"sizeBytes\\\":462844959}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"17e3227e-03aa-4fce-8c3b-5ddc14058574\\\",\\\"systemUUID\\\":\\\"3aaed2a9-d1af-4a24-a65e-046edb5e804c\\\"},\\\"runtimeHa
ndlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 12 15:20:43 crc kubenswrapper[5123]: I1212 15:20:43.800010 5123 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:20:43 crc kubenswrapper[5123]: I1212 15:20:43.800080 5123 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:20:43 crc kubenswrapper[5123]: I1212 15:20:43.800094 5123 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:20:43 crc kubenswrapper[5123]: I1212 15:20:43.800117 5123 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:20:43 crc kubenswrapper[5123]: I1212 15:20:43.800130 5123 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:20:43Z","lastTransitionTime":"2025-12-12T15:20:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 15:20:43 crc kubenswrapper[5123]: E1212 15:20:43.818670 5123 kubelet_node_status.go:597] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32400460Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32861260Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-12T15:20:43Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-12T15:20:43Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-12T15:20:43Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-12T15:20:43Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-12T15:20:43Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-12T15:20:43Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-12T15:20:43Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-12T15:20:43Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c8a088031661d94022418e93fb63744c38e1c4cff93ea3b95c096a290c2b7a3\\\"],\\\"sizeBytes\\\":2981840865},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\"],\\\"sizeBytes\\\":1641503854},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:286bb0beab328954b0a86b7f066fd5a843b462d6acb2812df7ec788015cd32d4\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:be02784ed82978c399102be1c6c9f2ca441be4d984e0fd7100c155dd4417ebbf\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1597684406},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\"],\\\"sizeBytes\\\":1261384762},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:0d50962980a5aeecae2d99c98913fb0f46940164e41de0af2ba0e3dafe0d9017\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:8d607fb6cc75ca36bca1e0a9c5bea5d1919b75db20733df69c64c8a10ee8083d\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1224304325},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:541db5b20a3d2199602b3b5ac80f09ea31498034e9ae3841238b03a39150f0d7\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:a4c5df55584cba56f00004a090923a5c6de2071add5eb1672a5e20aa646aad8c\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1126957757},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:b1c859067d6b7b785ab4977ed7137c5b3bb257234f7d7737a1d2836cef1576b5\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:df08
951924aa23b2333436a1d04b2dba56c366bb4f09d39ae3aedb980e4fb909\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.20\\\"],\\\"sizeBytes\\\":1079537324},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\"],\\\"sizeBytes\\\":1052707833},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8d1a1e4abe0326c3af89e9eaa4b7449dd2d5b6f9403c677e19b00b24947b1df9\\\"],\\\"sizeBytes\\\":989392005},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b2b1fc3d5bb4944cbd5b23b87566d7ba24b1b66f5a0465f76bcc05023191cc47\\\"],\\\"sizeBytes\\\":971668163},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\"],\\\"sizeBytes\\\":969078739},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3d6c8802ae53d6aecf38aa7b560d7892193806bdeb3d7c1637fac77c47fd1f\\\"],\\\"sizeBytes\\\":876488654},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\"],\\\"sizeBytes\\\":847332502},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:36c4867005702f0c4cbfcfa33f18a98596a6c9b1340b633c85ccef84a0c4f889\\\"],\\\"sizeBytes\\\":769516783},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\"],\\\"sizeBytes\\\":721591926},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\"],\\\"sizeBytes\\\":646867625},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\"],\\\"sizeBytes\\\":638910445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae245c97fc463e876c3024efb806fa8f4e
fb13b3f06f1bdd3e7e1447f5a5dce4\\\"],\\\"sizeBytes\\\":617699779},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4926e304011637ca9df370a193896d685f0f3ffabbec234ec827abdbeb083f9\\\"],\\\"sizeBytes\\\":607756695},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\"],\\\"sizeBytes\\\":584721741},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\"],\\\"sizeBytes\\\":545674969},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:574d49b89604b8e8103abf57feee77812fe8cf441eafc17fdff95d57ca80645e\\\"],\\\"sizeBytes\\\":542463064},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\"],\\\"sizeBytes\\\":539380592},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9506bdcf97d5200cf2cf4cdf110aebafdd141a24f6589bf1e1cfe27bb7fc1ed2\\\"],\\\"sizeBytes\\\":533027808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\"],\\\"sizeBytes\\\":528200501},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\"],\\\"sizeBytes\\\":527774342},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e5e8108294b086fdb797365e5a46badba9b3d866bdcddc8460a51e05a253753d\\\"],\\\"sizeBytes\\\":526632426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5827f6ae3beb4853192e02cc18890467bd251b33070f36f9a105991e7e6d3c9b\\\"],\\\"sizeBytes\\\":522490210},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:66c8fe5d45ff249643dae75185dd2787ea1b0ae87d5699a8222149c07689557c\\\"],\\\"sizeBytes\\\":520141
094},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:baf975b6944f2844860c440636e0d4b80b2fdc473d30f32ae7d6989f2fc2b135\\\"],\\\"sizeBytes\\\":519815758},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:765cf9c3ebf4df049ebc022beaaf52f52852cf89fb802034536ad91dd45db807\\\"],\\\"sizeBytes\\\":519539350},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52e442bc8198ac925caff87ddd35b3107b7375d5afc9c2eb041ca4e79db72c6f\\\"],\\\"sizeBytes\\\":518690683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:43b0e0b7e1955ee905e48799a62f50b8a8df553190415ce1f5550375c2507ca5\\\"],\\\"sizeBytes\\\":518251952},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:977a316fa3598eb575a4477dafc09bbf06fad21c4ec2867052225d74f2a9f366\\\"],\\\"sizeBytes\\\":511136541},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\"],\\\"sizeBytes\\\":510122097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dbd8603d717c26901bcf9731b1e0392ae4bc08a270ed1eeb45839e44bed9607d\\\"],\\\"sizeBytes\\\":508941917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\"],\\\"sizeBytes\\\":508318343},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7a726c68cebc9b08edd734a8bae5150ae5950f7734fe9b9c2a6e0d06f21cc095\\\"],\\\"sizeBytes\\\":498380948},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:82501261b9c63012ba3b83fe4d6703c0af5eb9c9151670eb90ae480b9507d761\\\"],\\\"sizeBytes\\\":497232440},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:4e4239621caed0b0d9132d167403631e9af86be9a395977f013e201ead281bb4\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:c0b1bec73fdb6853eb3bd9e9733aee2d760ca09a33cfd94adf9ab7b706e8
3fa9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":491224335},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b0f7abf2f97afd1127d9245d764338c6047bac1711b2cee43112570a85946360\\\"],\\\"sizeBytes\\\":490381192},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:21b12ff0c81c1d535e7c31aff3a73b1e9ca763e5f88037f59ade0dfab6ed8946\\\"],\\\"sizeBytes\\\":482632652},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:036ed6efe4cb5f5b90ee7f9ef5297c8591b8d67aa36b3c58b4fc5417622a140c\\\"],\\\"sizeBytes\\\":477561861},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0fe5a041a2b99d736e82f1b4a6cd9792c5e23ded475e9f0742cd19234070f989\\\"],\\\"sizeBytes\\\":475327956},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\"],\\\"sizeBytes\\\":475137830},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2599f32933f5fea6066ede54ad8f6150adb7bd9067892f251d5913121d5c630d\\\"],\\\"sizeBytes\\\":472771950},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:651bbe9d418f49c2c889d731df67cf5d88dff59dc03f5a1b5d4c8bb3ae001f1a\\\"],\\\"sizeBytes\\\":469976318},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fe612a1572df462d6a4b664a10bc2e6cad239648acbf8c0303f8fca5d2596c0\\\"],\\\"sizeBytes\\\":468393024},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a5bb05344dd2296077f5066e908ede0eea23f5a12fb78ef86a9513c88d3faaca\\\"],\\\"sizeBytes\\\":464375011},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\"],\\\"sizeBytes\\\":462844959}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"17e3227e-03aa-4fce-8c3b-5ddc14058574\\\",\\\"systemUUID\\\":\\\"3aaed2a9-d1af-4a24-a65e-046edb5e804c\\\"},\\\"runtimeHa
ndlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 12 15:20:43 crc kubenswrapper[5123]: I1212 15:20:43.829425 5123 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:20:43 crc kubenswrapper[5123]: I1212 15:20:43.829512 5123 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:20:43 crc kubenswrapper[5123]: I1212 15:20:43.829536 5123 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:20:43 crc kubenswrapper[5123]: I1212 15:20:43.829560 5123 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:20:43 crc kubenswrapper[5123]: I1212 15:20:43.829575 5123 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:20:43Z","lastTransitionTime":"2025-12-12T15:20:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 15:20:43 crc kubenswrapper[5123]: E1212 15:20:43.846278 5123 kubelet_node_status.go:597] "Error updating node status, will retry" err="failed to patch status [... patch payload identical to the previous attempt omitted ...] for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 12 15:20:43 crc kubenswrapper[5123]: I1212 15:20:43.857432 5123 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:20:43 crc kubenswrapper[5123]: I1212 15:20:43.857506 5123 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:20:43 crc kubenswrapper[5123]: I1212 15:20:43.857522 5123 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:20:43 crc kubenswrapper[5123]: I1212 15:20:43.857546 5123 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:20:43 crc kubenswrapper[5123]: I1212 15:20:43.857561 5123 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:20:43Z","lastTransitionTime":"2025-12-12T15:20:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 15:20:43 crc kubenswrapper[5123]: E1212 15:20:43.872179 5123 kubelet_node_status.go:597] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32400460Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32861260Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-12T15:20:43Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-12T15:20:43Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-12T15:20:43Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-12T15:20:43Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-12T15:20:43Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-12T15:20:43Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-12T15:20:43Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-12T15:20:43Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
[... patch payload identical to the previous attempts omitted ...]
3fa9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":491224335},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b0f7abf2f97afd1127d9245d764338c6047bac1711b2cee43112570a85946360\\\"],\\\"sizeBytes\\\":490381192},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:21b12ff0c81c1d535e7c31aff3a73b1e9ca763e5f88037f59ade0dfab6ed8946\\\"],\\\"sizeBytes\\\":482632652},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:036ed6efe4cb5f5b90ee7f9ef5297c8591b8d67aa36b3c58b4fc5417622a140c\\\"],\\\"sizeBytes\\\":477561861},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0fe5a041a2b99d736e82f1b4a6cd9792c5e23ded475e9f0742cd19234070f989\\\"],\\\"sizeBytes\\\":475327956},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\"],\\\"sizeBytes\\\":475137830},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2599f32933f5fea6066ede54ad8f6150adb7bd9067892f251d5913121d5c630d\\\"],\\\"sizeBytes\\\":472771950},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:651bbe9d418f49c2c889d731df67cf5d88dff59dc03f5a1b5d4c8bb3ae001f1a\\\"],\\\"sizeBytes\\\":469976318},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fe612a1572df462d6a4b664a10bc2e6cad239648acbf8c0303f8fca5d2596c0\\\"],\\\"sizeBytes\\\":468393024},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a5bb05344dd2296077f5066e908ede0eea23f5a12fb78ef86a9513c88d3faaca\\\"],\\\"sizeBytes\\\":464375011},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\"],\\\"sizeBytes\\\":462844959}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"17e3227e-03aa-4fce-8c3b-5ddc14058574\\\",\\\"systemUUID\\\":\\\"3aaed2a9-d1af-4a24-a65e-046edb5e804c\\\"},\\\"runtimeHa
ndlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Dec 12 15:20:43 crc kubenswrapper[5123]: E1212 15:20:43.872402 5123 kubelet_node_status.go:584] "Unable to update node status" err="update node status exceeds retry count"
Dec 12 15:20:43 crc kubenswrapper[5123]: E1212 15:20:43.872448 5123 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 12 15:20:43 crc kubenswrapper[5123]: E1212 15:20:43.973386 5123 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 12 15:20:44 crc kubenswrapper[5123]: E1212 15:20:44.074346 5123 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 12 15:20:44 crc kubenswrapper[5123]: E1212 15:20:44.174982 5123 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 12 15:20:44 crc kubenswrapper[5123]: E1212 15:20:44.275276 5123 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 12 15:20:44 crc kubenswrapper[5123]: E1212 15:20:44.377336 5123 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 12 15:20:44 crc kubenswrapper[5123]: E1212 15:20:44.477643 5123 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 12 15:20:44 crc kubenswrapper[5123]: E1212
15:20:44.578766 5123 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 12 15:20:44 crc kubenswrapper[5123]: E1212 15:20:44.679497 5123 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 12 15:20:44 crc kubenswrapper[5123]: E1212 15:20:44.780304 5123 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 12 15:20:44 crc kubenswrapper[5123]: E1212 15:20:44.881284 5123 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 12 15:20:44 crc kubenswrapper[5123]: E1212 15:20:44.982464 5123 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 12 15:20:45 crc kubenswrapper[5123]: E1212 15:20:45.082906 5123 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 12 15:20:45 crc kubenswrapper[5123]: E1212 15:20:45.183059 5123 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 12 15:20:45 crc kubenswrapper[5123]: E1212 15:20:45.283780 5123 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 12 15:20:45 crc kubenswrapper[5123]: E1212 15:20:45.384811 5123 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 12 15:20:45 crc kubenswrapper[5123]: E1212 15:20:45.485318 5123 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 12 15:20:45 crc kubenswrapper[5123]: E1212 15:20:45.586498 5123 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 12 15:20:45 crc kubenswrapper[5123]: E1212 15:20:45.687304 5123 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 12
15:20:46 crc kubenswrapper[5123]: E1212 15:20:45.788334 5123 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 12 15:20:46 crc kubenswrapper[5123]: E1212 15:20:46.021200 5123 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 12 15:20:46 crc kubenswrapper[5123]: E1212 15:20:46.121999 5123 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 12 15:20:46 crc kubenswrapper[5123]: E1212 15:20:46.223001 5123 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 12 15:20:46 crc kubenswrapper[5123]: E1212 15:20:46.323944 5123 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 12 15:20:46 crc kubenswrapper[5123]: E1212 15:20:46.424729 5123 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 12 15:20:46 crc kubenswrapper[5123]: E1212 15:20:46.525874 5123 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 12 15:20:46 crc kubenswrapper[5123]: E1212 15:20:46.626044 5123 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 12 15:20:46 crc kubenswrapper[5123]: I1212 15:20:46.640037 5123 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Dec 12 15:20:46 crc kubenswrapper[5123]: I1212 15:20:46.641576 5123 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 12 15:20:46 crc kubenswrapper[5123]: I1212 15:20:46.641625 5123 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 12 15:20:46 crc kubenswrapper[5123]: I1212 15:20:46.641641 5123 kubelet_node_status.go:736] "Recording event message for node"
node="crc" event="NodeHasSufficientPID"
Dec 12 15:20:46 crc kubenswrapper[5123]: E1212 15:20:46.642405 5123 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Dec 12 15:20:46 crc kubenswrapper[5123]: I1212 15:20:46.642912 5123 scope.go:117] "RemoveContainer" containerID="66cb36c6db4a4cc176139a7fd83683b8939f05d2d177a0bb231d70fc115b6b19"
Dec 12 15:20:46 crc kubenswrapper[5123]: E1212 15:20:46.726950 5123 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 12 15:20:46 crc kubenswrapper[5123]: E1212 15:20:46.828146 5123 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 12 15:20:46 crc kubenswrapper[5123]: E1212 15:20:46.928418 5123 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 12 15:20:47 crc kubenswrapper[5123]: E1212 15:20:47.029519 5123 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 12 15:20:47 crc kubenswrapper[5123]: E1212 15:20:47.130528 5123 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 12 15:20:47 crc kubenswrapper[5123]: E1212 15:20:47.231329 5123 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 12 15:20:47 crc kubenswrapper[5123]: E1212 15:20:47.331462 5123 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 12 15:20:47 crc kubenswrapper[5123]: E1212 15:20:47.431804 5123 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 12 15:20:47 crc kubenswrapper[5123]: E1212 15:20:47.532735 5123 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 12 15:20:47 crc kubenswrapper[5123]:
E1212 15:20:47.633638 5123 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 12 15:20:47 crc kubenswrapper[5123]: E1212 15:20:47.734784 5123 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 12 15:20:47 crc kubenswrapper[5123]: I1212 15:20:47.791482 5123 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-check-endpoints/2.log"
Dec 12 15:20:47 crc kubenswrapper[5123]: I1212 15:20:47.793695 5123 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"3a14caf222afb62aaabdc47808b6f944","Type":"ContainerStarted","Data":"4f213fed9087642e2d266cffcd6b09d79db89357a2e593aab2f1f5f5de1625db"}
Dec 12 15:20:47 crc kubenswrapper[5123]: I1212 15:20:47.794027 5123 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Dec 12 15:20:47 crc kubenswrapper[5123]: I1212 15:20:47.795005 5123 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 12 15:20:47 crc kubenswrapper[5123]: I1212 15:20:47.795056 5123 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 12 15:20:47 crc kubenswrapper[5123]: I1212 15:20:47.795067 5123 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 12 15:20:47 crc kubenswrapper[5123]: E1212 15:20:47.795636 5123 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Dec 12 15:20:47 crc kubenswrapper[5123]: E1212 15:20:47.835864 5123 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 12 15:20:47 crc kubenswrapper[5123]: E1212 15:20:47.936695 5123
kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 12 15:20:48 crc kubenswrapper[5123]: E1212 15:20:48.037623 5123 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 12 15:20:48 crc kubenswrapper[5123]: E1212 15:20:48.138582 5123 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 12 15:20:48 crc kubenswrapper[5123]: E1212 15:20:48.239796 5123 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 12 15:20:48 crc kubenswrapper[5123]: E1212 15:20:48.340607 5123 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 12 15:20:48 crc kubenswrapper[5123]: E1212 15:20:48.441852 5123 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 12 15:20:48 crc kubenswrapper[5123]: E1212 15:20:48.542331 5123 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 12 15:20:48 crc kubenswrapper[5123]: E1212 15:20:48.643489 5123 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 12 15:20:48 crc kubenswrapper[5123]: E1212 15:20:48.744339 5123 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 12 15:20:48 crc kubenswrapper[5123]: I1212 15:20:48.799835 5123 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-check-endpoints/3.log"
Dec 12 15:20:48 crc kubenswrapper[5123]: I1212 15:20:48.800366 5123 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-check-endpoints/2.log"
Dec 12 15:20:48 crc kubenswrapper[5123]: I1212
15:20:48.801924 5123 generic.go:358] "Generic (PLEG): container finished" podID="3a14caf222afb62aaabdc47808b6f944" containerID="4f213fed9087642e2d266cffcd6b09d79db89357a2e593aab2f1f5f5de1625db" exitCode=255
Dec 12 15:20:48 crc kubenswrapper[5123]: I1212 15:20:48.802035 5123 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"3a14caf222afb62aaabdc47808b6f944","Type":"ContainerDied","Data":"4f213fed9087642e2d266cffcd6b09d79db89357a2e593aab2f1f5f5de1625db"}
Dec 12 15:20:48 crc kubenswrapper[5123]: I1212 15:20:48.802109 5123 scope.go:117] "RemoveContainer" containerID="66cb36c6db4a4cc176139a7fd83683b8939f05d2d177a0bb231d70fc115b6b19"
Dec 12 15:20:48 crc kubenswrapper[5123]: I1212 15:20:48.802461 5123 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Dec 12 15:20:48 crc kubenswrapper[5123]: I1212 15:20:48.803030 5123 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 12 15:20:48 crc kubenswrapper[5123]: I1212 15:20:48.803062 5123 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 12 15:20:48 crc kubenswrapper[5123]: I1212 15:20:48.803074 5123 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 12 15:20:48 crc kubenswrapper[5123]: E1212 15:20:48.803623 5123 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Dec 12 15:20:48 crc kubenswrapper[5123]: I1212 15:20:48.803942 5123 scope.go:117] "RemoveContainer" containerID="4f213fed9087642e2d266cffcd6b09d79db89357a2e593aab2f1f5f5de1625db"
Dec 12 15:20:48 crc kubenswrapper[5123]: E1212 15:20:48.804246 5123 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 40s
restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944"
Dec 12 15:20:48 crc kubenswrapper[5123]: E1212 15:20:48.845274 5123 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 12 15:20:48 crc kubenswrapper[5123]: E1212 15:20:48.945558 5123 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 12 15:20:49 crc kubenswrapper[5123]: E1212 15:20:49.046405 5123 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 12 15:20:49 crc kubenswrapper[5123]: E1212 15:20:49.147390 5123 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 12 15:20:49 crc kubenswrapper[5123]: E1212 15:20:49.248343 5123 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 12 15:20:49 crc kubenswrapper[5123]: E1212 15:20:49.349580 5123 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 12 15:20:49 crc kubenswrapper[5123]: E1212 15:20:49.450678 5123 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 12 15:20:49 crc kubenswrapper[5123]: E1212 15:20:49.551400 5123 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 12 15:20:49 crc kubenswrapper[5123]: E1212 15:20:49.652318 5123 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 12 15:20:49 crc kubenswrapper[5123]: E1212 15:20:49.753541 5123 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 12 15:20:49 crc kubenswrapper[5123]: I1212 15:20:49.827719
5123 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-check-endpoints/3.log"
Dec 12 15:20:49 crc kubenswrapper[5123]: E1212 15:20:49.853865 5123 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 12 15:20:49 crc kubenswrapper[5123]: E1212 15:20:49.962590 5123 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 12 15:20:50 crc kubenswrapper[5123]: E1212 15:20:50.063248 5123 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 12 15:20:50 crc kubenswrapper[5123]: E1212 15:20:50.164112 5123 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 12 15:20:50 crc kubenswrapper[5123]: E1212 15:20:50.265416 5123 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 12 15:20:50 crc kubenswrapper[5123]: E1212 15:20:50.365803 5123 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 12 15:20:50 crc kubenswrapper[5123]: E1212 15:20:50.466851 5123 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 12 15:20:50 crc kubenswrapper[5123]: E1212 15:20:50.567001 5123 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 12 15:20:50 crc kubenswrapper[5123]: E1212 15:20:50.667119 5123 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 12 15:20:50 crc kubenswrapper[5123]: E1212 15:20:50.770293 5123 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 12 15:20:50 crc kubenswrapper[5123]: E1212 15:20:50.875799 5123 kubelet_node_status.go:515] "Error getting the
current node from lister" err="node \"crc\" not found"
Dec 12 15:20:50 crc kubenswrapper[5123]: E1212 15:20:50.976719 5123 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 12 15:20:51 crc kubenswrapper[5123]: E1212 15:20:51.077307 5123 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 12 15:20:51 crc kubenswrapper[5123]: E1212 15:20:51.178404 5123 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 12 15:20:51 crc kubenswrapper[5123]: I1212 15:20:51.230001 5123 kubelet.go:2658] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc"
Dec 12 15:20:51 crc kubenswrapper[5123]: I1212 15:20:51.230672 5123 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Dec 12 15:20:51 crc kubenswrapper[5123]: I1212 15:20:51.232199 5123 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 12 15:20:51 crc kubenswrapper[5123]: I1212 15:20:51.232259 5123 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 12 15:20:51 crc kubenswrapper[5123]: I1212 15:20:51.232271 5123 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 12 15:20:51 crc kubenswrapper[5123]: E1212 15:20:51.232879 5123 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Dec 12 15:20:51 crc kubenswrapper[5123]: I1212 15:20:51.233204 5123 scope.go:117] "RemoveContainer" containerID="4f213fed9087642e2d266cffcd6b09d79db89357a2e593aab2f1f5f5de1625db"
Dec 12 15:20:51 crc kubenswrapper[5123]: E1212 15:20:51.233509 5123 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for
\"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944"
Dec 12 15:20:51 crc kubenswrapper[5123]: E1212 15:20:51.278999 5123 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 12 15:20:51 crc kubenswrapper[5123]: E1212 15:20:51.380296 5123 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 12 15:20:51 crc kubenswrapper[5123]: E1212 15:20:51.481318 5123 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 12 15:20:51 crc kubenswrapper[5123]: E1212 15:20:51.581434 5123 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 12 15:20:51 crc kubenswrapper[5123]: E1212 15:20:51.682002 5123 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 12 15:20:51 crc kubenswrapper[5123]: E1212 15:20:51.782784 5123 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 12 15:20:51 crc kubenswrapper[5123]: E1212 15:20:51.833652 5123 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found"
Dec 12 15:20:51 crc kubenswrapper[5123]: E1212 15:20:51.883439 5123 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 12 15:20:51 crc kubenswrapper[5123]: E1212 15:20:51.984082 5123 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 12 15:20:52 crc kubenswrapper[5123]: E1212 15:20:52.084996 5123 kubelet_node_status.go:515] "Error getting the current node from
lister" err="node \"crc\" not found"
Dec 12 15:20:52 crc kubenswrapper[5123]: E1212 15:20:52.186033 5123 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 12 15:20:52 crc kubenswrapper[5123]: E1212 15:20:52.286962 5123 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 12 15:20:52 crc kubenswrapper[5123]: E1212 15:20:52.388100 5123 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 12 15:20:52 crc kubenswrapper[5123]: E1212 15:20:52.489041 5123 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 12 15:20:52 crc kubenswrapper[5123]: E1212 15:20:52.589883 5123 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 12 15:20:52 crc kubenswrapper[5123]: E1212 15:20:52.690480 5123 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 12 15:20:52 crc kubenswrapper[5123]: E1212 15:20:52.791506 5123 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 12 15:20:52 crc kubenswrapper[5123]: E1212 15:20:52.892749 5123 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 12 15:20:52 crc kubenswrapper[5123]: I1212 15:20:52.896947 5123 reflector.go:430] "Caches populated" type="*v1.Service" reflector="k8s.io/client-go/informers/factory.go:160"
Dec 12 15:20:52 crc kubenswrapper[5123]: E1212 15:20:52.993395 5123 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 12 15:20:53 crc kubenswrapper[5123]: E1212 15:20:53.093953 5123 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 12 15:20:53 crc kubenswrapper[5123]: E1212 15:20:53.194146 5123
kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 12 15:20:53 crc kubenswrapper[5123]: E1212 15:20:53.294860 5123 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 12 15:20:53 crc kubenswrapper[5123]: E1212 15:20:53.395934 5123 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 12 15:20:53 crc kubenswrapper[5123]: E1212 15:20:53.496636 5123 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 12 15:20:53 crc kubenswrapper[5123]: E1212 15:20:53.597568 5123 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 12 15:20:53 crc kubenswrapper[5123]: E1212 15:20:53.698347 5123 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 12 15:20:53 crc kubenswrapper[5123]: E1212 15:20:53.799182 5123 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 12 15:20:53 crc kubenswrapper[5123]: E1212 15:20:53.899551 5123 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 12 15:20:54 crc kubenswrapper[5123]: E1212 15:20:54.000476 5123 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 12 15:20:54 crc kubenswrapper[5123]: E1212 15:20:54.100610 5123 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 12 15:20:54 crc kubenswrapper[5123]: E1212 15:20:54.201061 5123 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 12 15:20:54 crc kubenswrapper[5123]: E1212 15:20:54.213442 5123 kubelet_node_status.go:597] "Error updating node status, will retry" err="error getting node \"crc\": node \"crc\" not found"
Dec
12 15:20:54 crc kubenswrapper[5123]: I1212 15:20:54.220141 5123 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 12 15:20:54 crc kubenswrapper[5123]: I1212 15:20:54.220206 5123 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 12 15:20:54 crc kubenswrapper[5123]: I1212 15:20:54.220235 5123 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 12 15:20:54 crc kubenswrapper[5123]: I1212 15:20:54.220286 5123 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 12 15:20:54 crc kubenswrapper[5123]: I1212 15:20:54.220303 5123 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:20:54Z","lastTransitionTime":"2025-12-12T15:20:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/.
Has your network provider started?"}
Dec 12 15:20:54 crc kubenswrapper[5123]: E1212 15:20:54.235834 5123 kubelet_node_status.go:597] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32400460Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32861260Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-12T15:20:54Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-12T15:20:54Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-12T15:20:54Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-12T15:20:54Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-12T15:20:54Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-12T15:20:54Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-12T15:20:54Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-12T15:20:54Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/.
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c8a088031661d94022418e93fb63744c38e1c4cff93ea3b95c096a290c2b7a3\\\"],\\\"sizeBytes\\\":2981840865},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\"],\\\"sizeBytes\\\":1641503854},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:286bb0beab328954b0a86b7f066fd5a843b462d6acb2812df7ec788015cd32d4\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:be02784ed82978c399102be1c6c9f2ca441be4d984e0fd7100c155dd4417ebbf\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1597684406},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\"],\\\"sizeBytes\\\":1261384762},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:0d50962980a5aeecae2d99c98913fb0f46940164e41de0af2ba0e3dafe0d9017\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:8d607fb6cc75ca36bca1e0a9c5bea5d1919b75db20733df69c64c8a10ee8083d\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1224304325},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:541db5b20a3d2199602b3b5ac80f09ea31498034e9ae3841238b03a39150f0d7\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:a4c5df55584cba56f00004a090923a5c6de2071add5eb1672a5e20aa646aad8c\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1126957757},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:b1c859067d6b7b785ab4977ed7137c5b3bb257234f7d7737a1d2836cef1576b5\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:df08
951924aa23b2333436a1d04b2dba56c366bb4f09d39ae3aedb980e4fb909\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.20\\\"],\\\"sizeBytes\\\":1079537324},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\"],\\\"sizeBytes\\\":1052707833},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8d1a1e4abe0326c3af89e9eaa4b7449dd2d5b6f9403c677e19b00b24947b1df9\\\"],\\\"sizeBytes\\\":989392005},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b2b1fc3d5bb4944cbd5b23b87566d7ba24b1b66f5a0465f76bcc05023191cc47\\\"],\\\"sizeBytes\\\":971668163},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\"],\\\"sizeBytes\\\":969078739},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3d6c8802ae53d6aecf38aa7b560d7892193806bdeb3d7c1637fac77c47fd1f\\\"],\\\"sizeBytes\\\":876488654},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\"],\\\"sizeBytes\\\":847332502},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:36c4867005702f0c4cbfcfa33f18a98596a6c9b1340b633c85ccef84a0c4f889\\\"],\\\"sizeBytes\\\":769516783},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\"],\\\"sizeBytes\\\":721591926},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\"],\\\"sizeBytes\\\":646867625},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\"],\\\"sizeBytes\\\":638910445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae245c97fc463e876c3024efb806fa8f4e
fb13b3f06f1bdd3e7e1447f5a5dce4\\\"],\\\"sizeBytes\\\":617699779},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4926e304011637ca9df370a193896d685f0f3ffabbec234ec827abdbeb083f9\\\"],\\\"sizeBytes\\\":607756695},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\"],\\\"sizeBytes\\\":584721741},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\"],\\\"sizeBytes\\\":545674969},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:574d49b89604b8e8103abf57feee77812fe8cf441eafc17fdff95d57ca80645e\\\"],\\\"sizeBytes\\\":542463064},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\"],\\\"sizeBytes\\\":539380592},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9506bdcf97d5200cf2cf4cdf110aebafdd141a24f6589bf1e1cfe27bb7fc1ed2\\\"],\\\"sizeBytes\\\":533027808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\"],\\\"sizeBytes\\\":528200501},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\"],\\\"sizeBytes\\\":527774342},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e5e8108294b086fdb797365e5a46badba9b3d866bdcddc8460a51e05a253753d\\\"],\\\"sizeBytes\\\":526632426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5827f6ae3beb4853192e02cc18890467bd251b33070f36f9a105991e7e6d3c9b\\\"],\\\"sizeBytes\\\":522490210},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:66c8fe5d45ff249643dae75185dd2787ea1b0ae87d5699a8222149c07689557c\\\"],\\\"sizeBytes\\\":520141
094},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:baf975b6944f2844860c440636e0d4b80b2fdc473d30f32ae7d6989f2fc2b135\\\"],\\\"sizeBytes\\\":519815758},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:765cf9c3ebf4df049ebc022beaaf52f52852cf89fb802034536ad91dd45db807\\\"],\\\"sizeBytes\\\":519539350},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52e442bc8198ac925caff87ddd35b3107b7375d5afc9c2eb041ca4e79db72c6f\\\"],\\\"sizeBytes\\\":518690683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:43b0e0b7e1955ee905e48799a62f50b8a8df553190415ce1f5550375c2507ca5\\\"],\\\"sizeBytes\\\":518251952},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:977a316fa3598eb575a4477dafc09bbf06fad21c4ec2867052225d74f2a9f366\\\"],\\\"sizeBytes\\\":511136541},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\"],\\\"sizeBytes\\\":510122097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dbd8603d717c26901bcf9731b1e0392ae4bc08a270ed1eeb45839e44bed9607d\\\"],\\\"sizeBytes\\\":508941917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\"],\\\"sizeBytes\\\":508318343},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7a726c68cebc9b08edd734a8bae5150ae5950f7734fe9b9c2a6e0d06f21cc095\\\"],\\\"sizeBytes\\\":498380948},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:82501261b9c63012ba3b83fe4d6703c0af5eb9c9151670eb90ae480b9507d761\\\"],\\\"sizeBytes\\\":497232440},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:4e4239621caed0b0d9132d167403631e9af86be9a395977f013e201ead281bb4\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:c0b1bec73fdb6853eb3bd9e9733aee2d760ca09a33cfd94adf9ab7b706e8
3fa9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":491224335},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b0f7abf2f97afd1127d9245d764338c6047bac1711b2cee43112570a85946360\\\"],\\\"sizeBytes\\\":490381192},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:21b12ff0c81c1d535e7c31aff3a73b1e9ca763e5f88037f59ade0dfab6ed8946\\\"],\\\"sizeBytes\\\":482632652},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:036ed6efe4cb5f5b90ee7f9ef5297c8591b8d67aa36b3c58b4fc5417622a140c\\\"],\\\"sizeBytes\\\":477561861},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0fe5a041a2b99d736e82f1b4a6cd9792c5e23ded475e9f0742cd19234070f989\\\"],\\\"sizeBytes\\\":475327956},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\"],\\\"sizeBytes\\\":475137830},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2599f32933f5fea6066ede54ad8f6150adb7bd9067892f251d5913121d5c630d\\\"],\\\"sizeBytes\\\":472771950},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:651bbe9d418f49c2c889d731df67cf5d88dff59dc03f5a1b5d4c8bb3ae001f1a\\\"],\\\"sizeBytes\\\":469976318},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fe612a1572df462d6a4b664a10bc2e6cad239648acbf8c0303f8fca5d2596c0\\\"],\\\"sizeBytes\\\":468393024},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a5bb05344dd2296077f5066e908ede0eea23f5a12fb78ef86a9513c88d3faaca\\\"],\\\"sizeBytes\\\":464375011},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\"],\\\"sizeBytes\\\":462844959}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"17e3227e-03aa-4fce-8c3b-5ddc14058574\\\",\\\"systemUUID\\\":\\\"3aaed2a9-d1af-4a24-a65e-046edb5e804c\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 12 15:20:54 crc kubenswrapper[5123]: I1212 15:20:54.237731 5123 reflector.go:430] "Caches populated" type="*v1.CSIDriver" reflector="k8s.io/client-go/informers/factory.go:160" Dec 12 15:20:54 crc kubenswrapper[5123]: I1212 15:20:54.243415 5123 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:20:54 crc kubenswrapper[5123]: I1212 15:20:54.243475 5123 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:20:54 crc kubenswrapper[5123]: I1212 15:20:54.243490 5123 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:20:54 crc kubenswrapper[5123]: I1212 15:20:54.243511 5123 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:20:54 crc kubenswrapper[5123]: I1212 15:20:54.243526 5123 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:20:54Z","lastTransitionTime":"2025-12-12T15:20:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 15:20:54 crc kubenswrapper[5123]: E1212 15:20:54.258903 5123 kubelet_node_status.go:597] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32400460Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32861260Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-12T15:20:54Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-12T15:20:54Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-12T15:20:54Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-12T15:20:54Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-12T15:20:54Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-12T15:20:54Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-12T15:20:54Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-12T15:20:54Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"17e3227e-03aa-4fce-8c3b-5ddc14058574\\\",\\\"systemUUID\\\":\\\"3aaed2a9-d1af-4a24-a65e-046edb5e804c\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 12 15:20:54 crc kubenswrapper[5123]: I1212 15:20:54.264719 5123 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:20:54 crc kubenswrapper[5123]: I1212 15:20:54.264771 5123 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:20:54 crc kubenswrapper[5123]: I1212 15:20:54.264793 5123 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:20:54 crc kubenswrapper[5123]: I1212 15:20:54.264814 5123 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:20:54 crc kubenswrapper[5123]: I1212 15:20:54.264827 5123 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:20:54Z","lastTransitionTime":"2025-12-12T15:20:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 15:20:54 crc kubenswrapper[5123]: E1212 15:20:54.283063 5123 kubelet_node_status.go:597] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32400460Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32861260Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-12T15:20:54Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-12T15:20:54Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-12T15:20:54Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-12T15:20:54Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-12T15:20:54Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-12T15:20:54Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-12T15:20:54Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-12T15:20:54Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"17e3227e-03aa-4fce-8c3b-5ddc14058574\\\",\\\"systemUUID\\\":\\\"3aaed2a9-d1af-4a24-a65e-046edb5e804c\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Dec 12 15:20:54 crc kubenswrapper[5123]: I1212 15:20:54.289268 5123 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 12 15:20:54 crc kubenswrapper[5123]: I1212 15:20:54.289687 5123 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 12 15:20:54 crc kubenswrapper[5123]: I1212 15:20:54.289819 5123 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 12 15:20:54 crc kubenswrapper[5123]: I1212 15:20:54.289953 5123 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 12 15:20:54 crc kubenswrapper[5123]: I1212 15:20:54.290059 5123 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:20:54Z","lastTransitionTime":"2025-12-12T15:20:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 15:20:54 crc kubenswrapper[5123]: E1212 15:20:54.303399 5123 kubelet_node_status.go:597] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32400460Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32861260Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-12T15:20:54Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-12T15:20:54Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-12T15:20:54Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-12T15:20:54Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-12T15:20:54Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-12T15:20:54Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-12T15:20:54Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-12T15:20:54Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c8a088031661d94022418e93fb63744c38e1c4cff93ea3b95c096a290c2b7a3\\\"],\\\"sizeBytes\\\":2981840865},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\"],\\\"sizeBytes\\\":1641503854},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:286bb0beab328954b0a86b7f066fd5a843b462d6acb2812df7ec788015cd32d4\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:be02784ed82978c399102be1c6c9f2ca441be4d984e0fd7100c155dd4417ebbf\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1597684406},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\"],\\\"sizeBytes\\\":1261384762},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:0d50962980a5aeecae2d99c98913fb0f46940164e41de0af2ba0e3dafe0d9017\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:8d607fb6cc75ca36bca1e0a9c5bea5d1919b75db20733df69c64c8a10ee8083d\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1224304325},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:541db5b20a3d2199602b3b5ac80f09ea31498034e9ae3841238b03a39150f0d7\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:a4c5df55584cba56f00004a090923a5c6de2071add5eb1672a5e20aa646aad8c\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1126957757},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:b1c859067d6b7b785ab4977ed7137c5b3bb257234f7d7737a1d2836cef1576b5\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:df08
951924aa23b2333436a1d04b2dba56c366bb4f09d39ae3aedb980e4fb909\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.20\\\"],\\\"sizeBytes\\\":1079537324},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\"],\\\"sizeBytes\\\":1052707833},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8d1a1e4abe0326c3af89e9eaa4b7449dd2d5b6f9403c677e19b00b24947b1df9\\\"],\\\"sizeBytes\\\":989392005},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b2b1fc3d5bb4944cbd5b23b87566d7ba24b1b66f5a0465f76bcc05023191cc47\\\"],\\\"sizeBytes\\\":971668163},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\"],\\\"sizeBytes\\\":969078739},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3d6c8802ae53d6aecf38aa7b560d7892193806bdeb3d7c1637fac77c47fd1f\\\"],\\\"sizeBytes\\\":876488654},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\"],\\\"sizeBytes\\\":847332502},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:36c4867005702f0c4cbfcfa33f18a98596a6c9b1340b633c85ccef84a0c4f889\\\"],\\\"sizeBytes\\\":769516783},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\"],\\\"sizeBytes\\\":721591926},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\"],\\\"sizeBytes\\\":646867625},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\"],\\\"sizeBytes\\\":638910445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae245c97fc463e876c3024efb806fa8f4e
fb13b3f06f1bdd3e7e1447f5a5dce4\\\"],\\\"sizeBytes\\\":617699779},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4926e304011637ca9df370a193896d685f0f3ffabbec234ec827abdbeb083f9\\\"],\\\"sizeBytes\\\":607756695},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\"],\\\"sizeBytes\\\":584721741},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\"],\\\"sizeBytes\\\":545674969},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:574d49b89604b8e8103abf57feee77812fe8cf441eafc17fdff95d57ca80645e\\\"],\\\"sizeBytes\\\":542463064},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\"],\\\"sizeBytes\\\":539380592},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9506bdcf97d5200cf2cf4cdf110aebafdd141a24f6589bf1e1cfe27bb7fc1ed2\\\"],\\\"sizeBytes\\\":533027808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\"],\\\"sizeBytes\\\":528200501},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\"],\\\"sizeBytes\\\":527774342},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e5e8108294b086fdb797365e5a46badba9b3d866bdcddc8460a51e05a253753d\\\"],\\\"sizeBytes\\\":526632426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5827f6ae3beb4853192e02cc18890467bd251b33070f36f9a105991e7e6d3c9b\\\"],\\\"sizeBytes\\\":522490210},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:66c8fe5d45ff249643dae75185dd2787ea1b0ae87d5699a8222149c07689557c\\\"],\\\"sizeBytes\\\":520141
094},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:baf975b6944f2844860c440636e0d4b80b2fdc473d30f32ae7d6989f2fc2b135\\\"],\\\"sizeBytes\\\":519815758},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:765cf9c3ebf4df049ebc022beaaf52f52852cf89fb802034536ad91dd45db807\\\"],\\\"sizeBytes\\\":519539350},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52e442bc8198ac925caff87ddd35b3107b7375d5afc9c2eb041ca4e79db72c6f\\\"],\\\"sizeBytes\\\":518690683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:43b0e0b7e1955ee905e48799a62f50b8a8df553190415ce1f5550375c2507ca5\\\"],\\\"sizeBytes\\\":518251952},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:977a316fa3598eb575a4477dafc09bbf06fad21c4ec2867052225d74f2a9f366\\\"],\\\"sizeBytes\\\":511136541},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\"],\\\"sizeBytes\\\":510122097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dbd8603d717c26901bcf9731b1e0392ae4bc08a270ed1eeb45839e44bed9607d\\\"],\\\"sizeBytes\\\":508941917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\"],\\\"sizeBytes\\\":508318343},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7a726c68cebc9b08edd734a8bae5150ae5950f7734fe9b9c2a6e0d06f21cc095\\\"],\\\"sizeBytes\\\":498380948},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:82501261b9c63012ba3b83fe4d6703c0af5eb9c9151670eb90ae480b9507d761\\\"],\\\"sizeBytes\\\":497232440},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:4e4239621caed0b0d9132d167403631e9af86be9a395977f013e201ead281bb4\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:c0b1bec73fdb6853eb3bd9e9733aee2d760ca09a33cfd94adf9ab7b706e8
3fa9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":491224335},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b0f7abf2f97afd1127d9245d764338c6047bac1711b2cee43112570a85946360\\\"],\\\"sizeBytes\\\":490381192},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:21b12ff0c81c1d535e7c31aff3a73b1e9ca763e5f88037f59ade0dfab6ed8946\\\"],\\\"sizeBytes\\\":482632652},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:036ed6efe4cb5f5b90ee7f9ef5297c8591b8d67aa36b3c58b4fc5417622a140c\\\"],\\\"sizeBytes\\\":477561861},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0fe5a041a2b99d736e82f1b4a6cd9792c5e23ded475e9f0742cd19234070f989\\\"],\\\"sizeBytes\\\":475327956},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\"],\\\"sizeBytes\\\":475137830},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2599f32933f5fea6066ede54ad8f6150adb7bd9067892f251d5913121d5c630d\\\"],\\\"sizeBytes\\\":472771950},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:651bbe9d418f49c2c889d731df67cf5d88dff59dc03f5a1b5d4c8bb3ae001f1a\\\"],\\\"sizeBytes\\\":469976318},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fe612a1572df462d6a4b664a10bc2e6cad239648acbf8c0303f8fca5d2596c0\\\"],\\\"sizeBytes\\\":468393024},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a5bb05344dd2296077f5066e908ede0eea23f5a12fb78ef86a9513c88d3faaca\\\"],\\\"sizeBytes\\\":464375011},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\"],\\\"sizeBytes\\\":462844959}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"17e3227e-03aa-4fce-8c3b-5ddc14058574\\\",\\\"systemUUID\\\":\\\"3aaed2a9-d1af-4a24-a65e-046edb5e804c\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Dec 12 15:20:54 crc kubenswrapper[5123]: E1212 15:20:54.303573 5123 kubelet_node_status.go:584] "Unable to update node status" err="update node status exceeds retry count"
Dec 12 15:20:54 crc kubenswrapper[5123]: E1212 15:20:54.303609 5123 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 12 15:20:54 crc kubenswrapper[5123]: E1212 15:20:54.403813 5123 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 12 15:20:54 crc kubenswrapper[5123]: E1212 15:20:54.504796 5123 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 12 15:20:54 crc kubenswrapper[5123]: E1212 15:20:54.605443 5123 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 12 15:20:54 crc kubenswrapper[5123]: E1212 15:20:54.705608 5123 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 12 15:20:54 crc kubenswrapper[5123]: E1212 15:20:54.806740 5123 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 12 15:20:54 crc kubenswrapper[5123]: E1212 15:20:54.908316 5123 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 12 15:20:55 crc kubenswrapper[5123]: E1212 15:20:55.008787 5123 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 12 15:20:55 crc kubenswrapper[5123]: E1212 15:20:55.109897 5123 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 12 15:20:55 crc kubenswrapper[5123]: E1212 15:20:55.210055 5123 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 12 15:20:55 crc kubenswrapper[5123]: E1212 15:20:55.310558 5123 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 12 15:20:55 crc kubenswrapper[5123]: E1212 15:20:55.410929 5123 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 12 15:20:55 crc kubenswrapper[5123]: E1212 15:20:55.511085 5123 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 12 15:20:55 crc kubenswrapper[5123]: E1212 15:20:55.611714 5123 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 12 15:20:55 crc kubenswrapper[5123]: I1212 15:20:55.639271 5123 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Dec 12 15:20:55 crc kubenswrapper[5123]: I1212 15:20:55.640346 5123 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 12 15:20:55 crc kubenswrapper[5123]: I1212 15:20:55.640380 5123 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 12 15:20:55 crc kubenswrapper[5123]: I1212 15:20:55.640392 5123 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 12 15:20:55 crc kubenswrapper[5123]: E1212 15:20:55.640797 5123 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Dec 12 15:20:55 crc kubenswrapper[5123]: E1212 15:20:55.712645 5123 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 12 15:20:55 crc kubenswrapper[5123]: E1212 15:20:55.813852 5123 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 12 15:20:55 crc kubenswrapper[5123]: E1212 15:20:55.914168 5123 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 12 15:20:56 crc kubenswrapper[5123]: E1212 15:20:56.014450 5123 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 12 15:20:56 crc kubenswrapper[5123]: E1212 15:20:56.115110 5123 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 12 15:20:56 crc kubenswrapper[5123]: E1212 15:20:56.216238 5123 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 12 15:20:56 crc kubenswrapper[5123]: E1212 15:20:56.317328 5123 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 12 15:20:56 crc kubenswrapper[5123]: E1212 15:20:56.418714 5123 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 12 15:20:56 crc kubenswrapper[5123]: E1212 15:20:56.518868 5123 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 12 15:20:56 crc kubenswrapper[5123]: E1212 15:20:56.620019 5123 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 12 15:20:56 crc kubenswrapper[5123]: E1212 15:20:56.720292 5123 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 12 15:20:56 crc kubenswrapper[5123]: E1212 15:20:56.820861 5123 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 12 15:20:56 crc kubenswrapper[5123]: E1212 15:20:56.921676 5123 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 12 15:20:57 crc kubenswrapper[5123]: E1212 15:20:57.021996 5123 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 12 15:20:57 crc kubenswrapper[5123]: E1212 15:20:57.122400 5123 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 12 15:20:57 crc kubenswrapper[5123]: E1212 15:20:57.222928 5123 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 12 15:20:57 crc kubenswrapper[5123]: E1212 15:20:57.323814 5123 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 12 15:20:57 crc kubenswrapper[5123]: E1212 15:20:57.424486 5123 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 12 15:20:57 crc kubenswrapper[5123]: E1212 15:20:57.525094 5123 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 12 15:20:57 crc kubenswrapper[5123]: E1212 15:20:57.625942 5123 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 12 15:20:57 crc kubenswrapper[5123]: E1212 15:20:57.727208 5123 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 12 15:20:57 crc kubenswrapper[5123]: I1212 15:20:57.794616 5123 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-kube-apiserver/kube-apiserver-crc"
Dec 12 15:20:57 crc kubenswrapper[5123]: I1212 15:20:57.795394 5123 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Dec 12 15:20:57 crc kubenswrapper[5123]: I1212 15:20:57.796749 5123 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 12 15:20:57 crc kubenswrapper[5123]: I1212 15:20:57.796928 5123 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 12 15:20:57 crc kubenswrapper[5123]: I1212 15:20:57.797027 5123 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 12 15:20:57 crc kubenswrapper[5123]: E1212 15:20:57.797756 5123 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Dec 12 15:20:57 crc kubenswrapper[5123]: I1212 15:20:57.798245 5123 scope.go:117] "RemoveContainer" containerID="4f213fed9087642e2d266cffcd6b09d79db89357a2e593aab2f1f5f5de1625db"
Dec 12 15:20:57 crc kubenswrapper[5123]: E1212 15:20:57.798663 5123 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944"
Dec 12 15:20:57 crc kubenswrapper[5123]: E1212 15:20:57.827841 5123 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 12 15:20:57 crc kubenswrapper[5123]: E1212 15:20:57.928949 5123 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 12 15:20:58 crc kubenswrapper[5123]: E1212 15:20:58.029089 5123 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 12 15:20:58 crc kubenswrapper[5123]: E1212 15:20:58.129523 5123 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 12 15:20:58 crc kubenswrapper[5123]: E1212 15:20:58.229675 5123 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 12 15:20:58 crc kubenswrapper[5123]: E1212 15:20:58.329823 5123 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 12 15:20:58 crc kubenswrapper[5123]: E1212 15:20:58.430271 5123 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 12 15:20:58 crc kubenswrapper[5123]: E1212 15:20:58.530724 5123 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 12 15:20:58 crc kubenswrapper[5123]: E1212 15:20:58.631605 5123 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 12 15:20:58 crc kubenswrapper[5123]: E1212 15:20:58.732292 5123 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 12 15:20:58 crc kubenswrapper[5123]: E1212 15:20:58.832452 5123 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 12 15:20:58 crc kubenswrapper[5123]: E1212 15:20:58.933575 5123 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 12 15:20:59 crc kubenswrapper[5123]: E1212 15:20:59.034322 5123 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 12 15:20:59 crc kubenswrapper[5123]: E1212 15:20:59.134821 5123 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 12 15:20:59 crc kubenswrapper[5123]: E1212 15:20:59.236045 5123 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 12 15:20:59 crc kubenswrapper[5123]: E1212 15:20:59.336537 5123 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 12 15:20:59 crc kubenswrapper[5123]: E1212 15:20:59.436847 5123 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 12 15:20:59 crc kubenswrapper[5123]: E1212 15:20:59.537375 5123 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 12 15:20:59 crc kubenswrapper[5123]: I1212 15:20:59.575691 5123 reflector.go:430] "Caches populated" type="*v1.Node" reflector="k8s.io/client-go/informers/factory.go:160"
Dec 12 15:20:59 crc kubenswrapper[5123]: I1212 15:20:59.642166 5123 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 12 15:20:59 crc kubenswrapper[5123]: I1212 15:20:59.642318 5123 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 12 15:20:59 crc kubenswrapper[5123]: I1212 15:20:59.642339 5123 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 12 15:20:59 crc kubenswrapper[5123]: I1212 15:20:59.642367 5123 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 12 15:20:59 crc kubenswrapper[5123]: I1212 15:20:59.642385 5123 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:20:59Z","lastTransitionTime":"2025-12-12T15:20:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Dec 12 15:20:59 crc kubenswrapper[5123]: I1212 15:20:59.673521 5123 kubelet.go:3340] "Creating a mirror pod for static pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc"
Dec 12 15:20:59 crc kubenswrapper[5123]: I1212 15:20:59.692620 5123 kubelet.go:3340] "Creating a mirror pod for static pod" pod="openshift-etcd/etcd-crc"
Dec 12 15:20:59 crc kubenswrapper[5123]: I1212 15:20:59.744629 5123 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 12 15:20:59 crc kubenswrapper[5123]: I1212 15:20:59.744962 5123 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 12 15:20:59 crc kubenswrapper[5123]: I1212 15:20:59.745086 5123 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 12 15:20:59 crc kubenswrapper[5123]: I1212 15:20:59.745354 5123 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 12 15:20:59 crc kubenswrapper[5123]: I1212 15:20:59.745486 5123 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:20:59Z","lastTransitionTime":"2025-12-12T15:20:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Dec 12 15:20:59 crc kubenswrapper[5123]: I1212 15:20:59.793815 5123 kubelet.go:3340] "Creating a mirror pod for static pod" pod="openshift-kube-apiserver/kube-apiserver-crc"
Dec 12 15:20:59 crc kubenswrapper[5123]: I1212 15:20:59.848118 5123 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 12 15:20:59 crc kubenswrapper[5123]: I1212 15:20:59.848524 5123 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 12 15:20:59 crc kubenswrapper[5123]: I1212 15:20:59.848667 5123 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 12 15:20:59 crc kubenswrapper[5123]: I1212 15:20:59.848820 5123 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 12 15:20:59 crc kubenswrapper[5123]: I1212 15:20:59.849040 5123 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:20:59Z","lastTransitionTime":"2025-12-12T15:20:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Dec 12 15:20:59 crc kubenswrapper[5123]: I1212 15:20:59.890259 5123 kubelet.go:3340] "Creating a mirror pod for static pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Dec 12 15:20:59 crc kubenswrapper[5123]: I1212 15:20:59.952350 5123 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 12 15:20:59 crc kubenswrapper[5123]: I1212 15:20:59.952408 5123 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 12 15:20:59 crc kubenswrapper[5123]: I1212 15:20:59.952420 5123 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 12 15:20:59 crc kubenswrapper[5123]: I1212 15:20:59.952441 5123 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 12 15:20:59 crc kubenswrapper[5123]: I1212 15:20:59.952453 5123 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:20:59Z","lastTransitionTime":"2025-12-12T15:20:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 15:20:59 crc kubenswrapper[5123]: I1212 15:20:59.991869 5123 kubelet.go:3340] "Creating a mirror pod for static pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.056620 5123 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.056690 5123 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.056704 5123 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.056726 5123 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.056739 5123 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:21:00Z","lastTransitionTime":"2025-12-12T15:21:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.159584 5123 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.159632 5123 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.159642 5123 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.159659 5123 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.159668 5123 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:21:00Z","lastTransitionTime":"2025-12-12T15:21:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.261883 5123 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.261932 5123 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.261941 5123 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.261957 5123 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.261967 5123 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:21:00Z","lastTransitionTime":"2025-12-12T15:21:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.365121 5123 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.365756 5123 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.365842 5123 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.365956 5123 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.366039 5123 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:21:00Z","lastTransitionTime":"2025-12-12T15:21:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.466553 5123 apiserver.go:52] "Watching apiserver" Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.470695 5123 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.471037 5123 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.471142 5123 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.471311 5123 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.471431 5123 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:21:00Z","lastTransitionTime":"2025-12-12T15:21:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.477461 5123 reflector.go:430] "Caches populated" type="*v1.Pod" reflector="pkg/kubelet/config/apiserver.go:66" Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.478363 5123 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-dns/node-resolver-lvztx","openshift-machine-config-operator/kube-rbac-proxy-crio-crc","openshift-multus/network-metrics-daemon-hmprz","openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5","openshift-network-diagnostics/network-check-target-fhkjl","openshift-etcd/etcd-crc","openshift-multus/multus-additional-cni-plugins-z24lm","openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6","openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv","openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-kbx8c","openshift-kube-controller-manager/kube-controller-manager-crc","openshift-kube-scheduler/openshift-kube-scheduler-crc","openshift-multus/multus-27rm2","openshift-network-node-identity/network-node-identity-dgvkt","openshift-ovn-kubernetes/ovnkube-node-c7cpz","openshift-image-registry/node-ca-rd8p2","openshift-kube-apiserver/kube-apiserver-crc","openshift-machine-config-operator/machine-config-daemon-cs4j6","openshift-network-operator/iptables-alerter-5jnd7"] Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.480023 5123 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.480913 5123 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Dec 12 15:21:00 crc kubenswrapper[5123]: E1212 15:21:00.481029 5123 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440" Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.481779 5123 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl" Dec 12 15:21:00 crc kubenswrapper[5123]: E1212 15:21:00.481840 5123 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239" Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.482671 5123 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-dgvkt" Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.483244 5123 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-5jnd7" Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.484243 5123 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Dec 12 15:21:00 crc kubenswrapper[5123]: E1212 15:21:00.486468 5123 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141" Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.484444 5123 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-operator\"/\"kube-root-ca.crt\"" Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.484310 5123 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-operator\"/\"openshift-service-ca.crt\"" Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.487056 5123 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-network-operator\"/\"metrics-tls\"" Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.489872 5123 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-node-identity\"/\"env-overrides\"" Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.489947 5123 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-node-identity\"/\"openshift-service-ca.crt\"" Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.490030 5123 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-node-identity\"/\"ovnkube-identity-cm\"" Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.490402 5123 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-network-node-identity\"/\"network-node-identity-cert\"" 
Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.490461 5123 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-node-identity\"/\"kube-root-ca.crt\"" Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.492408 5123 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-operator\"/\"iptables-alerter-script\"" Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.494180 5123 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/node-ca-rd8p2" Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.499915 5123 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"node-ca-dockercfg-tjs74\"" Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.500047 5123 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-image-registry\"/\"kube-root-ca.crt\"" Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.500118 5123 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-image-registry\"/\"openshift-service-ca.crt\"" Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.500660 5123 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-image-registry\"/\"image-registry-certificates\"" Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.504771 5123 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/node-resolver-lvztx" Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.505115 5123 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34177974-8d82-49d2-a763-391d0df3bbd8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T15:21:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T15:21:00Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7xz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-7bdcf4f5bd-7fjxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.507022 5123 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-dns\"/\"openshift-service-ca.crt\"" Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.507285 5123 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-dns\"/\"node-resolver-dockercfg-tk7bt\"" Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.508496 5123 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-dns\"/\"kube-root-ca.crt\"" Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.518503 5123 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-z24lm" Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.521706 5123 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-kbx8c" Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.522977 5123 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"whereabouts-flatfile-config\"" Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.523623 5123 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"kube-root-ca.crt\"" Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.524001 5123 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"default-cni-sysctl-allowlist\"" Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.526610 5123 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-27rm2" Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.527047 5123 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34177974-8d82-49d2-a763-391d0df3bbd8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T15:21:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T15:21:00Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7xz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-7bdcf4f5bd-7fjxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.527246 5123 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ovn-kubernetes\"/\"ovn-control-plane-metrics-cert\"" Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.527480 5123 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ovn-kubernetes\"/\"ovnkube-config\"" Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.527488 5123 
reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"openshift-service-ca.crt\"" Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.527769 5123 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"multus-ancillary-tools-dockercfg-nwglk\"" Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.527875 5123 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ovn-kubernetes\"/\"env-overrides\"" Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.527936 5123 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ovn-kubernetes\"/\"openshift-service-ca.crt\"" Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.528168 5123 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ovn-kubernetes\"/\"ovn-kubernetes-control-plane-dockercfg-nl8tp\"" Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.528348 5123 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"cni-copy-resources\"" Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.528624 5123 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ovn-kubernetes\"/\"kube-root-ca.crt\"" Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.529834 5123 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"multus-daemon-config\"" Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.530428 5123 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"default-dockercfg-g6kgg\"" Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.531921 5123 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-c7cpz" Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.536395 5123 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ovn-kubernetes\"/\"ovn-node-metrics-cert\"" Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.536437 5123 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ovn-kubernetes\"/\"ovn-kubernetes-node-dockercfg-l2v2m\"" Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.537471 5123 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ovn-kubernetes\"/\"ovnkube-script-lib\"" Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.542844 5123 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f863fff9-286a-45fa-b8f0-8a86994b8440\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T15:21:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T15:21:00Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l7w75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-5bb8f5cd97-xdvz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.544622 5123 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hmprz" Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.544777 5123 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-cs4j6" Dec 12 15:21:00 crc kubenswrapper[5123]: E1212 15:21:00.544778 5123 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-hmprz" podUID="e6c3a697-51e4-44dd-a38c-3287db85ce50" Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.545262 5123 scope.go:117] "RemoveContainer" containerID="4f213fed9087642e2d266cffcd6b09d79db89357a2e593aab2f1f5f5de1625db" Dec 12 15:21:00 crc kubenswrapper[5123]: E1212 15:21:00.545511 5123 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.546846 5123 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"machine-config-daemon-dockercfg-w9nzh\"" Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.548782 5123 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-config-operator\"/\"openshift-service-ca.crt\"" Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.549033 5123 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"proxy-tls\"" Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.549475 5123 reflector.go:430] "Caches populated" type="*v1.ConfigMap" 
reflector="object-\"openshift-machine-config-operator\"/\"kube-root-ca.crt\"" Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.550790 5123 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-config-operator\"/\"kube-rbac-proxy\"" Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.558870 5123 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-fhkjl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"17b87002-b798-480a-8e17-83053d698239\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T15:21:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T15:21:00Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwt8b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-fhkjl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.572619 5123 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.573971 5123 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.574034 5123 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.574045 5123 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.574062 5123 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.574393 5123 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:21:00Z","lastTransitionTime":"2025-12-12T15:21:00Z","reason":"KubeletNotReady","message":"container runtime network 
not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.579160 5123 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-dgvkt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fc4541ce-7789-4670-bc75-5c2868e52ce0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T15:21:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T15:21:00Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-dgvkt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: 
connect: connection refused" Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.595424 5123 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-5jnd7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T15:21:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T15:21:00Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dsgwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-5jnd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.613104 5123 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T15:21:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T15:21:00Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fbdfe828b092b23e6d4480daf3e0216aada6debaf1ef1b314a0a31e73ebf13c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-5ff7774fd9-nljh6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.627017 5123 status_manager.go:919] "Failed to update status for pod" pod="openshift-image-registry/node-ca-rd8p2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f45e8e58-84aa-4f67-b397-495c8339ce58\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:21:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:21:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:21:00Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:21:00Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l6nc7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T15:21:00Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-rd8p2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.640006 5123 status_manager.go:919] "Failed to update status for pod" pod="openshift-dns/node-resolver-lvztx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7c008d9e-3c97-4fcc-a448-c3c34829b24a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:21:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:21:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:21:00Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:21:00Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gffd7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T15:21:00Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-lvztx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.644921 5123 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9f71a554-e414-4bc3-96d2-674060397afe-trusted-ca\") pod \"9f71a554-e414-4bc3-96d2-674060397afe\" (UID: \"9f71a554-e414-4bc3-96d2-674060397afe\") " Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.645057 5123 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f559dfa3-3917-43a2-97f6-61ddfda10e93-serving-cert\") pod 
\"f559dfa3-3917-43a2-97f6-61ddfda10e93\" (UID: \"f559dfa3-3917-43a2-97f6-61ddfda10e93\") " Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.645088 5123 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-trusted-ca-bundle\") pod \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\" (UID: \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\") " Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.645118 5123 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-serving-cert\") pod \"dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9\" (UID: \"dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9\") " Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.645138 5123 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/6ee8fbd3-1f81-4666-96da-5afc70819f1a-samples-operator-tls\") pod \"6ee8fbd3-1f81-4666-96da-5afc70819f1a\" (UID: \"6ee8fbd3-1f81-4666-96da-5afc70819f1a\") " Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.645166 5123 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/6077b63e-53a2-4f96-9d56-1ce0324e4913-tmp-dir\") pod \"6077b63e-53a2-4f96-9d56-1ce0324e4913\" (UID: \"6077b63e-53a2-4f96-9d56-1ce0324e4913\") " Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.645233 5123 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/9e9b5059-1b3e-4067-a63d-2952cbe863af-registry-certificates\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.645260 5123 
reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cc85e424-18b2-4924-920b-bd291a8c4b01-utilities\") pod \"cc85e424-18b2-4924-920b-bd291a8c4b01\" (UID: \"cc85e424-18b2-4924-920b-bd291a8c4b01\") " Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.645631 5123 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/9e9b5059-1b3e-4067-a63d-2952cbe863af-installation-pull-secrets\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.645713 5123 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4g8ts\" (UniqueName: \"kubernetes.io/projected/92dfbade-90b6-4169-8c07-72cff7f2c82b-kube-api-access-4g8ts\") pod \"92dfbade-90b6-4169-8c07-72cff7f2c82b\" (UID: \"92dfbade-90b6-4169-8c07-72cff7f2c82b\") " Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.645831 5123 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6g4lr\" (UniqueName: \"kubernetes.io/projected/f7e2c886-118e-43bb-bef1-c78134de392b-kube-api-access-6g4lr\") pod \"f7e2c886-118e-43bb-bef1-c78134de392b\" (UID: \"f7e2c886-118e-43bb-bef1-c78134de392b\") " Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.645871 5123 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/567683bd-0efc-4f21-b076-e28559628404-etcd-client\") pod \"567683bd-0efc-4f21-b076-e28559628404\" (UID: \"567683bd-0efc-4f21-b076-e28559628404\") " Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.645912 5123 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m26jq\" (UniqueName: 
\"kubernetes.io/projected/567683bd-0efc-4f21-b076-e28559628404-kube-api-access-m26jq\") pod \"567683bd-0efc-4f21-b076-e28559628404\" (UID: \"567683bd-0efc-4f21-b076-e28559628404\") " Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.645940 5123 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff-catalog-content\") pod \"9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff\" (UID: \"9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff\") " Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.646164 5123 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8pskd\" (UniqueName: \"kubernetes.io/projected/a555ff2e-0be6-46d5-897d-863bb92ae2b3-kube-api-access-8pskd\") pod \"a555ff2e-0be6-46d5-897d-863bb92ae2b3\" (UID: \"a555ff2e-0be6-46d5-897d-863bb92ae2b3\") " Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.646193 5123 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q4smf\" (UniqueName: \"kubernetes.io/projected/0dd0fbac-8c0d-4228-8faa-abbeedabf7db-kube-api-access-q4smf\") pod \"0dd0fbac-8c0d-4228-8faa-abbeedabf7db\" (UID: \"0dd0fbac-8c0d-4228-8faa-abbeedabf7db\") " Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.646258 5123 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-session\") pod \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\" (UID: \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\") " Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.646289 5123 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/869851b9-7ffb-4af0-b166-1d8aa40a5f80-cni-sysctl-allowlist\") pod \"869851b9-7ffb-4af0-b166-1d8aa40a5f80\" (UID: 
\"869851b9-7ffb-4af0-b166-1d8aa40a5f80\") " Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.646334 5123 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pgx6b\" (UniqueName: \"kubernetes.io/projected/f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4-kube-api-access-pgx6b\") pod \"f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4\" (UID: \"f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4\") " Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.646397 5123 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-trusted-ca-bundle\") pod \"d19cb085-0c5b-4810-b654-ce7923221d90\" (UID: \"d19cb085-0c5b-4810-b654-ce7923221d90\") " Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.646471 5123 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a555ff2e-0be6-46d5-897d-863bb92ae2b3-config\") pod \"a555ff2e-0be6-46d5-897d-863bb92ae2b3\" (UID: \"a555ff2e-0be6-46d5-897d-863bb92ae2b3\") " Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.646498 5123 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6dmhf\" (UniqueName: \"kubernetes.io/projected/736c54fe-349c-4bb9-870a-d1c1d1c03831-kube-api-access-6dmhf\") pod \"736c54fe-349c-4bb9-870a-d1c1d1c03831\" (UID: \"736c54fe-349c-4bb9-870a-d1c1d1c03831\") " Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.646521 5123 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f559dfa3-3917-43a2-97f6-61ddfda10e93-trusted-ca-bundle\") pod \"f559dfa3-3917-43a2-97f6-61ddfda10e93\" (UID: \"f559dfa3-3917-43a2-97f6-61ddfda10e93\") " Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.646536 5123 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/secret/f559dfa3-3917-43a2-97f6-61ddfda10e93-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "f559dfa3-3917-43a2-97f6-61ddfda10e93" (UID: "f559dfa3-3917-43a2-97f6-61ddfda10e93"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.646579 5123 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/a52afe44-fb37-46ed-a1f8-bf39727a3cbe-cert\") pod \"a52afe44-fb37-46ed-a1f8-bf39727a3cbe\" (UID: \"a52afe44-fb37-46ed-a1f8-bf39727a3cbe\") " Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.646724 5123 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7jjkz\" (UniqueName: \"kubernetes.io/projected/301e1965-1754-483d-b6cc-bfae7038bbca-kube-api-access-7jjkz\") pod \"301e1965-1754-483d-b6cc-bfae7038bbca\" (UID: \"301e1965-1754-483d-b6cc-bfae7038bbca\") " Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.646726 5123 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6ee8fbd3-1f81-4666-96da-5afc70819f1a-samples-operator-tls" (OuterVolumeSpecName: "samples-operator-tls") pod "6ee8fbd3-1f81-4666-96da-5afc70819f1a" (UID: "6ee8fbd3-1f81-4666-96da-5afc70819f1a"). InnerVolumeSpecName "samples-operator-tls". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.646820 5123 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z5rsr\" (UniqueName: \"kubernetes.io/projected/af33e427-6803-48c2-a76a-dd9deb7cbf9a-kube-api-access-z5rsr\") pod \"af33e427-6803-48c2-a76a-dd9deb7cbf9a\" (UID: \"af33e427-6803-48c2-a76a-dd9deb7cbf9a\") " Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.646865 5123 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/af33e427-6803-48c2-a76a-dd9deb7cbf9a-env-overrides\") pod \"af33e427-6803-48c2-a76a-dd9deb7cbf9a\" (UID: \"af33e427-6803-48c2-a76a-dd9deb7cbf9a\") " Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.646893 5123 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mjwtd\" (UniqueName: \"kubernetes.io/projected/869851b9-7ffb-4af0-b166-1d8aa40a5f80-kube-api-access-mjwtd\") pod \"869851b9-7ffb-4af0-b166-1d8aa40a5f80\" (UID: \"869851b9-7ffb-4af0-b166-1d8aa40a5f80\") " Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.646914 5123 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2325ffef-9d5b-447f-b00e-3efc429acefe-config\") pod \"2325ffef-9d5b-447f-b00e-3efc429acefe\" (UID: \"2325ffef-9d5b-447f-b00e-3efc429acefe\") " Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.646936 5123 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7afa918d-be67-40a6-803c-d3b0ae99d815-config\") pod \"7afa918d-be67-40a6-803c-d3b0ae99d815\" (UID: \"7afa918d-be67-40a6-803c-d3b0ae99d815\") " Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.646943 5123 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff" (UID: "9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.646980 5123 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d19cb085-0c5b-4810-b654-ce7923221d90-serving-cert\") pod \"d19cb085-0c5b-4810-b654-ce7923221d90\" (UID: \"d19cb085-0c5b-4810-b654-ce7923221d90\") " Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.647003 5123 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-cliconfig\") pod \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\" (UID: \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\") " Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.647021 5123 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-image-registry-operator-tls\") pod \"20ce4d18-fe25-4696-ad7c-1bd2d6200a3e\" (UID: \"20ce4d18-fe25-4696-ad7c-1bd2d6200a3e\") " Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.647039 5123 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-trusted-ca-bundle\") pod \"6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca\" (UID: \"6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca\") " Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.647056 5123 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: 
\"kubernetes.io/configmap/d565531a-ff86-4608-9d19-767de01ac31b-images\") pod \"d565531a-ff86-4608-9d19-767de01ac31b\" (UID: \"d565531a-ff86-4608-9d19-767de01ac31b\") " Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.647083 5123 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whereabouts-flatfile-configmap\" (UniqueName: \"kubernetes.io/configmap/869851b9-7ffb-4af0-b166-1d8aa40a5f80-whereabouts-flatfile-configmap\") pod \"869851b9-7ffb-4af0-b166-1d8aa40a5f80\" (UID: \"869851b9-7ffb-4af0-b166-1d8aa40a5f80\") " Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.647156 5123 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-serving-cert\") pod \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\" (UID: \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\") " Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.647188 5123 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-config\") pod \"fc8db2c7-859d-47b3-a900-2bd0c0b2973b\" (UID: \"fc8db2c7-859d-47b3-a900-2bd0c0b2973b\") " Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.647239 5123 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/7df94c10-441d-4386-93a6-6730fb7bcde0-ovn-control-plane-metrics-cert\") pod \"7df94c10-441d-4386-93a6-6730fb7bcde0\" (UID: \"7df94c10-441d-4386-93a6-6730fb7bcde0\") " Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.647260 5123 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/f65c0ac1-8bca-454d-a2e6-e35cb418beac-tmp-dir\") pod \"f65c0ac1-8bca-454d-a2e6-e35cb418beac\" (UID: \"f65c0ac1-8bca-454d-a2e6-e35cb418beac\") " 
Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.647279 5123 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qqbfk\" (UniqueName: \"kubernetes.io/projected/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-kube-api-access-qqbfk\") pod \"b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a\" (UID: \"b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a\") " Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.647312 5123 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ddlk9\" (UniqueName: \"kubernetes.io/projected/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-kube-api-access-ddlk9\") pod \"7fcc6409-8a0f-44c3-89e7-5aecd7610f8a\" (UID: \"7fcc6409-8a0f-44c3-89e7-5aecd7610f8a\") " Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.647308 5123 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9e9b5059-1b3e-4067-a63d-2952cbe863af-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.647356 5123 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c5f2bfad-70f6-4185-a3d9-81ce12720767-kube-api-access\") pod \"c5f2bfad-70f6-4185-a3d9-81ce12720767\" (UID: \"c5f2bfad-70f6-4185-a3d9-81ce12720767\") " Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.647327 5123 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6077b63e-53a2-4f96-9d56-1ce0324e4913-tmp-dir" (OuterVolumeSpecName: "tmp-dir") pod "6077b63e-53a2-4f96-9d56-1ce0324e4913" (UID: "6077b63e-53a2-4f96-9d56-1ce0324e4913"). InnerVolumeSpecName "tmp-dir". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.647390 5123 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/736c54fe-349c-4bb9-870a-d1c1d1c03831-serving-cert\") pod \"736c54fe-349c-4bb9-870a-d1c1d1c03831\" (UID: \"736c54fe-349c-4bb9-870a-d1c1d1c03831\") "
Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.647417 5123 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/ce090a97-9ab6-4c40-a719-64ff2acd9778-signing-key\") pod \"ce090a97-9ab6-4c40-a719-64ff2acd9778\" (UID: \"ce090a97-9ab6-4c40-a719-64ff2acd9778\") "
Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.647446 5123 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9z4sw\" (UniqueName: \"kubernetes.io/projected/e1d2a42d-af1d-4054-9618-ab545e0ed8b7-kube-api-access-9z4sw\") pod \"e1d2a42d-af1d-4054-9618-ab545e0ed8b7\" (UID: \"e1d2a42d-af1d-4054-9618-ab545e0ed8b7\") "
Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.647466 5123 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/af33e427-6803-48c2-a76a-dd9deb7cbf9a-ovnkube-script-lib\") pod \"af33e427-6803-48c2-a76a-dd9deb7cbf9a\" (UID: \"af33e427-6803-48c2-a76a-dd9deb7cbf9a\") "
Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.647484 5123 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-trusted-ca\") pod \"20ce4d18-fe25-4696-ad7c-1bd2d6200a3e\" (UID: \"20ce4d18-fe25-4696-ad7c-1bd2d6200a3e\") "
Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.647508 5123 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/736c54fe-349c-4bb9-870a-d1c1d1c03831-client-ca\") pod \"736c54fe-349c-4bb9-870a-d1c1d1c03831\" (UID: \"736c54fe-349c-4bb9-870a-d1c1d1c03831\") "
Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.647527 5123 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-template-error\") pod \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\" (UID: \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\") "
Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.647542 5123 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-service-ca\") pod \"6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca\" (UID: \"6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca\") "
Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.647558 5123 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a555ff2e-0be6-46d5-897d-863bb92ae2b3-client-ca\") pod \"a555ff2e-0be6-46d5-897d-863bb92ae2b3\" (UID: \"a555ff2e-0be6-46d5-897d-863bb92ae2b3\") "
Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.647587 5123 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hm9x7\" (UniqueName: \"kubernetes.io/projected/f559dfa3-3917-43a2-97f6-61ddfda10e93-kube-api-access-hm9x7\") pod \"f559dfa3-3917-43a2-97f6-61ddfda10e93\" (UID: \"f559dfa3-3917-43a2-97f6-61ddfda10e93\") "
Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.647612 5123 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/ce090a97-9ab6-4c40-a719-64ff2acd9778-signing-cabundle\") pod \"ce090a97-9ab6-4c40-a719-64ff2acd9778\" (UID: \"ce090a97-9ab6-4c40-a719-64ff2acd9778\") "
Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.647633 5123 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pllx6\" (UniqueName: \"kubernetes.io/projected/81e39f7b-62e4-4fc9-992a-6535ce127a02-kube-api-access-pllx6\") pod \"81e39f7b-62e4-4fc9-992a-6535ce127a02\" (UID: \"81e39f7b-62e4-4fc9-992a-6535ce127a02\") "
Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.647649 5123 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/f559dfa3-3917-43a2-97f6-61ddfda10e93-etcd-client\") pod \"f559dfa3-3917-43a2-97f6-61ddfda10e93\" (UID: \"f559dfa3-3917-43a2-97f6-61ddfda10e93\") "
Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.647667 5123 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-tmp\") pod \"d45be74c-0d98-4d18-90e4-f7ef1b6daaf7\" (UID: \"d45be74c-0d98-4d18-90e4-f7ef1b6daaf7\") "
Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.647691 5123 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01080b46-74f1-4191-8755-5152a57b3b25-config\") pod \"01080b46-74f1-4191-8755-5152a57b3b25\" (UID: \"01080b46-74f1-4191-8755-5152a57b3b25\") "
Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.647742 5123 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/a7a88189-c967-4640-879e-27665747f20c-tmpfs\") pod \"a7a88189-c967-4640-879e-27665747f20c\" (UID: \"a7a88189-c967-4640-879e-27665747f20c\") "
Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.647773 5123 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/16bdd140-dce1-464c-ab47-dd5798d1d256-available-featuregates\") pod \"16bdd140-dce1-464c-ab47-dd5798d1d256\" (UID: \"16bdd140-dce1-464c-ab47-dd5798d1d256\") "
Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.647806 5123 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ws8zz\" (UniqueName: \"kubernetes.io/projected/9e9b5059-1b3e-4067-a63d-2952cbe863af-kube-api-access-ws8zz\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.647835 5123 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ptkcf\" (UniqueName: \"kubernetes.io/projected/7599e0b6-bddf-4def-b7f2-0b32206e8651-kube-api-access-ptkcf\") pod \"7599e0b6-bddf-4def-b7f2-0b32206e8651\" (UID: \"7599e0b6-bddf-4def-b7f2-0b32206e8651\") "
Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.647874 5123 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4hb7m\" (UniqueName: \"kubernetes.io/projected/94a6e063-3d1a-4d44-875d-185291448c31-kube-api-access-4hb7m\") pod \"94a6e063-3d1a-4d44-875d-185291448c31\" (UID: \"94a6e063-3d1a-4d44-875d-185291448c31\") "
Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.647137 5123 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4-kube-api-access-pgx6b" (OuterVolumeSpecName: "kube-api-access-pgx6b") pod "f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4" (UID: "f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4"). InnerVolumeSpecName "kube-api-access-pgx6b". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.647909 5123 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mfzkj\" (UniqueName: \"kubernetes.io/projected/0effdbcf-dd7d-404d-9d48-77536d665a5d-kube-api-access-mfzkj\") pod \"0effdbcf-dd7d-404d-9d48-77536d665a5d\" (UID: \"0effdbcf-dd7d-404d-9d48-77536d665a5d\") "
Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.647971 5123 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/301e1965-1754-483d-b6cc-bfae7038bbca-srv-cert\") pod \"301e1965-1754-483d-b6cc-bfae7038bbca\" (UID: \"301e1965-1754-483d-b6cc-bfae7038bbca\") "
Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.648032 5123 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted-pem\" (UniqueName: \"kubernetes.io/empty-dir/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-ca-trust-extracted-pem\") pod \"20ce4d18-fe25-4696-ad7c-1bd2d6200a3e\" (UID: \"20ce4d18-fe25-4696-ad7c-1bd2d6200a3e\") "
Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.648138 5123 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.648165 5123 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-tmp\") pod \"b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a\" (UID: \"b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a\") "
Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.648194 5123 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-serving-cert\") pod \"d45be74c-0d98-4d18-90e4-f7ef1b6daaf7\" (UID: \"d45be74c-0d98-4d18-90e4-f7ef1b6daaf7\") "
Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.648231 5123 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/92dfbade-90b6-4169-8c07-72cff7f2c82b-metrics-tls\") pod \"92dfbade-90b6-4169-8c07-72cff7f2c82b\" (UID: \"92dfbade-90b6-4169-8c07-72cff7f2c82b\") "
Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.648252 5123 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a-catalog-content\") pod \"71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a\" (UID: \"71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a\") "
Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.648279 5123 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a-utilities\") pod \"71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a\" (UID: \"71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a\") "
Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.648303 5123 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/0dd0fbac-8c0d-4228-8faa-abbeedabf7db-webhook-certs\") pod \"0dd0fbac-8c0d-4228-8faa-abbeedabf7db\" (UID: \"0dd0fbac-8c0d-4228-8faa-abbeedabf7db\") "
Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.648324 5123 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/81e39f7b-62e4-4fc9-992a-6535ce127a02-multus-daemon-config\") pod \"81e39f7b-62e4-4fc9-992a-6535ce127a02\" (UID: \"81e39f7b-62e4-4fc9-992a-6535ce127a02\") "
Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.648373 5123 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l87hs\" (UniqueName: \"kubernetes.io/projected/5ebfebf6-3ecd-458e-943f-bb25b52e2718-kube-api-access-l87hs\") pod \"5ebfebf6-3ecd-458e-943f-bb25b52e2718\" (UID: \"5ebfebf6-3ecd-458e-943f-bb25b52e2718\") "
Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.648404 5123 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zsb9b\" (UniqueName: \"kubernetes.io/projected/09cfa50b-4138-4585-a53e-64dd3ab73335-kube-api-access-zsb9b\") pod \"09cfa50b-4138-4585-a53e-64dd3ab73335\" (UID: \"09cfa50b-4138-4585-a53e-64dd3ab73335\") "
Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.648425 5123 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/18f80adb-c1c3-49ba-8ee4-932c851d3897-stats-auth\") pod \"18f80adb-c1c3-49ba-8ee4-932c851d3897\" (UID: \"18f80adb-c1c3-49ba-8ee4-932c851d3897\") "
Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.648451 5123 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xnxbn\" (UniqueName: \"kubernetes.io/projected/ce090a97-9ab6-4c40-a719-64ff2acd9778-kube-api-access-xnxbn\") pod \"ce090a97-9ab6-4c40-a719-64ff2acd9778\" (UID: \"ce090a97-9ab6-4c40-a719-64ff2acd9778\") "
Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.648472 5123 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d4tqq\" (UniqueName: \"kubernetes.io/projected/6ee8fbd3-1f81-4666-96da-5afc70819f1a-kube-api-access-d4tqq\") pod \"6ee8fbd3-1f81-4666-96da-5afc70819f1a\" (UID: \"6ee8fbd3-1f81-4666-96da-5afc70819f1a\") "
Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.648495 5123 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-console-serving-cert\") pod \"6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca\" (UID: \"6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca\") "
Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.648518 5123 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/f559dfa3-3917-43a2-97f6-61ddfda10e93-encryption-config\") pod \"f559dfa3-3917-43a2-97f6-61ddfda10e93\" (UID: \"f559dfa3-3917-43a2-97f6-61ddfda10e93\") "
Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.648542 5123 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-config\") pod \"d45be74c-0d98-4d18-90e4-f7ef1b6daaf7\" (UID: \"d45be74c-0d98-4d18-90e4-f7ef1b6daaf7\") "
Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.648564 5123 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-audit-policies\") pod \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\" (UID: \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\") "
Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.648587 5123 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7599e0b6-bddf-4def-b7f2-0b32206e8651-config\") pod \"7599e0b6-bddf-4def-b7f2-0b32206e8651\" (UID: \"7599e0b6-bddf-4def-b7f2-0b32206e8651\") "
Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.648879 5123 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a52afe44-fb37-46ed-a1f8-bf39727a3cbe-cert" (OuterVolumeSpecName: "cert") pod "a52afe44-fb37-46ed-a1f8-bf39727a3cbe" (UID: "a52afe44-fb37-46ed-a1f8-bf39727a3cbe"). InnerVolumeSpecName "cert". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.649151 5123 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-image-registry-operator-tls" (OuterVolumeSpecName: "image-registry-operator-tls") pod "20ce4d18-fe25-4696-ad7c-1bd2d6200a3e" (UID: "20ce4d18-fe25-4696-ad7c-1bd2d6200a3e"). InnerVolumeSpecName "image-registry-operator-tls". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.649257 5123 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/567683bd-0efc-4f21-b076-e28559628404-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "567683bd-0efc-4f21-b076-e28559628404" (UID: "567683bd-0efc-4f21-b076-e28559628404"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.649343 5123 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9" (UID: "dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.649392 5123 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9f71a554-e414-4bc3-96d2-674060397afe-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "9f71a554-e414-4bc3-96d2-674060397afe" (UID: "9f71a554-e414-4bc3-96d2-674060397afe"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.649930 5123 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "d19cb085-0c5b-4810-b654-ce7923221d90" (UID: "d19cb085-0c5b-4810-b654-ce7923221d90"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.649942 5123 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/567683bd-0efc-4f21-b076-e28559628404-kube-api-access-m26jq" (OuterVolumeSpecName: "kube-api-access-m26jq") pod "567683bd-0efc-4f21-b076-e28559628404" (UID: "567683bd-0efc-4f21-b076-e28559628404"). InnerVolumeSpecName "kube-api-access-m26jq". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.650205 5123 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "20ce4d18-fe25-4696-ad7c-1bd2d6200a3e" (UID: "20ce4d18-fe25-4696-ad7c-1bd2d6200a3e"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.650388 5123 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "6edfcf45-925b-4eff-b940-95b6fc0b85d4" (UID: "6edfcf45-925b-4eff-b940-95b6fc0b85d4"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.650252 5123 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/9f71a554-e414-4bc3-96d2-674060397afe-metrics-tls\") pod \"9f71a554-e414-4bc3-96d2-674060397afe\" (UID: \"9f71a554-e414-4bc3-96d2-674060397afe\") "
Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.650476 5123 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l9stx\" (UniqueName: \"kubernetes.io/projected/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-kube-api-access-l9stx\") pod \"6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca\" (UID: \"6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca\") "
Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.650494 5123 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cc85e424-18b2-4924-920b-bd291a8c4b01-utilities" (OuterVolumeSpecName: "utilities") pod "cc85e424-18b2-4924-920b-bd291a8c4b01" (UID: "cc85e424-18b2-4924-920b-bd291a8c4b01"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.650517 5123 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2325ffef-9d5b-447f-b00e-3efc429acefe-serving-cert\") pod \"2325ffef-9d5b-447f-b00e-3efc429acefe\" (UID: \"2325ffef-9d5b-447f-b00e-3efc429acefe\") "
Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.650593 5123 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9f71a554-e414-4bc3-96d2-674060397afe-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "9f71a554-e414-4bc3-96d2-674060397afe" (UID: "9f71a554-e414-4bc3-96d2-674060397afe"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.650625 5123 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-srv-cert\") pod \"7fcc6409-8a0f-44c3-89e7-5aecd7610f8a\" (UID: \"7fcc6409-8a0f-44c3-89e7-5aecd7610f8a\") "
Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.650658 5123 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c5f2bfad-70f6-4185-a3d9-81ce12720767-config\") pod \"c5f2bfad-70f6-4185-a3d9-81ce12720767\" (UID: \"c5f2bfad-70f6-4185-a3d9-81ce12720767\") "
Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.650682 5123 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f559dfa3-3917-43a2-97f6-61ddfda10e93-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "f559dfa3-3917-43a2-97f6-61ddfda10e93" (UID: "f559dfa3-3917-43a2-97f6-61ddfda10e93"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.650688 5123 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zg8nc\" (UniqueName: \"kubernetes.io/projected/2325ffef-9d5b-447f-b00e-3efc429acefe-kube-api-access-zg8nc\") pod \"2325ffef-9d5b-447f-b00e-3efc429acefe\" (UID: \"2325ffef-9d5b-447f-b00e-3efc429acefe\") "
Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.650722 5123 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-machine-approver-tls\") pod \"fc8db2c7-859d-47b3-a900-2bd0c0b2973b\" (UID: \"fc8db2c7-859d-47b3-a900-2bd0c0b2973b\") "
Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.650759 5123 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff-utilities\") pod \"9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff\" (UID: \"9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff\") "
Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.650786 5123 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/18f80adb-c1c3-49ba-8ee4-932c851d3897-service-ca-bundle\") pod \"18f80adb-c1c3-49ba-8ee4-932c851d3897\" (UID: \"18f80adb-c1c3-49ba-8ee4-932c851d3897\") "
Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.650813 5123 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a555ff2e-0be6-46d5-897d-863bb92ae2b3-serving-cert\") pod \"a555ff2e-0be6-46d5-897d-863bb92ae2b3\" (UID: \"a555ff2e-0be6-46d5-897d-863bb92ae2b3\") "
Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.650845 5123 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/9e9b5059-1b3e-4067-a63d-2952cbe863af-ca-trust-extracted\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.650849 5123 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a555ff2e-0be6-46d5-897d-863bb92ae2b3-kube-api-access-8pskd" (OuterVolumeSpecName: "kube-api-access-8pskd") pod "a555ff2e-0be6-46d5-897d-863bb92ae2b3" (UID: "a555ff2e-0be6-46d5-897d-863bb92ae2b3"). InnerVolumeSpecName "kube-api-access-8pskd". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.650881 5123 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/31fa8943-81cc-4750-a0b7-0fa9ab5af883-utilities\") pod \"31fa8943-81cc-4750-a0b7-0fa9ab5af883\" (UID: \"31fa8943-81cc-4750-a0b7-0fa9ab5af883\") "
Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.650908 5123 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-router-certs\") pod \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\" (UID: \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\") "
Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.650941 5123 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-marketplace-operator-metrics\") pod \"b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a\" (UID: \"b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a\") "
Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.650970 5123 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/7df94c10-441d-4386-93a6-6730fb7bcde0-ovnkube-config\") pod \"7df94c10-441d-4386-93a6-6730fb7bcde0\" (UID: \"7df94c10-441d-4386-93a6-6730fb7bcde0\") "
Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.650997 5123 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/593a3561-7760-45c5-8f91-5aaef7475d0f-node-bootstrap-token\") pod \"593a3561-7760-45c5-8f91-5aaef7475d0f\" (UID: \"593a3561-7760-45c5-8f91-5aaef7475d0f\") "
Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.651025 5123 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/e1d2a42d-af1d-4054-9618-ab545e0ed8b7-mcd-auth-proxy-config\") pod \"e1d2a42d-af1d-4054-9618-ab545e0ed8b7\" (UID: \"e1d2a42d-af1d-4054-9618-ab545e0ed8b7\") "
Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.651052 5123 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9e9b5059-1b3e-4067-a63d-2952cbe863af-trusted-ca\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.651081 5123 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/567683bd-0efc-4f21-b076-e28559628404-tmp-dir\") pod \"567683bd-0efc-4f21-b076-e28559628404\" (UID: \"567683bd-0efc-4f21-b076-e28559628404\") "
Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.651106 5123 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f65c0ac1-8bca-454d-a2e6-e35cb418beac-kube-api-access\") pod \"f65c0ac1-8bca-454d-a2e6-e35cb418beac\" (UID: \"f65c0ac1-8bca-454d-a2e6-e35cb418beac\") "
Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.651130 5123 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/584e1f4a-8205-47d7-8efb-3afc6017c4c9-catalog-content\") pod \"584e1f4a-8205-47d7-8efb-3afc6017c4c9\" (UID: \"584e1f4a-8205-47d7-8efb-3afc6017c4c9\") "
Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.651154 5123 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d7e8f42f-dc0e-424b-bb56-5ec849834888-kube-api-access\") pod \"d7e8f42f-dc0e-424b-bb56-5ec849834888\" (UID: \"d7e8f42f-dc0e-424b-bb56-5ec849834888\") "
Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.651178 5123 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/94a6e063-3d1a-4d44-875d-185291448c31-catalog-content\") pod \"94a6e063-3d1a-4d44-875d-185291448c31\" (UID: \"94a6e063-3d1a-4d44-875d-185291448c31\") "
Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.651202 5123 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/567683bd-0efc-4f21-b076-e28559628404-etcd-ca\") pod \"567683bd-0efc-4f21-b076-e28559628404\" (UID: \"567683bd-0efc-4f21-b076-e28559628404\") "
Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.651247 5123 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/7afa918d-be67-40a6-803c-d3b0ae99d815-tmp\") pod \"7afa918d-be67-40a6-803c-d3b0ae99d815\" (UID: \"7afa918d-be67-40a6-803c-d3b0ae99d815\") "
Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.651260 5123 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f559dfa3-3917-43a2-97f6-61ddfda10e93-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "f559dfa3-3917-43a2-97f6-61ddfda10e93" (UID: "f559dfa3-3917-43a2-97f6-61ddfda10e93"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.651309 5123 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/d565531a-ff86-4608-9d19-767de01ac31b-proxy-tls\") pod \"d565531a-ff86-4608-9d19-767de01ac31b\" (UID: \"d565531a-ff86-4608-9d19-767de01ac31b\") "
Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.651341 5123 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/a7a88189-c967-4640-879e-27665747f20c-apiservice-cert\") pod \"a7a88189-c967-4640-879e-27665747f20c\" (UID: \"a7a88189-c967-4640-879e-27665747f20c\") "
Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.651368 5123 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7afa918d-be67-40a6-803c-d3b0ae99d815-serving-cert\") pod \"7afa918d-be67-40a6-803c-d3b0ae99d815\" (UID: \"7afa918d-be67-40a6-803c-d3b0ae99d815\") "
Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.651393 5123 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/301e1965-1754-483d-b6cc-bfae7038bbca-tmpfs\") pod \"301e1965-1754-483d-b6cc-bfae7038bbca\" (UID: \"301e1965-1754-483d-b6cc-bfae7038bbca\") "
Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.698372 5123 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f559dfa3-3917-43a2-97f6-61ddfda10e93-kube-api-access-hm9x7" (OuterVolumeSpecName: "kube-api-access-hm9x7") pod "f559dfa3-3917-43a2-97f6-61ddfda10e93" (UID: "f559dfa3-3917-43a2-97f6-61ddfda10e93"). InnerVolumeSpecName "kube-api-access-hm9x7". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.698561 5123 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/301e1965-1754-483d-b6cc-bfae7038bbca-kube-api-access-7jjkz" (OuterVolumeSpecName: "kube-api-access-7jjkz") pod "301e1965-1754-483d-b6cc-bfae7038bbca" (UID: "301e1965-1754-483d-b6cc-bfae7038bbca"). InnerVolumeSpecName "kube-api-access-7jjkz". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.699032 5123 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/81e39f7b-62e4-4fc9-992a-6535ce127a02-kube-api-access-pllx6" (OuterVolumeSpecName: "kube-api-access-pllx6") pod "81e39f7b-62e4-4fc9-992a-6535ce127a02" (UID: "81e39f7b-62e4-4fc9-992a-6535ce127a02"). InnerVolumeSpecName "kube-api-access-pllx6". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.699136 5123 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-tmp" (OuterVolumeSpecName: "tmp") pod "d45be74c-0d98-4d18-90e4-f7ef1b6daaf7" (UID: "d45be74c-0d98-4d18-90e4-f7ef1b6daaf7"). InnerVolumeSpecName "tmp". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.699385 5123 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/16bdd140-dce1-464c-ab47-dd5798d1d256-available-featuregates" (OuterVolumeSpecName: "available-featuregates") pod "16bdd140-dce1-464c-ab47-dd5798d1d256" (UID: "16bdd140-dce1-464c-ab47-dd5798d1d256"). InnerVolumeSpecName "available-featuregates". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.699655 5123 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/869851b9-7ffb-4af0-b166-1d8aa40a5f80-kube-api-access-mjwtd" (OuterVolumeSpecName: "kube-api-access-mjwtd") pod "869851b9-7ffb-4af0-b166-1d8aa40a5f80" (UID: "869851b9-7ffb-4af0-b166-1d8aa40a5f80"). InnerVolumeSpecName "kube-api-access-mjwtd". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.699958 5123 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/736c54fe-349c-4bb9-870a-d1c1d1c03831-client-ca" (OuterVolumeSpecName: "client-ca") pod "736c54fe-349c-4bb9-870a-d1c1d1c03831" (UID: "736c54fe-349c-4bb9-870a-d1c1d1c03831"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.700012 5123 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/af33e427-6803-48c2-a76a-dd9deb7cbf9a-ovn-node-metrics-cert\") pod \"af33e427-6803-48c2-a76a-dd9deb7cbf9a\" (UID: \"af33e427-6803-48c2-a76a-dd9deb7cbf9a\") "
Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.700164 5123 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0dd0fbac-8c0d-4228-8faa-abbeedabf7db-kube-api-access-q4smf" (OuterVolumeSpecName: "kube-api-access-q4smf") pod "0dd0fbac-8c0d-4228-8faa-abbeedabf7db" (UID: "0dd0fbac-8c0d-4228-8faa-abbeedabf7db"). InnerVolumeSpecName "kube-api-access-q4smf". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.700127 5123 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/18f80adb-c1c3-49ba-8ee4-932c851d3897-metrics-certs\") pod \"18f80adb-c1c3-49ba-8ee4-932c851d3897\" (UID: \"18f80adb-c1c3-49ba-8ee4-932c851d3897\") "
Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.700337 5123 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qgrkj\" (UniqueName: \"kubernetes.io/projected/42a11a02-47e1-488f-b270-2679d3298b0e-kube-api-access-qgrkj\") pod \"42a11a02-47e1-488f-b270-2679d3298b0e\" (UID: \"42a11a02-47e1-488f-b270-2679d3298b0e\") "
Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.700372 5123 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-ca-trust-extracted-pem" (OuterVolumeSpecName: "ca-trust-extracted-pem") pod "20ce4d18-fe25-4696-ad7c-1bd2d6200a3e" (UID: "20ce4d18-fe25-4696-ad7c-1bd2d6200a3e"). InnerVolumeSpecName "ca-trust-extracted-pem". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.700397 5123 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/81e39f7b-62e4-4fc9-992a-6535ce127a02-cni-binary-copy\") pod \"81e39f7b-62e4-4fc9-992a-6535ce127a02\" (UID: \"81e39f7b-62e4-4fc9-992a-6535ce127a02\") "
Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.700369 5123 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0effdbcf-dd7d-404d-9d48-77536d665a5d-kube-api-access-mfzkj" (OuterVolumeSpecName: "kube-api-access-mfzkj") pod "0effdbcf-dd7d-404d-9d48-77536d665a5d" (UID: "0effdbcf-dd7d-404d-9d48-77536d665a5d"). InnerVolumeSpecName "kube-api-access-mfzkj". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.700504 5123 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wbmqg\" (UniqueName: \"kubernetes.io/projected/18f80adb-c1c3-49ba-8ee4-932c851d3897-kube-api-access-wbmqg\") pod \"18f80adb-c1c3-49ba-8ee4-932c851d3897\" (UID: \"18f80adb-c1c3-49ba-8ee4-932c851d3897\") "
Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.700539 5123 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/92dfbade-90b6-4169-8c07-72cff7f2c82b-tmp-dir\") pod \"92dfbade-90b6-4169-8c07-72cff7f2c82b\" (UID: \"92dfbade-90b6-4169-8c07-72cff7f2c82b\") "
Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.700581 5123 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8nspp\" (UniqueName: \"kubernetes.io/projected/a7a88189-c967-4640-879e-27665747f20c-kube-api-access-8nspp\") pod \"a7a88189-c967-4640-879e-27665747f20c\" (UID: \"a7a88189-c967-4640-879e-27665747f20c\") "
Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.700629 5123 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/149b3c48-e17c-4a66-a835-d86dabf6ff13-utilities\") pod \"149b3c48-e17c-4a66-a835-d86dabf6ff13\" (UID: \"149b3c48-e17c-4a66-a835-d86dabf6ff13\") "
Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.700668 5123 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/9e9b5059-1b3e-4067-a63d-2952cbe863af-bound-sa-token\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.700696 5123 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-console-oauth-config\") pod \"6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca\" (UID: \"6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca\") "
Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.700733 5123 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-image-import-ca\") pod \"d19cb085-0c5b-4810-b654-ce7923221d90\" (UID: \"d19cb085-0c5b-4810-b654-ce7923221d90\") "
Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.700785 5123 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/d565531a-ff86-4608-9d19-767de01ac31b-auth-proxy-config\") pod \"d565531a-ff86-4608-9d19-767de01ac31b\" (UID: \"d565531a-ff86-4608-9d19-767de01ac31b\") "
Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.700821 5123 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/584e1f4a-8205-47d7-8efb-3afc6017c4c9-utilities\") pod \"584e1f4a-8205-47d7-8efb-3afc6017c4c9\" (UID: \"584e1f4a-8205-47d7-8efb-3afc6017c4c9\") "
Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.700903 5123 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/736c54fe-349c-4bb9-870a-d1c1d1c03831-config\") pod \"736c54fe-349c-4bb9-870a-d1c1d1c03831\" (UID: \"736c54fe-349c-4bb9-870a-d1c1d1c03831\") "
Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.700937 5123 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/92dfbade-90b6-4169-8c07-72cff7f2c82b-config-volume\") pod \"92dfbade-90b6-4169-8c07-72cff7f2c82b\" (UID:
\"92dfbade-90b6-4169-8c07-72cff7f2c82b\") " Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.700974 5123 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/c5f2bfad-70f6-4185-a3d9-81ce12720767-tmp-dir\") pod \"c5f2bfad-70f6-4185-a3d9-81ce12720767\" (UID: \"c5f2bfad-70f6-4185-a3d9-81ce12720767\") " Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.701001 5123 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d7cps\" (UniqueName: \"kubernetes.io/projected/af41de71-79cf-4590-bbe9-9e8b848862cb-kube-api-access-d7cps\") pod \"af41de71-79cf-4590-bbe9-9e8b848862cb\" (UID: \"af41de71-79cf-4590-bbe9-9e8b848862cb\") " Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.701036 5123 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/736c54fe-349c-4bb9-870a-d1c1d1c03831-tmp\") pod \"736c54fe-349c-4bb9-870a-d1c1d1c03831\" (UID: \"736c54fe-349c-4bb9-870a-d1c1d1c03831\") " Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.702543 5123 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"96b4a286-31bb-42a1-934a-56ea0da8024a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:19:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:19:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:19:31Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:19:31Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:19:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"265m\\\",\\\"memory\\\":\\\"1Gi\\\"},\\\"containerID\\\":\\\"cri-o://6e07f3e2617540c10ad02b1eb35775776e35852dc555f67a34e81beeab3e64fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"265m\\\",\\\"memory\\\":\\\"1Gi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:19:37Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-bundle-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\
"cri-o://34bae6bd30c1db17488802318dfdb214ad97b12fba2bd2724522387be66bed83\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:19:42Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://b9095a46d0255140f02bb4949f61fc5120a0d62ccb27ed3e9cb8ce5f430498d9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:19:40Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mo
untPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://4f213fed9087642e2d266cffcd6b09d79db89357a2e593aab2f1f5f5de1625db\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4f213fed9087642e2d266cffcd6b09d79db89357a2e593aab2f1f5f5de1625db\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-12T15:20:48Z\\\",\\\"message\\\":\\\"vvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InOrderInformers\\\\\\\" enabled=true\\\\nW1212 15:20:48.086740 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1212 15:20:48.086994 1 builder.go:304] check-endpoints version v0.0.0-unknown-c3d9642-c3d9642\\\\nI1212 15:20:48.088436 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1823639014/tls.crt::/tmp/serving-cert-1823639014/tls.key\\\\\\\"\\\\nI1212 15:20:48.580604 1 requestheader_controller.go:255] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1212 15:20:48.583707 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1212 15:20:48.583755 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1212 15:20:48.583803 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1212 15:20:48.583812 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" 
limit=200\\\\nI1212 15:20:48.588321 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1212 15:20:48.588373 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1212 15:20:48.588382 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1212 15:20:48.588388 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1212 15:20:48.588392 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1212 15:20:48.588396 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1212 15:20:48.588400 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1212 15:20:48.588333 1 genericapiserver.go:546] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1212 15:20:48.591304 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-12T15:20:47Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://d331b50c6c609096973278d778919c9c6ac4e46695aa2e4779ca6f4805332301\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:19:42Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://40beb854e478d0f51e4640f477258c6ef53c632e71391168c8c052a47bc2a0c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\
"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://40beb854e478d0f51e4640f477258c6ef53c632e71391168c8c052a47bc2a0c6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T15:19:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T15:19:33Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T15:19:31Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.700495 5123 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f7e2c886-118e-43bb-bef1-c78134de392b-kube-api-access-6g4lr" (OuterVolumeSpecName: "kube-api-access-6g4lr") pod "f7e2c886-118e-43bb-bef1-c78134de392b" (UID: "f7e2c886-118e-43bb-bef1-c78134de392b"). InnerVolumeSpecName "kube-api-access-6g4lr". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.700983 5123 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2325ffef-9d5b-447f-b00e-3efc429acefe-config" (OuterVolumeSpecName: "config") pod "2325ffef-9d5b-447f-b00e-3efc429acefe" (UID: "2325ffef-9d5b-447f-b00e-3efc429acefe"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.701027 5123 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "6edfcf45-925b-4eff-b940-95b6fc0b85d4" (UID: "6edfcf45-925b-4eff-b940-95b6fc0b85d4"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.701139 5123 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/736c54fe-349c-4bb9-870a-d1c1d1c03831-kube-api-access-6dmhf" (OuterVolumeSpecName: "kube-api-access-6dmhf") pod "736c54fe-349c-4bb9-870a-d1c1d1c03831" (UID: "736c54fe-349c-4bb9-870a-d1c1d1c03831"). InnerVolumeSpecName "kube-api-access-6dmhf". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.701586 5123 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/81e39f7b-62e4-4fc9-992a-6535ce127a02-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "81e39f7b-62e4-4fc9-992a-6535ce127a02" (UID: "81e39f7b-62e4-4fc9-992a-6535ce127a02"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.701962 5123 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/18f80adb-c1c3-49ba-8ee4-932c851d3897-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "18f80adb-c1c3-49ba-8ee4-932c851d3897" (UID: "18f80adb-c1c3-49ba-8ee4-932c851d3897"). InnerVolumeSpecName "metrics-certs". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.702061 5123 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/92dfbade-90b6-4169-8c07-72cff7f2c82b-kube-api-access-4g8ts" (OuterVolumeSpecName: "kube-api-access-4g8ts") pod "92dfbade-90b6-4169-8c07-72cff7f2c82b" (UID: "92dfbade-90b6-4169-8c07-72cff7f2c82b"). InnerVolumeSpecName "kube-api-access-4g8ts". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 12 15:21:00 crc kubenswrapper[5123]: E1212 15:21:00.702068 5123 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-12 15:21:01.201923845 +0000 UTC m=+90.011876356 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.710444 5123 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/9e9b5059-1b3e-4067-a63d-2952cbe863af-registry-tls\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.710455 5123 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/18f80adb-c1c3-49ba-8ee4-932c851d3897-stats-auth" (OuterVolumeSpecName: "stats-auth") pod 
"18f80adb-c1c3-49ba-8ee4-932c851d3897" (UID: "18f80adb-c1c3-49ba-8ee4-932c851d3897"). InnerVolumeSpecName "stats-auth". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.710656 5123 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7afa918d-be67-40a6-803c-d3b0ae99d815-tmp" (OuterVolumeSpecName: "tmp") pod "7afa918d-be67-40a6-803c-d3b0ae99d815" (UID: "7afa918d-be67-40a6-803c-d3b0ae99d815"). InnerVolumeSpecName "tmp". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.710960 5123 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "6edfcf45-925b-4eff-b940-95b6fc0b85d4" (UID: "6edfcf45-925b-4eff-b940-95b6fc0b85d4"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.710982 5123 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c5f2bfad-70f6-4185-a3d9-81ce12720767-config" (OuterVolumeSpecName: "config") pod "c5f2bfad-70f6-4185-a3d9-81ce12720767" (UID: "c5f2bfad-70f6-4185-a3d9-81ce12720767"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.711069 5123 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/af33e427-6803-48c2-a76a-dd9deb7cbf9a-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "af33e427-6803-48c2-a76a-dd9deb7cbf9a" (UID: "af33e427-6803-48c2-a76a-dd9deb7cbf9a"). InnerVolumeSpecName "ovn-node-metrics-cert". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.711162 5123 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c5f2bfad-70f6-4185-a3d9-81ce12720767-tmp-dir" (OuterVolumeSpecName: "tmp-dir") pod "c5f2bfad-70f6-4185-a3d9-81ce12720767" (UID: "c5f2bfad-70f6-4185-a3d9-81ce12720767"). InnerVolumeSpecName "tmp-dir". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.711252 5123 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7df94c10-441d-4386-93a6-6730fb7bcde0-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "7df94c10-441d-4386-93a6-6730fb7bcde0" (UID: "7df94c10-441d-4386-93a6-6730fb7bcde0"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.711413 5123 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6ee8fbd3-1f81-4666-96da-5afc70819f1a-kube-api-access-d4tqq" (OuterVolumeSpecName: "kube-api-access-d4tqq") pod "6ee8fbd3-1f81-4666-96da-5afc70819f1a" (UID: "6ee8fbd3-1f81-4666-96da-5afc70819f1a"). InnerVolumeSpecName "kube-api-access-d4tqq". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.711434 5123 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c5f2bfad-70f6-4185-a3d9-81ce12720767-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "c5f2bfad-70f6-4185-a3d9-81ce12720767" (UID: "c5f2bfad-70f6-4185-a3d9-81ce12720767"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.702386 5123 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/736c54fe-349c-4bb9-870a-d1c1d1c03831-tmp" (OuterVolumeSpecName: "tmp") pod "736c54fe-349c-4bb9-870a-d1c1d1c03831" (UID: "736c54fe-349c-4bb9-870a-d1c1d1c03831"). InnerVolumeSpecName "tmp". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.702509 5123 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-tmp" (OuterVolumeSpecName: "tmp") pod "b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a" (UID: "b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a"). InnerVolumeSpecName "tmp". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.702559 5123 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ce090a97-9ab6-4c40-a719-64ff2acd9778-signing-cabundle" (OuterVolumeSpecName: "signing-cabundle") pod "ce090a97-9ab6-4c40-a719-64ff2acd9778" (UID: "ce090a97-9ab6-4c40-a719-64ff2acd9778"). InnerVolumeSpecName "signing-cabundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.702791 5123 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "6edfcf45-925b-4eff-b940-95b6fc0b85d4" (UID: "6edfcf45-925b-4eff-b940-95b6fc0b85d4"). InnerVolumeSpecName "v4-0-config-user-template-error". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.702882 5123 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a555ff2e-0be6-46d5-897d-863bb92ae2b3-config" (OuterVolumeSpecName: "config") pod "a555ff2e-0be6-46d5-897d-863bb92ae2b3" (UID: "a555ff2e-0be6-46d5-897d-863bb92ae2b3"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.711606 5123 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/92dfbade-90b6-4169-8c07-72cff7f2c82b-tmp-dir" (OuterVolumeSpecName: "tmp-dir") pod "92dfbade-90b6-4169-8c07-72cff7f2c82b" (UID: "92dfbade-90b6-4169-8c07-72cff7f2c82b"). InnerVolumeSpecName "tmp-dir". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.702893 5123 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "d45be74c-0d98-4d18-90e4-f7ef1b6daaf7" (UID: "d45be74c-0d98-4d18-90e4-f7ef1b6daaf7"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.703005 5123 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7afa918d-be67-40a6-803c-d3b0ae99d815-config" (OuterVolumeSpecName: "config") pod "7afa918d-be67-40a6-803c-d3b0ae99d815" (UID: "7afa918d-be67-40a6-803c-d3b0ae99d815"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.703100 5123 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/42a11a02-47e1-488f-b270-2679d3298b0e-kube-api-access-qgrkj" (OuterVolumeSpecName: "kube-api-access-qgrkj") pod "42a11a02-47e1-488f-b270-2679d3298b0e" (UID: "42a11a02-47e1-488f-b270-2679d3298b0e"). InnerVolumeSpecName "kube-api-access-qgrkj". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.703352 5123 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9e9b5059-1b3e-4067-a63d-2952cbe863af-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.703965 5123 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/593a3561-7760-45c5-8f91-5aaef7475d0f-node-bootstrap-token" (OuterVolumeSpecName: "node-bootstrap-token") pod "593a3561-7760-45c5-8f91-5aaef7475d0f" (UID: "593a3561-7760-45c5-8f91-5aaef7475d0f"). InnerVolumeSpecName "node-bootstrap-token". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.704090 5123 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a555ff2e-0be6-46d5-897d-863bb92ae2b3-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "a555ff2e-0be6-46d5-897d-863bb92ae2b3" (UID: "a555ff2e-0be6-46d5-897d-863bb92ae2b3"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.704385 5123 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d19cb085-0c5b-4810-b654-ce7923221d90-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "d19cb085-0c5b-4810-b654-ce7923221d90" (UID: "d19cb085-0c5b-4810-b654-ce7923221d90"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.704376 5123 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "6edfcf45-925b-4eff-b940-95b6fc0b85d4" (UID: "6edfcf45-925b-4eff-b940-95b6fc0b85d4"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.705122 5123 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/92dfbade-90b6-4169-8c07-72cff7f2c82b-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "92dfbade-90b6-4169-8c07-72cff7f2c82b" (UID: "92dfbade-90b6-4169-8c07-72cff7f2c82b"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.705405 5123 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/01080b46-74f1-4191-8755-5152a57b3b25-config" (OuterVolumeSpecName: "config") pod "01080b46-74f1-4191-8755-5152a57b3b25" (UID: "01080b46-74f1-4191-8755-5152a57b3b25"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.706043 5123 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-kube-api-access-qqbfk" (OuterVolumeSpecName: "kube-api-access-qqbfk") pod "b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a" (UID: "b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a"). InnerVolumeSpecName "kube-api-access-qqbfk". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.706114 5123 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/18f80adb-c1c3-49ba-8ee4-932c851d3897-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "18f80adb-c1c3-49ba-8ee4-932c851d3897" (UID: "18f80adb-c1c3-49ba-8ee4-932c851d3897"). InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.706320 5123 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a7a88189-c967-4640-879e-27665747f20c-apiservice-cert" (OuterVolumeSpecName: "apiservice-cert") pod "a7a88189-c967-4640-879e-27665747f20c" (UID: "a7a88189-c967-4640-879e-27665747f20c"). InnerVolumeSpecName "apiservice-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.706516 5123 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "6edfcf45-925b-4eff-b940-95b6fc0b85d4" (UID: "6edfcf45-925b-4eff-b940-95b6fc0b85d4"). InnerVolumeSpecName "v4-0-config-system-cliconfig". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.706764 5123 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2325ffef-9d5b-447f-b00e-3efc429acefe-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "2325ffef-9d5b-447f-b00e-3efc429acefe" (UID: "2325ffef-9d5b-447f-b00e-3efc429acefe"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.706795 5123 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/301e1965-1754-483d-b6cc-bfae7038bbca-tmpfs" (OuterVolumeSpecName: "tmpfs") pod "301e1965-1754-483d-b6cc-bfae7038bbca" (UID: "301e1965-1754-483d-b6cc-bfae7038bbca"). InnerVolumeSpecName "tmpfs". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.706920 5123 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0dd0fbac-8c0d-4228-8faa-abbeedabf7db-webhook-certs" (OuterVolumeSpecName: "webhook-certs") pod "0dd0fbac-8c0d-4228-8faa-abbeedabf7db" (UID: "0dd0fbac-8c0d-4228-8faa-abbeedabf7db"). InnerVolumeSpecName "webhook-certs". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.706950 5123 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d565531a-ff86-4608-9d19-767de01ac31b-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "d565531a-ff86-4608-9d19-767de01ac31b" (UID: "d565531a-ff86-4608-9d19-767de01ac31b"). InnerVolumeSpecName "proxy-tls". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.709532 5123 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d565531a-ff86-4608-9d19-767de01ac31b-images" (OuterVolumeSpecName: "images") pod "d565531a-ff86-4608-9d19-767de01ac31b" (UID: "d565531a-ff86-4608-9d19-767de01ac31b"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.707180 5123 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "7fcc6409-8a0f-44c3-89e7-5aecd7610f8a" (UID: "7fcc6409-8a0f-44c3-89e7-5aecd7610f8a"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.707208 5123 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a" (UID: "b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.711927 5123 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/736c54fe-349c-4bb9-870a-d1c1d1c03831-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "736c54fe-349c-4bb9-870a-d1c1d1c03831" (UID: "736c54fe-349c-4bb9-870a-d1c1d1c03831"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.707565 5123 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/567683bd-0efc-4f21-b076-e28559628404-etcd-ca" (OuterVolumeSpecName: "etcd-ca") pod "567683bd-0efc-4f21-b076-e28559628404" (UID: "567683bd-0efc-4f21-b076-e28559628404"). InnerVolumeSpecName "etcd-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.707821 5123 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/81e39f7b-62e4-4fc9-992a-6535ce127a02-multus-daemon-config" (OuterVolumeSpecName: "multus-daemon-config") pod "81e39f7b-62e4-4fc9-992a-6535ce127a02" (UID: "81e39f7b-62e4-4fc9-992a-6535ce127a02"). InnerVolumeSpecName "multus-daemon-config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.711947 5123 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/af33e427-6803-48c2-a76a-dd9deb7cbf9a-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "af33e427-6803-48c2-a76a-dd9deb7cbf9a" (UID: "af33e427-6803-48c2-a76a-dd9deb7cbf9a"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.708094 5123 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/567683bd-0efc-4f21-b076-e28559628404-tmp-dir" (OuterVolumeSpecName: "tmp-dir") pod "567683bd-0efc-4f21-b076-e28559628404" (UID: "567683bd-0efc-4f21-b076-e28559628404"). InnerVolumeSpecName "tmp-dir". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.708115 5123 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca" (UID: "6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.709064 5123 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/af33e427-6803-48c2-a76a-dd9deb7cbf9a-kube-api-access-z5rsr" (OuterVolumeSpecName: "kube-api-access-z5rsr") pod "af33e427-6803-48c2-a76a-dd9deb7cbf9a" (UID: "af33e427-6803-48c2-a76a-dd9deb7cbf9a"). InnerVolumeSpecName "kube-api-access-z5rsr". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.699174 5123 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/869851b9-7ffb-4af0-b166-1d8aa40a5f80-cni-sysctl-allowlist" (OuterVolumeSpecName: "cni-sysctl-allowlist") pod "869851b9-7ffb-4af0-b166-1d8aa40a5f80" (UID: "869851b9-7ffb-4af0-b166-1d8aa40a5f80"). InnerVolumeSpecName "cni-sysctl-allowlist". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.709663 5123 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-machine-approver-tls" (OuterVolumeSpecName: "machine-approver-tls") pod "fc8db2c7-859d-47b3-a900-2bd0c0b2973b" (UID: "fc8db2c7-859d-47b3-a900-2bd0c0b2973b"). InnerVolumeSpecName "machine-approver-tls". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.709304 5123 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7afa918d-be67-40a6-803c-d3b0ae99d815-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7afa918d-be67-40a6-803c-d3b0ae99d815" (UID: "7afa918d-be67-40a6-803c-d3b0ae99d815"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.709758 5123 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/af33e427-6803-48c2-a76a-dd9deb7cbf9a-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "af33e427-6803-48c2-a76a-dd9deb7cbf9a" (UID: "af33e427-6803-48c2-a76a-dd9deb7cbf9a"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.710004 5123 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-config" (OuterVolumeSpecName: "config") pod "fc8db2c7-859d-47b3-a900-2bd0c0b2973b" (UID: "fc8db2c7-859d-47b3-a900-2bd0c0b2973b"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.710030 5123 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/31fa8943-81cc-4750-a0b7-0fa9ab5af883-utilities" (OuterVolumeSpecName: "utilities") pod "31fa8943-81cc-4750-a0b7-0fa9ab5af883" (UID: "31fa8943-81cc-4750-a0b7-0fa9ab5af883"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.710045 5123 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-kube-api-access-ddlk9" (OuterVolumeSpecName: "kube-api-access-ddlk9") pod "7fcc6409-8a0f-44c3-89e7-5aecd7610f8a" (UID: "7fcc6409-8a0f-44c3-89e7-5aecd7610f8a"). InnerVolumeSpecName "kube-api-access-ddlk9". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.710061 5123 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5ebfebf6-3ecd-458e-943f-bb25b52e2718-kube-api-access-l87hs" (OuterVolumeSpecName: "kube-api-access-l87hs") pod "5ebfebf6-3ecd-458e-943f-bb25b52e2718" (UID: "5ebfebf6-3ecd-458e-943f-bb25b52e2718"). InnerVolumeSpecName "kube-api-access-l87hs". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.710159 5123 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ce090a97-9ab6-4c40-a719-64ff2acd9778-signing-key" (OuterVolumeSpecName: "signing-key") pod "ce090a97-9ab6-4c40-a719-64ff2acd9778" (UID: "ce090a97-9ab6-4c40-a719-64ff2acd9778"). InnerVolumeSpecName "signing-key". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.710375 5123 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7df94c10-441d-4386-93a6-6730fb7bcde0-ovn-control-plane-metrics-cert" (OuterVolumeSpecName: "ovn-control-plane-metrics-cert") pod "7df94c10-441d-4386-93a6-6730fb7bcde0" (UID: "7df94c10-441d-4386-93a6-6730fb7bcde0"). InnerVolumeSpecName "ovn-control-plane-metrics-cert". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.709295 5123 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9e9b5059-1b3e-4067-a63d-2952cbe863af-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.712355 5123 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ce090a97-9ab6-4c40-a719-64ff2acd9778-kube-api-access-xnxbn" (OuterVolumeSpecName: "kube-api-access-xnxbn") pod "ce090a97-9ab6-4c40-a719-64ff2acd9778" (UID: "ce090a97-9ab6-4c40-a719-64ff2acd9778"). InnerVolumeSpecName "kube-api-access-xnxbn". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.712687 5123 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-image-import-ca" (OuterVolumeSpecName: "image-import-ca") pod "d19cb085-0c5b-4810-b654-ce7923221d90" (UID: "d19cb085-0c5b-4810-b654-ce7923221d90"). InnerVolumeSpecName "image-import-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.712719 5123 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "6edfcf45-925b-4eff-b940-95b6fc0b85d4" (UID: "6edfcf45-925b-4eff-b940-95b6fc0b85d4"). InnerVolumeSpecName "audit-policies". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.712560 5123 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e1d2a42d-af1d-4054-9618-ab545e0ed8b7-kube-api-access-9z4sw" (OuterVolumeSpecName: "kube-api-access-9z4sw") pod "e1d2a42d-af1d-4054-9618-ab545e0ed8b7" (UID: "e1d2a42d-af1d-4054-9618-ab545e0ed8b7"). InnerVolumeSpecName "kube-api-access-9z4sw". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.713071 5123 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f559dfa3-3917-43a2-97f6-61ddfda10e93-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "f559dfa3-3917-43a2-97f6-61ddfda10e93" (UID: "f559dfa3-3917-43a2-97f6-61ddfda10e93"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.713251 5123 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/18f80adb-c1c3-49ba-8ee4-932c851d3897-kube-api-access-wbmqg" (OuterVolumeSpecName: "kube-api-access-wbmqg") pod "18f80adb-c1c3-49ba-8ee4-932c851d3897" (UID: "18f80adb-c1c3-49ba-8ee4-932c851d3897"). InnerVolumeSpecName "kube-api-access-wbmqg". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.713687 5123 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2325ffef-9d5b-447f-b00e-3efc429acefe-kube-api-access-zg8nc" (OuterVolumeSpecName: "kube-api-access-zg8nc") pod "2325ffef-9d5b-447f-b00e-3efc429acefe" (UID: "2325ffef-9d5b-447f-b00e-3efc429acefe"). InnerVolumeSpecName "kube-api-access-zg8nc". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.701906 5123 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a555ff2e-0be6-46d5-897d-863bb92ae2b3-client-ca" (OuterVolumeSpecName: "client-ca") pod "a555ff2e-0be6-46d5-897d-863bb92ae2b3" (UID: "a555ff2e-0be6-46d5-897d-863bb92ae2b3"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.714369 5123 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6rmnv\" (UniqueName: \"kubernetes.io/projected/b605f283-6f2e-42da-a838-54421690f7d0-kube-api-access-6rmnv\") pod \"b605f283-6f2e-42da-a838-54421690f7d0\" (UID: \"b605f283-6f2e-42da-a838-54421690f7d0\") " Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.714440 5123 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/c491984c-7d4b-44aa-8c1e-d7974424fa47-machine-api-operator-tls\") pod \"c491984c-7d4b-44aa-8c1e-d7974424fa47\" (UID: \"c491984c-7d4b-44aa-8c1e-d7974424fa47\") " Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.714608 5123 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a-utilities" (OuterVolumeSpecName: "utilities") pod "71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a" (UID: "71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.714877 5123 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-kube-api-access-l9stx" (OuterVolumeSpecName: "kube-api-access-l9stx") pod "6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca" (UID: "6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca"). InnerVolumeSpecName "kube-api-access-l9stx". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.714885 5123 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-kube-api-access-5lcfw" (OuterVolumeSpecName: "kube-api-access-5lcfw") pod "dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9" (UID: "dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9"). InnerVolumeSpecName "kube-api-access-5lcfw". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.714900 5123 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/149b3c48-e17c-4a66-a835-d86dabf6ff13-utilities" (OuterVolumeSpecName: "utilities") pod "149b3c48-e17c-4a66-a835-d86dabf6ff13" (UID: "149b3c48-e17c-4a66-a835-d86dabf6ff13"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.715117 5123 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a7a88189-c967-4640-879e-27665747f20c-kube-api-access-8nspp" (OuterVolumeSpecName: "kube-api-access-8nspp") pod "a7a88189-c967-4640-879e-27665747f20c" (UID: "a7a88189-c967-4640-879e-27665747f20c"). InnerVolumeSpecName "kube-api-access-8nspp". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.715129 5123 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5lcfw\" (UniqueName: \"kubernetes.io/projected/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-kube-api-access-5lcfw\") pod \"dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9\" (UID: \"dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9\") " Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.715329 5123 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7599e0b6-bddf-4def-b7f2-0b32206e8651-config" (OuterVolumeSpecName: "config") pod "7599e0b6-bddf-4def-b7f2-0b32206e8651" (UID: "7599e0b6-bddf-4def-b7f2-0b32206e8651"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.714927 5123 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/584e1f4a-8205-47d7-8efb-3afc6017c4c9-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "584e1f4a-8205-47d7-8efb-3afc6017c4c9" (UID: "584e1f4a-8205-47d7-8efb-3afc6017c4c9"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.715439 5123 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a7a88189-c967-4640-879e-27665747f20c-tmpfs" (OuterVolumeSpecName: "tmpfs") pod "a7a88189-c967-4640-879e-27665747f20c" (UID: "a7a88189-c967-4640-879e-27665747f20c"). InnerVolumeSpecName "tmpfs". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.714937 5123 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d7e8f42f-dc0e-424b-bb56-5ec849834888-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "d7e8f42f-dc0e-424b-bb56-5ec849834888" (UID: "d7e8f42f-dc0e-424b-bb56-5ec849834888"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.715641 5123 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff-utilities" (OuterVolumeSpecName: "utilities") pod "9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff" (UID: "9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.715649 5123 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b605f283-6f2e-42da-a838-54421690f7d0-kube-api-access-6rmnv" (OuterVolumeSpecName: "kube-api-access-6rmnv") pod "b605f283-6f2e-42da-a838-54421690f7d0" (UID: "b605f283-6f2e-42da-a838-54421690f7d0"). InnerVolumeSpecName "kube-api-access-6rmnv". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.715668 5123 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09cfa50b-4138-4585-a53e-64dd3ab73335-kube-api-access-zsb9b" (OuterVolumeSpecName: "kube-api-access-zsb9b") pod "09cfa50b-4138-4585-a53e-64dd3ab73335" (UID: "09cfa50b-4138-4585-a53e-64dd3ab73335"). InnerVolumeSpecName "kube-api-access-zsb9b". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.715639 5123 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9e9b5059-1b3e-4067-a63d-2952cbe863af-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.715404 5123 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c491984c-7d4b-44aa-8c1e-d7974424fa47-machine-api-operator-tls" (OuterVolumeSpecName: "machine-api-operator-tls") pod "c491984c-7d4b-44aa-8c1e-d7974424fa47" (UID: "c491984c-7d4b-44aa-8c1e-d7974424fa47"). InnerVolumeSpecName "machine-api-operator-tls". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.715692 5123 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/584e1f4a-8205-47d7-8efb-3afc6017c4c9-utilities" (OuterVolumeSpecName: "utilities") pod "584e1f4a-8205-47d7-8efb-3afc6017c4c9" (UID: "584e1f4a-8205-47d7-8efb-3afc6017c4c9"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.715699 5123 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wj4qr\" (UniqueName: \"kubernetes.io/projected/149b3c48-e17c-4a66-a835-d86dabf6ff13-kube-api-access-wj4qr\") pod \"149b3c48-e17c-4a66-a835-d86dabf6ff13\" (UID: \"149b3c48-e17c-4a66-a835-d86dabf6ff13\") " Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.715731 5123 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cc85e424-18b2-4924-920b-bd291a8c4b01-catalog-content\") pod \"cc85e424-18b2-4924-920b-bd291a8c4b01\" (UID: \"cc85e424-18b2-4924-920b-bd291a8c4b01\") " Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.715819 5123 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/d7e8f42f-dc0e-424b-bb56-5ec849834888-service-ca\") pod \"d7e8f42f-dc0e-424b-bb56-5ec849834888\" (UID: \"d7e8f42f-dc0e-424b-bb56-5ec849834888\") " Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.715861 5123 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-etcd-serving-ca\") pod \"d19cb085-0c5b-4810-b654-ce7923221d90\" (UID: \"d19cb085-0c5b-4810-b654-ce7923221d90\") " Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.715888 5123 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pddnv\" (UniqueName: \"kubernetes.io/projected/e093be35-bb62-4843-b2e8-094545761610-kube-api-access-pddnv\") pod \"e093be35-bb62-4843-b2e8-094545761610\" (UID: \"e093be35-bb62-4843-b2e8-094545761610\") " Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.715921 5123 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume 
\"config\" (UniqueName: \"kubernetes.io/configmap/567683bd-0efc-4f21-b076-e28559628404-config\") pod \"567683bd-0efc-4f21-b076-e28559628404\" (UID: \"567683bd-0efc-4f21-b076-e28559628404\") " Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.715946 5123 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sbc2l\" (UniqueName: \"kubernetes.io/projected/593a3561-7760-45c5-8f91-5aaef7475d0f-kube-api-access-sbc2l\") pod \"593a3561-7760-45c5-8f91-5aaef7475d0f\" (UID: \"593a3561-7760-45c5-8f91-5aaef7475d0f\") " Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.715972 5123 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/f559dfa3-3917-43a2-97f6-61ddfda10e93-etcd-serving-ca\") pod \"f559dfa3-3917-43a2-97f6-61ddfda10e93\" (UID: \"f559dfa3-3917-43a2-97f6-61ddfda10e93\") " Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.715997 5123 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/5ebfebf6-3ecd-458e-943f-bb25b52e2718-serviceca\") pod \"5ebfebf6-3ecd-458e-943f-bb25b52e2718\" (UID: \"5ebfebf6-3ecd-458e-943f-bb25b52e2718\") " Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.716025 5123 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c491984c-7d4b-44aa-8c1e-d7974424fa47-config\") pod \"c491984c-7d4b-44aa-8c1e-d7974424fa47\" (UID: \"c491984c-7d4b-44aa-8c1e-d7974424fa47\") " Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.716049 5123 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/593a3561-7760-45c5-8f91-5aaef7475d0f-certs\") pod \"593a3561-7760-45c5-8f91-5aaef7475d0f\" (UID: \"593a3561-7760-45c5-8f91-5aaef7475d0f\") " Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.716070 5123 
reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/567683bd-0efc-4f21-b076-e28559628404-serving-cert\") pod \"567683bd-0efc-4f21-b076-e28559628404\" (UID: \"567683bd-0efc-4f21-b076-e28559628404\") " Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.717366 5123 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-bound-sa-token\") pod \"20ce4d18-fe25-4696-ad7c-1bd2d6200a3e\" (UID: \"20ce4d18-fe25-4696-ad7c-1bd2d6200a3e\") " Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.717417 5123 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.718400 5123 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-service-ca-bundle\") pod \"dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9\" (UID: \"dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9\") " Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.718442 5123 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.718462 5123 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.718480 5123 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09cfa50b-4138-4585-a53e-64dd3ab73335-serving-cert\") pod \"09cfa50b-4138-4585-a53e-64dd3ab73335\" (UID: \"09cfa50b-4138-4585-a53e-64dd3ab73335\") " Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.718516 5123 reconciler_common.go:162] "operationExecutor.UnmountVolume started for 
volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-service-ca\") pod \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\" (UID: \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\") " Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.718551 5123 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/d19cb085-0c5b-4810-b654-ce7923221d90-etcd-client\") pod \"d19cb085-0c5b-4810-b654-ce7923221d90\" (UID: \"d19cb085-0c5b-4810-b654-ce7923221d90\") " Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.718574 5123 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/d19cb085-0c5b-4810-b654-ce7923221d90-encryption-config\") pod \"d19cb085-0c5b-4810-b654-ce7923221d90\" (UID: \"d19cb085-0c5b-4810-b654-ce7923221d90\") " Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.718600 5123 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hckvg\" (UniqueName: \"kubernetes.io/projected/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-kube-api-access-hckvg\") pod \"fc8db2c7-859d-47b3-a900-2bd0c0b2973b\" (UID: \"fc8db2c7-859d-47b3-a900-2bd0c0b2973b\") " Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.718626 5123 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-config\") pod \"d19cb085-0c5b-4810-b654-ce7923221d90\" (UID: \"d19cb085-0c5b-4810-b654-ce7923221d90\") " Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.718655 5123 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-template-login\") pod \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\" (UID: 
\"6edfcf45-925b-4eff-b940-95b6fc0b85d4\") " Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.718680 5123 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/b4750666-1362-4001-abd0-6f89964cc621-mcc-auth-proxy-config\") pod \"b4750666-1362-4001-abd0-6f89964cc621\" (UID: \"b4750666-1362-4001-abd0-6f89964cc621\") " Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.718706 5123 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-template-provider-selection\") pod \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\" (UID: \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\") " Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.718731 5123 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dztfv\" (UniqueName: \"kubernetes.io/projected/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-kube-api-access-dztfv\") pod \"d45be74c-0d98-4d18-90e4-f7ef1b6daaf7\" (UID: \"d45be74c-0d98-4d18-90e4-f7ef1b6daaf7\") " Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.718822 5123 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9vsz9\" (UniqueName: \"kubernetes.io/projected/c491984c-7d4b-44aa-8c1e-d7974424fa47-kube-api-access-9vsz9\") pod \"c491984c-7d4b-44aa-8c1e-d7974424fa47\" (UID: \"c491984c-7d4b-44aa-8c1e-d7974424fa47\") " Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.718848 5123 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/a7a88189-c967-4640-879e-27665747f20c-webhook-cert\") pod \"a7a88189-c967-4640-879e-27665747f20c\" (UID: \"a7a88189-c967-4640-879e-27665747f20c\") " Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.718870 5123 reconciler_common.go:162] 
"operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/a555ff2e-0be6-46d5-897d-863bb92ae2b3-proxy-ca-bundles\") pod \"a555ff2e-0be6-46d5-897d-863bb92ae2b3\" (UID: \"a555ff2e-0be6-46d5-897d-863bb92ae2b3\") " Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.718896 5123 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-console-config\") pod \"6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca\" (UID: \"6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca\") " Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.718931 5123 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-grwfz\" (UniqueName: \"kubernetes.io/projected/31fa8943-81cc-4750-a0b7-0fa9ab5af883-kube-api-access-grwfz\") pod \"31fa8943-81cc-4750-a0b7-0fa9ab5af883\" (UID: \"31fa8943-81cc-4750-a0b7-0fa9ab5af883\") " Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.718959 5123 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-trusted-ca-bundle\") pod \"dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9\" (UID: \"dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9\") " Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.718980 5123 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-audit\") pod \"d19cb085-0c5b-4810-b654-ce7923221d90\" (UID: \"d19cb085-0c5b-4810-b654-ce7923221d90\") " Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.719005 5123 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/a208c9c2-333b-4b4a-be0d-bc32ec38a821-package-server-manager-serving-cert\") pod 
\"a208c9c2-333b-4b4a-be0d-bc32ec38a821\" (UID: \"a208c9c2-333b-4b4a-be0d-bc32ec38a821\") " Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.719037 5123 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xfp5s\" (UniqueName: \"kubernetes.io/projected/cc85e424-18b2-4924-920b-bd291a8c4b01-kube-api-access-xfp5s\") pod \"cc85e424-18b2-4924-920b-bd291a8c4b01\" (UID: \"cc85e424-18b2-4924-920b-bd291a8c4b01\") " Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.719062 5123 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zth6t\" (UniqueName: \"kubernetes.io/projected/6077b63e-53a2-4f96-9d56-1ce0324e4913-kube-api-access-zth6t\") pod \"6077b63e-53a2-4f96-9d56-1ce0324e4913\" (UID: \"6077b63e-53a2-4f96-9d56-1ce0324e4913\") " Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.719085 5123 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8nb9c\" (UniqueName: \"kubernetes.io/projected/6edfcf45-925b-4eff-b940-95b6fc0b85d4-kube-api-access-8nb9c\") pod \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\" (UID: \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\") " Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.719113 5123 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b605f283-6f2e-42da-a838-54421690f7d0-catalog-content\") pod \"b605f283-6f2e-42da-a838-54421690f7d0\" (UID: \"b605f283-6f2e-42da-a838-54421690f7d0\") " Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.719139 5123 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-idp-0-file-data\") pod \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\" (UID: \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\") " Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 
15:21:00.719162 5123 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nmmzf\" (UniqueName: \"kubernetes.io/projected/7df94c10-441d-4386-93a6-6730fb7bcde0-kube-api-access-nmmzf\") pod \"7df94c10-441d-4386-93a6-6730fb7bcde0\" (UID: \"7df94c10-441d-4386-93a6-6730fb7bcde0\") " Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.719185 5123 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/6077b63e-53a2-4f96-9d56-1ce0324e4913-metrics-tls\") pod \"6077b63e-53a2-4f96-9d56-1ce0324e4913\" (UID: \"6077b63e-53a2-4f96-9d56-1ce0324e4913\") " Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.719208 5123 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tkdh6\" (UniqueName: \"kubernetes.io/projected/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-kube-api-access-tkdh6\") pod \"20ce4d18-fe25-4696-ad7c-1bd2d6200a3e\" (UID: \"20ce4d18-fe25-4696-ad7c-1bd2d6200a3e\") " Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.719260 5123 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4-metrics-certs\") pod \"f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4\" (UID: \"f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4\") " Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.719284 5123 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/9f71a554-e414-4bc3-96d2-674060397afe-bound-sa-token\") pod \"9f71a554-e414-4bc3-96d2-674060397afe\" (UID: \"9f71a554-e414-4bc3-96d2-674060397afe\") " Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.719308 5123 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c5f2bfad-70f6-4185-a3d9-81ce12720767-serving-cert\") pod 
\"c5f2bfad-70f6-4185-a3d9-81ce12720767\" (UID: \"c5f2bfad-70f6-4185-a3d9-81ce12720767\") " Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.715522 5123 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/94a6e063-3d1a-4d44-875d-185291448c31-kube-api-access-4hb7m" (OuterVolumeSpecName: "kube-api-access-4hb7m") pod "94a6e063-3d1a-4d44-875d-185291448c31" (UID: "94a6e063-3d1a-4d44-875d-185291448c31"). InnerVolumeSpecName "kube-api-access-4hb7m". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.719335 5123 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/a555ff2e-0be6-46d5-897d-863bb92ae2b3-tmp\") pod \"a555ff2e-0be6-46d5-897d-863bb92ae2b3\" (UID: \"a555ff2e-0be6-46d5-897d-863bb92ae2b3\") " Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.719360 5123 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09cfa50b-4138-4585-a53e-64dd3ab73335-config\") pod \"09cfa50b-4138-4585-a53e-64dd3ab73335\" (UID: \"09cfa50b-4138-4585-a53e-64dd3ab73335\") " Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.719385 5123 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f65c0ac1-8bca-454d-a2e6-e35cb418beac-config\") pod \"f65c0ac1-8bca-454d-a2e6-e35cb418beac\" (UID: \"f65c0ac1-8bca-454d-a2e6-e35cb418beac\") " Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.719407 5123 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/16bdd140-dce1-464c-ab47-dd5798d1d256-serving-cert\") pod \"16bdd140-dce1-464c-ab47-dd5798d1d256\" (UID: \"16bdd140-dce1-464c-ab47-dd5798d1d256\") " Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.719435 5123 
reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-marketplace-trusted-ca\") pod \"b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a\" (UID: \"b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a\") " Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.719463 5123 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-tmp\") pod \"20ce4d18-fe25-4696-ad7c-1bd2d6200a3e\" (UID: \"20ce4d18-fe25-4696-ad7c-1bd2d6200a3e\") " Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.719484 5123 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/f7e2c886-118e-43bb-bef1-c78134de392b-tmp-dir\") pod \"f7e2c886-118e-43bb-bef1-c78134de392b\" (UID: \"f7e2c886-118e-43bb-bef1-c78134de392b\") " Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.719506 5123 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w94wk\" (UniqueName: \"kubernetes.io/projected/01080b46-74f1-4191-8755-5152a57b3b25-kube-api-access-w94wk\") pod \"01080b46-74f1-4191-8755-5152a57b3b25\" (UID: \"01080b46-74f1-4191-8755-5152a57b3b25\") " Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.719533 5123 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/42a11a02-47e1-488f-b270-2679d3298b0e-control-plane-machine-set-operator-tls\") pod \"42a11a02-47e1-488f-b270-2679d3298b0e\" (UID: \"42a11a02-47e1-488f-b270-2679d3298b0e\") " Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.719560 5123 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-twvbl\" (UniqueName: 
\"kubernetes.io/projected/b4750666-1362-4001-abd0-6f89964cc621-kube-api-access-twvbl\") pod \"b4750666-1362-4001-abd0-6f89964cc621\" (UID: \"b4750666-1362-4001-abd0-6f89964cc621\") " Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.719588 5123 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-26xrl\" (UniqueName: \"kubernetes.io/projected/a208c9c2-333b-4b4a-be0d-bc32ec38a821-kube-api-access-26xrl\") pod \"a208c9c2-333b-4b4a-be0d-bc32ec38a821\" (UID: \"a208c9c2-333b-4b4a-be0d-bc32ec38a821\") " Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.719614 5123 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-profile-collector-cert\") pod \"7fcc6409-8a0f-44c3-89e7-5aecd7610f8a\" (UID: \"7fcc6409-8a0f-44c3-89e7-5aecd7610f8a\") " Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.719791 5123 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ks6v2\" (UniqueName: \"kubernetes.io/projected/71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a-kube-api-access-ks6v2\") pod \"71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a\" (UID: \"71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a\") " Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.719889 5123 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/2325ffef-9d5b-447f-b00e-3efc429acefe-trusted-ca\") pod \"2325ffef-9d5b-447f-b00e-3efc429acefe\" (UID: \"2325ffef-9d5b-447f-b00e-3efc429acefe\") " Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.718484 5123 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.720696 5123 setters.go:618] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:21:00Z","lastTransitionTime":"2025-12-12T15:21:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.721131 5123 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/c491984c-7d4b-44aa-8c1e-d7974424fa47-images\") pod \"c491984c-7d4b-44aa-8c1e-d7974424fa47\" (UID: \"c491984c-7d4b-44aa-8c1e-d7974424fa47\") " Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.721179 5123 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/94a6e063-3d1a-4d44-875d-185291448c31-utilities\") pod \"94a6e063-3d1a-4d44-875d-185291448c31\" (UID: \"94a6e063-3d1a-4d44-875d-185291448c31\") " Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.721204 5123 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01080b46-74f1-4191-8755-5152a57b3b25-serving-cert\") pod \"01080b46-74f1-4191-8755-5152a57b3b25\" (UID: \"01080b46-74f1-4191-8755-5152a57b3b25\") " Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.721249 5123 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-auth-proxy-config\") pod \"fc8db2c7-859d-47b3-a900-2bd0c0b2973b\" (UID: \"fc8db2c7-859d-47b3-a900-2bd0c0b2973b\") " Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.721273 5123 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: 
\"kubernetes.io/configmap/f559dfa3-3917-43a2-97f6-61ddfda10e93-audit-policies\") pod \"f559dfa3-3917-43a2-97f6-61ddfda10e93\" (UID: \"f559dfa3-3917-43a2-97f6-61ddfda10e93\") " Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.721294 5123 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b605f283-6f2e-42da-a838-54421690f7d0-utilities\") pod \"b605f283-6f2e-42da-a838-54421690f7d0\" (UID: \"b605f283-6f2e-42da-a838-54421690f7d0\") " Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.721317 5123 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d7e8f42f-dc0e-424b-bb56-5ec849834888-serving-cert\") pod \"d7e8f42f-dc0e-424b-bb56-5ec849834888\" (UID: \"d7e8f42f-dc0e-424b-bb56-5ec849834888\") " Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.721338 5123 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/18f80adb-c1c3-49ba-8ee4-932c851d3897-default-certificate\") pod \"18f80adb-c1c3-49ba-8ee4-932c851d3897\" (UID: \"18f80adb-c1c3-49ba-8ee4-932c851d3897\") " Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.721368 5123 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-ocp-branding-template\") pod \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\" (UID: \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\") " Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.721390 5123 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/b4750666-1362-4001-abd0-6f89964cc621-proxy-tls\") pod \"b4750666-1362-4001-abd0-6f89964cc621\" (UID: \"b4750666-1362-4001-abd0-6f89964cc621\") " Dec 12 15:21:00 crc 
kubenswrapper[5123]: I1212 15:21:00.721412 5123 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/7afa918d-be67-40a6-803c-d3b0ae99d815-kube-api-access\") pod \"7afa918d-be67-40a6-803c-d3b0ae99d815\" (UID: \"7afa918d-be67-40a6-803c-d3b0ae99d815\") " Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.721434 5123 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/149b3c48-e17c-4a66-a835-d86dabf6ff13-catalog-content\") pod \"149b3c48-e17c-4a66-a835-d86dabf6ff13\" (UID: \"149b3c48-e17c-4a66-a835-d86dabf6ff13\") " Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.721453 5123 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-config\") pod \"dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9\" (UID: \"dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9\") " Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.721476 5123 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f65c0ac1-8bca-454d-a2e6-e35cb418beac-serving-cert\") pod \"f65c0ac1-8bca-454d-a2e6-e35cb418beac\" (UID: \"f65c0ac1-8bca-454d-a2e6-e35cb418beac\") " Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.721500 5123 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m5lgh\" (UniqueName: \"kubernetes.io/projected/d19cb085-0c5b-4810-b654-ce7923221d90-kube-api-access-m5lgh\") pod \"d19cb085-0c5b-4810-b654-ce7923221d90\" (UID: \"d19cb085-0c5b-4810-b654-ce7923221d90\") " Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.721521 5123 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rzt4w\" (UniqueName: 
\"kubernetes.io/projected/a52afe44-fb37-46ed-a1f8-bf39727a3cbe-kube-api-access-rzt4w\") pod \"a52afe44-fb37-46ed-a1f8-bf39727a3cbe\" (UID: \"a52afe44-fb37-46ed-a1f8-bf39727a3cbe\") " Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.721550 5123 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-99zj9\" (UniqueName: \"kubernetes.io/projected/d565531a-ff86-4608-9d19-767de01ac31b-kube-api-access-99zj9\") pod \"d565531a-ff86-4608-9d19-767de01ac31b\" (UID: \"d565531a-ff86-4608-9d19-767de01ac31b\") " Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.721599 5123 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7599e0b6-bddf-4def-b7f2-0b32206e8651-serving-cert\") pod \"7599e0b6-bddf-4def-b7f2-0b32206e8651\" (UID: \"7599e0b6-bddf-4def-b7f2-0b32206e8651\") " Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.721622 5123 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-oauth-serving-cert\") pod \"6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca\" (UID: \"6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca\") " Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.721640 5123 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-94l9h\" (UniqueName: \"kubernetes.io/projected/16bdd140-dce1-464c-ab47-dd5798d1d256-kube-api-access-94l9h\") pod \"16bdd140-dce1-464c-ab47-dd5798d1d256\" (UID: \"16bdd140-dce1-464c-ab47-dd5798d1d256\") " Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.721663 5123 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/31fa8943-81cc-4750-a0b7-0fa9ab5af883-catalog-content\") pod \"31fa8943-81cc-4750-a0b7-0fa9ab5af883\" (UID: \"31fa8943-81cc-4750-a0b7-0fa9ab5af883\") " Dec 12 15:21:00 
crc kubenswrapper[5123]: I1212 15:21:00.721690 5123 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/7df94c10-441d-4386-93a6-6730fb7bcde0-env-overrides\") pod \"7df94c10-441d-4386-93a6-6730fb7bcde0\" (UID: \"7df94c10-441d-4386-93a6-6730fb7bcde0\") " Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.721724 5123 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/e1d2a42d-af1d-4054-9618-ab545e0ed8b7-proxy-tls\") pod \"e1d2a42d-af1d-4054-9618-ab545e0ed8b7\" (UID: \"e1d2a42d-af1d-4054-9618-ab545e0ed8b7\") " Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.721753 5123 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/869851b9-7ffb-4af0-b166-1d8aa40a5f80-cni-binary-copy\") pod \"869851b9-7ffb-4af0-b166-1d8aa40a5f80\" (UID: \"869851b9-7ffb-4af0-b166-1d8aa40a5f80\") " Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.721784 5123 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/af33e427-6803-48c2-a76a-dd9deb7cbf9a-ovnkube-config\") pod \"af33e427-6803-48c2-a76a-dd9deb7cbf9a\" (UID: \"af33e427-6803-48c2-a76a-dd9deb7cbf9a\") " Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.721810 5123 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ftwb6\" (UniqueName: \"kubernetes.io/projected/9f71a554-e414-4bc3-96d2-674060397afe-kube-api-access-ftwb6\") pod \"9f71a554-e414-4bc3-96d2-674060397afe\" (UID: \"9f71a554-e414-4bc3-96d2-674060397afe\") " Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.721858 5123 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etcd-service-ca\" (UniqueName: 
\"kubernetes.io/configmap/567683bd-0efc-4f21-b076-e28559628404-etcd-service-ca\") pod \"567683bd-0efc-4f21-b076-e28559628404\" (UID: \"567683bd-0efc-4f21-b076-e28559628404\") " Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.721889 5123 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tknt7\" (UniqueName: \"kubernetes.io/projected/584e1f4a-8205-47d7-8efb-3afc6017c4c9-kube-api-access-tknt7\") pod \"584e1f4a-8205-47d7-8efb-3afc6017c4c9\" (UID: \"584e1f4a-8205-47d7-8efb-3afc6017c4c9\") " Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.721908 5123 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/301e1965-1754-483d-b6cc-bfae7038bbca-profile-collector-cert\") pod \"301e1965-1754-483d-b6cc-bfae7038bbca\" (UID: \"301e1965-1754-483d-b6cc-bfae7038bbca\") " Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.721929 5123 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xxfcv\" (UniqueName: \"kubernetes.io/projected/9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff-kube-api-access-xxfcv\") pod \"9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff\" (UID: \"9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff\") " Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.721949 5123 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-tmpfs\") pod \"7fcc6409-8a0f-44c3-89e7-5aecd7610f8a\" (UID: \"7fcc6409-8a0f-44c3-89e7-5aecd7610f8a\") " Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.717707 5123 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca" (UID: "6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca"). 
InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.717776 5123 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cc85e424-18b2-4924-920b-bd291a8c4b01-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "cc85e424-18b2-4924-920b-bd291a8c4b01" (UID: "cc85e424-18b2-4924-920b-bd291a8c4b01"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.716284 5123 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9e9b5059-1b3e-4067-a63d-2952cbe863af-kube-api-access-ws8zz" (OuterVolumeSpecName: "kube-api-access-ws8zz") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af"). InnerVolumeSpecName "kube-api-access-ws8zz". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.717837 5123 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "20ce4d18-fe25-4696-ad7c-1bd2d6200a3e" (UID: "20ce4d18-fe25-4696-ad7c-1bd2d6200a3e"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.717589 5123 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9e9b5059-1b3e-4067-a63d-2952cbe863af-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af"). InnerVolumeSpecName "registry-tls". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.718448 5123 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/869851b9-7ffb-4af0-b166-1d8aa40a5f80-whereabouts-flatfile-configmap" (OuterVolumeSpecName: "whereabouts-flatfile-configmap") pod "869851b9-7ffb-4af0-b166-1d8aa40a5f80" (UID: "869851b9-7ffb-4af0-b166-1d8aa40a5f80"). InnerVolumeSpecName "whereabouts-flatfile-configmap". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.718500 5123 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca" (UID: "6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.718259 5123 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/736c54fe-349c-4bb9-870a-d1c1d1c03831-config" (OuterVolumeSpecName: "config") pod "736c54fe-349c-4bb9-870a-d1c1d1c03831" (UID: "736c54fe-349c-4bb9-870a-d1c1d1c03831"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.717725 5123 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7599e0b6-bddf-4def-b7f2-0b32206e8651-kube-api-access-ptkcf" (OuterVolumeSpecName: "kube-api-access-ptkcf") pod "7599e0b6-bddf-4def-b7f2-0b32206e8651" (UID: "7599e0b6-bddf-4def-b7f2-0b32206e8651"). InnerVolumeSpecName "kube-api-access-ptkcf". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.719715 5123 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f65c0ac1-8bca-454d-a2e6-e35cb418beac-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "f65c0ac1-8bca-454d-a2e6-e35cb418beac" (UID: "f65c0ac1-8bca-454d-a2e6-e35cb418beac"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.719714 5123 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d565531a-ff86-4608-9d19-767de01ac31b-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "d565531a-ff86-4608-9d19-767de01ac31b" (UID: "d565531a-ff86-4608-9d19-767de01ac31b"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.720052 5123 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a555ff2e-0be6-46d5-897d-863bb92ae2b3-tmp" (OuterVolumeSpecName: "tmp") pod "a555ff2e-0be6-46d5-897d-863bb92ae2b3" (UID: "a555ff2e-0be6-46d5-897d-863bb92ae2b3"). InnerVolumeSpecName "tmp". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.720259 5123 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7df94c10-441d-4386-93a6-6730fb7bcde0-kube-api-access-nmmzf" (OuterVolumeSpecName: "kube-api-access-nmmzf") pod "7df94c10-441d-4386-93a6-6730fb7bcde0" (UID: "7df94c10-441d-4386-93a6-6730fb7bcde0"). InnerVolumeSpecName "kube-api-access-nmmzf". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.720482 5123 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09cfa50b-4138-4585-a53e-64dd3ab73335-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09cfa50b-4138-4585-a53e-64dd3ab73335" (UID: "09cfa50b-4138-4585-a53e-64dd3ab73335"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.720244 5123 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "d19cb085-0c5b-4810-b654-ce7923221d90" (UID: "d19cb085-0c5b-4810-b654-ce7923221d90"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.722128 5123 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/92dfbade-90b6-4169-8c07-72cff7f2c82b-config-volume" (OuterVolumeSpecName: "config-volume") pod "92dfbade-90b6-4169-8c07-72cff7f2c82b" (UID: "92dfbade-90b6-4169-8c07-72cff7f2c82b"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.720572 5123 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2325ffef-9d5b-447f-b00e-3efc429acefe-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "2325ffef-9d5b-447f-b00e-3efc429acefe" (UID: "2325ffef-9d5b-447f-b00e-3efc429acefe"). InnerVolumeSpecName "trusted-ca". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.720587 5123 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6077b63e-53a2-4f96-9d56-1ce0324e4913-kube-api-access-zth6t" (OuterVolumeSpecName: "kube-api-access-zth6t") pod "6077b63e-53a2-4f96-9d56-1ce0324e4913" (UID: "6077b63e-53a2-4f96-9d56-1ce0324e4913"). InnerVolumeSpecName "kube-api-access-zth6t". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.720785 5123 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b605f283-6f2e-42da-a838-54421690f7d0-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b605f283-6f2e-42da-a838-54421690f7d0" (UID: "b605f283-6f2e-42da-a838-54421690f7d0"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.720949 5123 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "6edfcf45-925b-4eff-b940-95b6fc0b85d4" (UID: "6edfcf45-925b-4eff-b940-95b6fc0b85d4"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.720953 5123 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4" (UID: "f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4"). InnerVolumeSpecName "metrics-certs". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.721040 5123 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6077b63e-53a2-4f96-9d56-1ce0324e4913-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "6077b63e-53a2-4f96-9d56-1ce0324e4913" (UID: "6077b63e-53a2-4f96-9d56-1ce0324e4913"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.721521 5123 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-kube-api-access-tkdh6" (OuterVolumeSpecName: "kube-api-access-tkdh6") pod "20ce4d18-fe25-4696-ad7c-1bd2d6200a3e" (UID: "20ce4d18-fe25-4696-ad7c-1bd2d6200a3e"). InnerVolumeSpecName "kube-api-access-tkdh6". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.721755 5123 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c491984c-7d4b-44aa-8c1e-d7974424fa47-config" (OuterVolumeSpecName: "config") pod "c491984c-7d4b-44aa-8c1e-d7974424fa47" (UID: "c491984c-7d4b-44aa-8c1e-d7974424fa47"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.721891 5123 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6edfcf45-925b-4eff-b940-95b6fc0b85d4-kube-api-access-8nb9c" (OuterVolumeSpecName: "kube-api-access-8nb9c") pod "6edfcf45-925b-4eff-b940-95b6fc0b85d4" (UID: "6edfcf45-925b-4eff-b940-95b6fc0b85d4"). InnerVolumeSpecName "kube-api-access-8nb9c". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.722150 5123 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d19cb085-0c5b-4810-b654-ce7923221d90-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "d19cb085-0c5b-4810-b654-ce7923221d90" (UID: "d19cb085-0c5b-4810-b654-ce7923221d90"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.722073 5123 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/5fa0afe2-e3d6-43e2-8e27-ea16e1f45dea-cni-binary-copy\") pod \"multus-additional-cni-plugins-z24lm\" (UID: \"5fa0afe2-e3d6-43e2-8e27-ea16e1f45dea\") " pod="openshift-multus/multus-additional-cni-plugins-z24lm" Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.722344 5123 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/4ba336c2-0d9e-485a-9785-761f97f2601a-run-ovn\") pod \"ovnkube-node-c7cpz\" (UID: \"4ba336c2-0d9e-485a-9785-761f97f2601a\") " pod="openshift-ovn-kubernetes/ovnkube-node-c7cpz" Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.722380 5123 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/7c008d9e-3c97-4fcc-a448-c3c34829b24a-hosts-file\") pod \"node-resolver-lvztx\" (UID: \"7c008d9e-3c97-4fcc-a448-c3c34829b24a\") " pod="openshift-dns/node-resolver-lvztx" Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.722402 5123 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/3ef15793-fa49-4c37-a355-d4573977e301-multus-socket-dir-parent\") pod 
\"multus-27rm2\" (UID: \"3ef15793-fa49-4c37-a355-d4573977e301\") " pod="openshift-multus/multus-27rm2" Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.722416 5123 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-kube-api-access-hckvg" (OuterVolumeSpecName: "kube-api-access-hckvg") pod "fc8db2c7-859d-47b3-a900-2bd0c0b2973b" (UID: "fc8db2c7-859d-47b3-a900-2bd0c0b2973b"). InnerVolumeSpecName "kube-api-access-hckvg". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.722451 5123 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/3ef15793-fa49-4c37-a355-d4573977e301-etc-kubernetes\") pod \"multus-27rm2\" (UID: \"3ef15793-fa49-4c37-a355-d4573977e301\") " pod="openshift-multus/multus-27rm2" Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.722475 5123 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whereabouts-flatfile-configmap\" (UniqueName: \"kubernetes.io/configmap/5fa0afe2-e3d6-43e2-8e27-ea16e1f45dea-whereabouts-flatfile-configmap\") pod \"multus-additional-cni-plugins-z24lm\" (UID: \"5fa0afe2-e3d6-43e2-8e27-ea16e1f45dea\") " pod="openshift-multus/multus-additional-cni-plugins-z24lm" Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.722493 5123 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/428b39f5-eb1c-4f65-b7a4-eeb6e84860cc-host-slash\") pod \"iptables-alerter-5jnd7\" (UID: \"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\") " pod="openshift-network-operator/iptables-alerter-5jnd7" Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.722513 5123 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/3ef15793-fa49-4c37-a355-d4573977e301-host-run-k8s-cni-cncf-io\") pod \"multus-27rm2\" (UID: \"3ef15793-fa49-4c37-a355-d4573977e301\") " pod="openshift-multus/multus-27rm2" Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.722531 5123 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/5fa0afe2-e3d6-43e2-8e27-ea16e1f45dea-tuning-conf-dir\") pod \"multus-additional-cni-plugins-z24lm\" (UID: \"5fa0afe2-e3d6-43e2-8e27-ea16e1f45dea\") " pod="openshift-multus/multus-additional-cni-plugins-z24lm" Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.722558 5123 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/5fa0afe2-e3d6-43e2-8e27-ea16e1f45dea-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-z24lm\" (UID: \"5fa0afe2-e3d6-43e2-8e27-ea16e1f45dea\") " pod="openshift-multus/multus-additional-cni-plugins-z24lm" Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.722574 5123 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/4ba336c2-0d9e-485a-9785-761f97f2601a-ovnkube-script-lib\") pod \"ovnkube-node-c7cpz\" (UID: \"4ba336c2-0d9e-485a-9785-761f97f2601a\") " pod="openshift-ovn-kubernetes/ovnkube-node-c7cpz" Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.722626 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-8nt2j\" (UniqueName: \"kubernetes.io/projected/fc4541ce-7789-4670-bc75-5c2868e52ce0-kube-api-access-8nt2j\") pod \"network-node-identity-dgvkt\" (UID: \"fc4541ce-7789-4670-bc75-5c2868e52ce0\") " pod="openshift-network-node-identity/network-node-identity-dgvkt" Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 
15:21:00.722645 5123 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l6nc7\" (UniqueName: \"kubernetes.io/projected/f45e8e58-84aa-4f67-b397-495c8339ce58-kube-api-access-l6nc7\") pod \"node-ca-rd8p2\" (UID: \"f45e8e58-84aa-4f67-b397-495c8339ce58\") " pod="openshift-image-registry/node-ca-rd8p2" Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.722663 5123 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/4ba336c2-0d9e-485a-9785-761f97f2601a-host-run-ovn-kubernetes\") pod \"ovnkube-node-c7cpz\" (UID: \"4ba336c2-0d9e-485a-9785-761f97f2601a\") " pod="openshift-ovn-kubernetes/ovnkube-node-c7cpz" Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.722686 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/fc4541ce-7789-4670-bc75-5c2868e52ce0-ovnkube-identity-cm\") pod \"network-node-identity-dgvkt\" (UID: \"fc4541ce-7789-4670-bc75-5c2868e52ce0\") " pod="openshift-network-node-identity/network-node-identity-dgvkt" Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.722707 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.722726 5123 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/f45e8e58-84aa-4f67-b397-495c8339ce58-serviceca\") pod \"node-ca-rd8p2\" (UID: \"f45e8e58-84aa-4f67-b397-495c8339ce58\") " 
pod="openshift-image-registry/node-ca-rd8p2" Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.722741 5123 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/3ef15793-fa49-4c37-a355-d4573977e301-os-release\") pod \"multus-27rm2\" (UID: \"3ef15793-fa49-4c37-a355-d4573977e301\") " pod="openshift-multus/multus-27rm2" Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.722770 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-l7w75\" (UniqueName: \"kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75\") pod \"network-check-source-5bb8f5cd97-xdvz5\" (UID: \"f863fff9-286a-45fa-b8f0-8a86994b8440\") " pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.722790 5123 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/5fa0afe2-e3d6-43e2-8e27-ea16e1f45dea-system-cni-dir\") pod \"multus-additional-cni-plugins-z24lm\" (UID: \"5fa0afe2-e3d6-43e2-8e27-ea16e1f45dea\") " pod="openshift-multus/multus-additional-cni-plugins-z24lm" Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.722814 5123 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/f45e8e58-84aa-4f67-b397-495c8339ce58-host\") pod \"node-ca-rd8p2\" (UID: \"f45e8e58-84aa-4f67-b397-495c8339ce58\") " pod="openshift-image-registry/node-ca-rd8p2" Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.722831 5123 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/3ef15793-fa49-4c37-a355-d4573977e301-host-var-lib-cni-bin\") pod \"multus-27rm2\" (UID: 
\"3ef15793-fa49-4c37-a355-d4573977e301\") " pod="openshift-multus/multus-27rm2" Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.722846 5123 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/4ba336c2-0d9e-485a-9785-761f97f2601a-host-cni-bin\") pod \"ovnkube-node-c7cpz\" (UID: \"4ba336c2-0d9e-485a-9785-761f97f2601a\") " pod="openshift-ovn-kubernetes/ovnkube-node-c7cpz" Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.722863 5123 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/4ba336c2-0d9e-485a-9785-761f97f2601a-ovnkube-config\") pod \"ovnkube-node-c7cpz\" (UID: \"4ba336c2-0d9e-485a-9785-761f97f2601a\") " pod="openshift-ovn-kubernetes/ovnkube-node-c7cpz" Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.722888 5123 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/2d82c231-80e9-4268-8ec7-1ae260abe06c-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-57b78d8988-kbx8c\" (UID: \"2d82c231-80e9-4268-8ec7-1ae260abe06c\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-kbx8c" Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.722908 5123 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/3ef15793-fa49-4c37-a355-d4573977e301-cnibin\") pod \"multus-27rm2\" (UID: \"3ef15793-fa49-4c37-a355-d4573977e301\") " pod="openshift-multus/multus-27rm2" Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.722921 5123 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/18f80adb-c1c3-49ba-8ee4-932c851d3897-default-certificate" (OuterVolumeSpecName: "default-certificate") pod 
"18f80adb-c1c3-49ba-8ee4-932c851d3897" (UID: "18f80adb-c1c3-49ba-8ee4-932c851d3897"). InnerVolumeSpecName "default-certificate". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.722931 5123 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/3ef15793-fa49-4c37-a355-d4573977e301-hostroot\") pod \"multus-27rm2\" (UID: \"3ef15793-fa49-4c37-a355-d4573977e301\") " pod="openshift-multus/multus-27rm2" Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.723022 5123 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/4ba336c2-0d9e-485a-9785-761f97f2601a-run-openvswitch\") pod \"ovnkube-node-c7cpz\" (UID: \"4ba336c2-0d9e-485a-9785-761f97f2601a\") " pod="openshift-ovn-kubernetes/ovnkube-node-c7cpz" Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.723179 5123 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9" (UID: "dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9"). InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.723416 5123 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "6edfcf45-925b-4eff-b940-95b6fc0b85d4" (UID: "6edfcf45-925b-4eff-b940-95b6fc0b85d4"). InnerVolumeSpecName "v4-0-config-system-service-ca". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.723434 5123 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/4ba336c2-0d9e-485a-9785-761f97f2601a-env-overrides\") pod \"ovnkube-node-c7cpz\" (UID: \"4ba336c2-0d9e-485a-9785-761f97f2601a\") " pod="openshift-ovn-kubernetes/ovnkube-node-c7cpz" Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.723467 5123 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f65c0ac1-8bca-454d-a2e6-e35cb418beac-tmp-dir" (OuterVolumeSpecName: "tmp-dir") pod "f65c0ac1-8bca-454d-a2e6-e35cb418beac" (UID: "f65c0ac1-8bca-454d-a2e6-e35cb418beac"). InnerVolumeSpecName "tmp-dir". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.723471 5123 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jrl4z\" (UniqueName: \"kubernetes.io/projected/cc1dd55e-ab31-4be3-acf1-ef01a53c1bb4-kube-api-access-jrl4z\") pod \"machine-config-daemon-cs4j6\" (UID: \"cc1dd55e-ab31-4be3-acf1-ef01a53c1bb4\") " pod="openshift-machine-config-operator/machine-config-daemon-cs4j6" Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.723502 5123 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-config" (OuterVolumeSpecName: "config") pod "d45be74c-0d98-4d18-90e4-f7ef1b6daaf7" (UID: "d45be74c-0d98-4d18-90e4-f7ef1b6daaf7"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.723611 5123 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/149b3c48-e17c-4a66-a835-d86dabf6ff13-kube-api-access-wj4qr" (OuterVolumeSpecName: "kube-api-access-wj4qr") pod "149b3c48-e17c-4a66-a835-d86dabf6ff13" (UID: "149b3c48-e17c-4a66-a835-d86dabf6ff13"). InnerVolumeSpecName "kube-api-access-wj4qr". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.723742 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-m7xz2\" (UniqueName: \"kubernetes.io/projected/34177974-8d82-49d2-a763-391d0df3bbd8-kube-api-access-m7xz2\") pod \"network-operator-7bdcf4f5bd-7fjxv\" (UID: \"34177974-8d82-49d2-a763-391d0df3bbd8\") " pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.723743 5123 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/af41de71-79cf-4590-bbe9-9e8b848862cb-kube-api-access-d7cps" (OuterVolumeSpecName: "kube-api-access-d7cps") pod "af41de71-79cf-4590-bbe9-9e8b848862cb" (UID: "af41de71-79cf-4590-bbe9-9e8b848862cb"). InnerVolumeSpecName "kube-api-access-d7cps". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.724251 5123 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a208c9c2-333b-4b4a-be0d-bc32ec38a821-package-server-manager-serving-cert" (OuterVolumeSpecName: "package-server-manager-serving-cert") pod "a208c9c2-333b-4b4a-be0d-bc32ec38a821" (UID: "a208c9c2-333b-4b4a-be0d-bc32ec38a821"). InnerVolumeSpecName "package-server-manager-serving-cert". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.724255 5123 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "6edfcf45-925b-4eff-b940-95b6fc0b85d4" (UID: "6edfcf45-925b-4eff-b940-95b6fc0b85d4"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.724482 5123 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c491984c-7d4b-44aa-8c1e-d7974424fa47-images" (OuterVolumeSpecName: "images") pod "c491984c-7d4b-44aa-8c1e-d7974424fa47" (UID: "c491984c-7d4b-44aa-8c1e-d7974424fa47"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.724507 5123 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/3ef15793-fa49-4c37-a355-d4573977e301-cni-binary-copy\") pod \"multus-27rm2\" (UID: \"3ef15793-fa49-4c37-a355-d4573977e301\") " pod="openshift-multus/multus-27rm2" Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.724565 5123 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/3ef15793-fa49-4c37-a355-d4573977e301-multus-daemon-config\") pod \"multus-27rm2\" (UID: \"3ef15793-fa49-4c37-a355-d4573977e301\") " pod="openshift-multus/multus-27rm2" Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.723955 5123 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/01080b46-74f1-4191-8755-5152a57b3b25-kube-api-access-w94wk" 
(OuterVolumeSpecName: "kube-api-access-w94wk") pod "01080b46-74f1-4191-8755-5152a57b3b25" (UID: "01080b46-74f1-4191-8755-5152a57b3b25"). InnerVolumeSpecName "kube-api-access-w94wk". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.724655 5123 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b4750666-1362-4001-abd0-6f89964cc621-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "b4750666-1362-4001-abd0-6f89964cc621" (UID: "b4750666-1362-4001-abd0-6f89964cc621"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.724912 5123 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a7a88189-c967-4640-879e-27665747f20c-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "a7a88189-c967-4640-879e-27665747f20c" (UID: "a7a88189-c967-4640-879e-27665747f20c"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.724922 5123 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e1d2a42d-af1d-4054-9618-ab545e0ed8b7-mcd-auth-proxy-config" (OuterVolumeSpecName: "mcd-auth-proxy-config") pod "e1d2a42d-af1d-4054-9618-ab545e0ed8b7" (UID: "e1d2a42d-af1d-4054-9618-ab545e0ed8b7"). InnerVolumeSpecName "mcd-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.725313 5123 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01080b46-74f1-4191-8755-5152a57b3b25-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "01080b46-74f1-4191-8755-5152a57b3b25" (UID: "01080b46-74f1-4191-8755-5152a57b3b25"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.725073 5123 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/16bdd140-dce1-464c-ab47-dd5798d1d256-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "16bdd140-dce1-464c-ab47-dd5798d1d256" (UID: "16bdd140-dce1-464c-ab47-dd5798d1d256"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.725358 5123 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d7e8f42f-dc0e-424b-bb56-5ec849834888-service-ca" (OuterVolumeSpecName: "service-ca") pod "d7e8f42f-dc0e-424b-bb56-5ec849834888" (UID: "d7e8f42f-dc0e-424b-bb56-5ec849834888"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.725474 5123 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e1d2a42d-af1d-4054-9618-ab545e0ed8b7-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "e1d2a42d-af1d-4054-9618-ab545e0ed8b7" (UID: "e1d2a42d-af1d-4054-9618-ab545e0ed8b7"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.725566 5123 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-audit" (OuterVolumeSpecName: "audit") pod "d19cb085-0c5b-4810-b654-ce7923221d90" (UID: "d19cb085-0c5b-4810-b654-ce7923221d90"). InnerVolumeSpecName "audit". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.725630 5123 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/5fa0afe2-e3d6-43e2-8e27-ea16e1f45dea-cnibin\") pod \"multus-additional-cni-plugins-z24lm\" (UID: \"5fa0afe2-e3d6-43e2-8e27-ea16e1f45dea\") " pod="openshift-multus/multus-additional-cni-plugins-z24lm" Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.725781 5123 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dfltr\" (UniqueName: \"kubernetes.io/projected/4ba336c2-0d9e-485a-9785-761f97f2601a-kube-api-access-dfltr\") pod \"ovnkube-node-c7cpz\" (UID: \"4ba336c2-0d9e-485a-9785-761f97f2601a\") " pod="openshift-ovn-kubernetes/ovnkube-node-c7cpz" Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.725880 5123 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/7c008d9e-3c97-4fcc-a448-c3c34829b24a-tmp-dir\") pod \"node-resolver-lvztx\" (UID: \"7c008d9e-3c97-4fcc-a448-c3c34829b24a\") " pod="openshift-dns/node-resolver-lvztx" Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.725905 5123 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/2d82c231-80e9-4268-8ec7-1ae260abe06c-ovnkube-config\") pod \"ovnkube-control-plane-57b78d8988-kbx8c\" (UID: \"2d82c231-80e9-4268-8ec7-1ae260abe06c\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-kbx8c" Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.725929 5123 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: 
\"kubernetes.io/host-path/3ef15793-fa49-4c37-a355-d4573977e301-host-var-lib-cni-multus\") pod \"multus-27rm2\" (UID: \"3ef15793-fa49-4c37-a355-d4573977e301\") " pod="openshift-multus/multus-27rm2" Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.725948 5123 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/5fa0afe2-e3d6-43e2-8e27-ea16e1f45dea-os-release\") pod \"multus-additional-cni-plugins-z24lm\" (UID: \"5fa0afe2-e3d6-43e2-8e27-ea16e1f45dea\") " pod="openshift-multus/multus-additional-cni-plugins-z24lm" Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.725944 5123 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b605f283-6f2e-42da-a838-54421690f7d0-utilities" (OuterVolumeSpecName: "utilities") pod "b605f283-6f2e-42da-a838-54421690f7d0" (UID: "b605f283-6f2e-42da-a838-54421690f7d0"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.725965 5123 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lsclv\" (UniqueName: \"kubernetes.io/projected/5fa0afe2-e3d6-43e2-8e27-ea16e1f45dea-kube-api-access-lsclv\") pod \"multus-additional-cni-plugins-z24lm\" (UID: \"5fa0afe2-e3d6-43e2-8e27-ea16e1f45dea\") " pod="openshift-multus/multus-additional-cni-plugins-z24lm" Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.725984 5123 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/4ba336c2-0d9e-485a-9785-761f97f2601a-host-kubelet\") pod \"ovnkube-node-c7cpz\" (UID: \"4ba336c2-0d9e-485a-9785-761f97f2601a\") " pod="openshift-ovn-kubernetes/ovnkube-node-c7cpz" Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.726000 5123 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/4ba336c2-0d9e-485a-9785-761f97f2601a-run-systemd\") pod \"ovnkube-node-c7cpz\" (UID: \"4ba336c2-0d9e-485a-9785-761f97f2601a\") " pod="openshift-ovn-kubernetes/ovnkube-node-c7cpz" Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.726017 5123 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/34177974-8d82-49d2-a763-391d0df3bbd8-host-etc-kube\") pod \"network-operator-7bdcf4f5bd-7fjxv\" (UID: \"34177974-8d82-49d2-a763-391d0df3bbd8\") " pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.726039 5123 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n476s\" (UniqueName: \"kubernetes.io/projected/2d82c231-80e9-4268-8ec7-1ae260abe06c-kube-api-access-n476s\") pod \"ovnkube-control-plane-57b78d8988-kbx8c\" (UID: \"2d82c231-80e9-4268-8ec7-1ae260abe06c\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-kbx8c" Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.726056 5123 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/4ba336c2-0d9e-485a-9785-761f97f2601a-host-slash\") pod \"ovnkube-node-c7cpz\" (UID: \"4ba336c2-0d9e-485a-9785-761f97f2601a\") " pod="openshift-ovn-kubernetes/ovnkube-node-c7cpz" Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.726073 5123 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/4ba336c2-0d9e-485a-9785-761f97f2601a-var-lib-openvswitch\") pod \"ovnkube-node-c7cpz\" (UID: \"4ba336c2-0d9e-485a-9785-761f97f2601a\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-c7cpz" Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.726089 5123 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/4ba336c2-0d9e-485a-9785-761f97f2601a-etc-openvswitch\") pod \"ovnkube-node-c7cpz\" (UID: \"4ba336c2-0d9e-485a-9785-761f97f2601a\") " pod="openshift-ovn-kubernetes/ovnkube-node-c7cpz" Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.726106 5123 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/4ba336c2-0d9e-485a-9785-761f97f2601a-ovn-node-metrics-cert\") pod \"ovnkube-node-c7cpz\" (UID: \"4ba336c2-0d9e-485a-9785-761f97f2601a\") " pod="openshift-ovn-kubernetes/ovnkube-node-c7cpz" Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.726128 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/428b39f5-eb1c-4f65-b7a4-eeb6e84860cc-iptables-alerter-script\") pod \"iptables-alerter-5jnd7\" (UID: \"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\") " pod="openshift-network-operator/iptables-alerter-5jnd7" Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.726146 5123 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/cc1dd55e-ab31-4be3-acf1-ef01a53c1bb4-proxy-tls\") pod \"machine-config-daemon-cs4j6\" (UID: \"cc1dd55e-ab31-4be3-acf1-ef01a53c1bb4\") " pod="openshift-machine-config-operator/machine-config-daemon-cs4j6" Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.726196 5123 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/3ef15793-fa49-4c37-a355-d4573977e301-multus-cni-dir\") pod \"multus-27rm2\" 
(UID: \"3ef15793-fa49-4c37-a355-d4573977e301\") " pod="openshift-multus/multus-27rm2" Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.726204 5123 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/301e1965-1754-483d-b6cc-bfae7038bbca-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "301e1965-1754-483d-b6cc-bfae7038bbca" (UID: "301e1965-1754-483d-b6cc-bfae7038bbca"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.726236 5123 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/3ef15793-fa49-4c37-a355-d4573977e301-multus-conf-dir\") pod \"multus-27rm2\" (UID: \"3ef15793-fa49-4c37-a355-d4573977e301\") " pod="openshift-multus/multus-27rm2" Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.726265 5123 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/4ba336c2-0d9e-485a-9785-761f97f2601a-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-c7cpz\" (UID: \"4ba336c2-0d9e-485a-9785-761f97f2601a\") " pod="openshift-ovn-kubernetes/ovnkube-node-c7cpz" Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.726296 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-dsgwk\" (UniqueName: \"kubernetes.io/projected/428b39f5-eb1c-4f65-b7a4-eeb6e84860cc-kube-api-access-dsgwk\") pod \"iptables-alerter-5jnd7\" (UID: \"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\") " pod="openshift-network-operator/iptables-alerter-5jnd7" Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.726319 5123 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rootfs\" (UniqueName: 
\"kubernetes.io/host-path/cc1dd55e-ab31-4be3-acf1-ef01a53c1bb4-rootfs\") pod \"machine-config-daemon-cs4j6\" (UID: \"cc1dd55e-ab31-4be3-acf1-ef01a53c1bb4\") " pod="openshift-machine-config-operator/machine-config-daemon-cs4j6" Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.726474 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/34177974-8d82-49d2-a763-391d0df3bbd8-metrics-tls\") pod \"network-operator-7bdcf4f5bd-7fjxv\" (UID: \"34177974-8d82-49d2-a763-391d0df3bbd8\") " pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.726537 5123 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/4ba336c2-0d9e-485a-9785-761f97f2601a-log-socket\") pod \"ovnkube-node-c7cpz\" (UID: \"4ba336c2-0d9e-485a-9785-761f97f2601a\") " pod="openshift-ovn-kubernetes/ovnkube-node-c7cpz" Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.726579 5123 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/3ef15793-fa49-4c37-a355-d4573977e301-host-run-netns\") pod \"multus-27rm2\" (UID: \"3ef15793-fa49-4c37-a355-d4573977e301\") " pod="openshift-multus/multus-27rm2" Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.726603 5123 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/3ef15793-fa49-4c37-a355-d4573977e301-host-run-multus-certs\") pod \"multus-27rm2\" (UID: \"3ef15793-fa49-4c37-a355-d4573977e301\") " pod="openshift-multus/multus-27rm2" Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.726622 5123 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" 
(UniqueName: \"kubernetes.io/host-path/4ba336c2-0d9e-485a-9785-761f97f2601a-host-cni-netd\") pod \"ovnkube-node-c7cpz\" (UID: \"4ba336c2-0d9e-485a-9785-761f97f2601a\") " pod="openshift-ovn-kubernetes/ovnkube-node-c7cpz" Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.726639 5123 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/e6c3a697-51e4-44dd-a38c-3287db85ce50-metrics-certs\") pod \"network-metrics-daemon-hmprz\" (UID: \"e6c3a697-51e4-44dd-a38c-3287db85ce50\") " pod="openshift-multus/network-metrics-daemon-hmprz" Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.726672 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/fc4541ce-7789-4670-bc75-5c2868e52ce0-webhook-cert\") pod \"network-node-identity-dgvkt\" (UID: \"fc4541ce-7789-4670-bc75-5c2868e52ce0\") " pod="openshift-network-node-identity/network-node-identity-dgvkt" Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.726691 5123 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/3ef15793-fa49-4c37-a355-d4573977e301-host-var-lib-kubelet\") pod \"multus-27rm2\" (UID: \"3ef15793-fa49-4c37-a355-d4573977e301\") " pod="openshift-multus/multus-27rm2" Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.726709 5123 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/4ba336c2-0d9e-485a-9785-761f97f2601a-systemd-units\") pod \"ovnkube-node-c7cpz\" (UID: \"4ba336c2-0d9e-485a-9785-761f97f2601a\") " pod="openshift-ovn-kubernetes/ovnkube-node-c7cpz" Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.726725 5123 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"node-log\" (UniqueName: \"kubernetes.io/host-path/4ba336c2-0d9e-485a-9785-761f97f2601a-node-log\") pod \"ovnkube-node-c7cpz\" (UID: \"4ba336c2-0d9e-485a-9785-761f97f2601a\") " pod="openshift-ovn-kubernetes/ovnkube-node-c7cpz" Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.726742 5123 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/cc1dd55e-ab31-4be3-acf1-ef01a53c1bb4-mcd-auth-proxy-config\") pod \"machine-config-daemon-cs4j6\" (UID: \"cc1dd55e-ab31-4be3-acf1-ef01a53c1bb4\") " pod="openshift-machine-config-operator/machine-config-daemon-cs4j6" Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.726775 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.726799 5123 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/3ef15793-fa49-4c37-a355-d4573977e301-system-cni-dir\") pod \"multus-27rm2\" (UID: \"3ef15793-fa49-4c37-a355-d4573977e301\") " pod="openshift-multus/multus-27rm2" Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.726817 5123 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2bb4w\" (UniqueName: \"kubernetes.io/projected/3ef15793-fa49-4c37-a355-d4573977e301-kube-api-access-2bb4w\") pod \"multus-27rm2\" (UID: \"3ef15793-fa49-4c37-a355-d4573977e301\") " pod="openshift-multus/multus-27rm2" Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.726839 5123 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/4ba336c2-0d9e-485a-9785-761f97f2601a-host-run-netns\") pod \"ovnkube-node-c7cpz\" (UID: \"4ba336c2-0d9e-485a-9785-761f97f2601a\") " pod="openshift-ovn-kubernetes/ovnkube-node-c7cpz" Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.726856 5123 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fp998\" (UniqueName: \"kubernetes.io/projected/e6c3a697-51e4-44dd-a38c-3287db85ce50-kube-api-access-fp998\") pod \"network-metrics-daemon-hmprz\" (UID: \"e6c3a697-51e4-44dd-a38c-3287db85ce50\") " pod="openshift-multus/network-metrics-daemon-hmprz" Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.726830 5123 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b4750666-1362-4001-abd0-6f89964cc621-kube-api-access-twvbl" (OuterVolumeSpecName: "kube-api-access-twvbl") pod "b4750666-1362-4001-abd0-6f89964cc621" (UID: "b4750666-1362-4001-abd0-6f89964cc621"). InnerVolumeSpecName "kube-api-access-twvbl". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.726878 5123 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gffd7\" (UniqueName: \"kubernetes.io/projected/7c008d9e-3c97-4fcc-a448-c3c34829b24a-kube-api-access-gffd7\") pod \"node-resolver-lvztx\" (UID: \"7c008d9e-3c97-4fcc-a448-c3c34829b24a\") " pod="openshift-dns/node-resolver-lvztx" Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.726907 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-gwt8b\" (UniqueName: \"kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b\") pod \"network-check-target-fhkjl\" (UID: \"17b87002-b798-480a-8e17-83053d698239\") " pod="openshift-network-diagnostics/network-check-target-fhkjl" Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.726937 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/fc4541ce-7789-4670-bc75-5c2868e52ce0-env-overrides\") pod \"network-node-identity-dgvkt\" (UID: \"fc4541ce-7789-4670-bc75-5c2868e52ce0\") " pod="openshift-network-node-identity/network-node-identity-dgvkt" Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.726959 5123 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/2d82c231-80e9-4268-8ec7-1ae260abe06c-env-overrides\") pod \"ovnkube-control-plane-57b78d8988-kbx8c\" (UID: \"2d82c231-80e9-4268-8ec7-1ae260abe06c\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-kbx8c" Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.727027 5123 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-marketplace-trusted-ca" (OuterVolumeSpecName: 
"marketplace-trusted-ca") pod "b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a" (UID: "b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.727039 5123 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-kube-api-access-dztfv" (OuterVolumeSpecName: "kube-api-access-dztfv") pod "d45be74c-0d98-4d18-90e4-f7ef1b6daaf7" (UID: "d45be74c-0d98-4d18-90e4-f7ef1b6daaf7"). InnerVolumeSpecName "kube-api-access-dztfv". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.727525 5123 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a208c9c2-333b-4b4a-be0d-bc32ec38a821-kube-api-access-26xrl" (OuterVolumeSpecName: "kube-api-access-26xrl") pod "a208c9c2-333b-4b4a-be0d-bc32ec38a821" (UID: "a208c9c2-333b-4b4a-be0d-bc32ec38a821"). InnerVolumeSpecName "kube-api-access-26xrl". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.727886 5123 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f559dfa3-3917-43a2-97f6-61ddfda10e93-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "f559dfa3-3917-43a2-97f6-61ddfda10e93" (UID: "f559dfa3-3917-43a2-97f6-61ddfda10e93"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.728015 5123 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f7e2c886-118e-43bb-bef1-c78134de392b-tmp-dir" (OuterVolumeSpecName: "tmp-dir") pod "f7e2c886-118e-43bb-bef1-c78134de392b" (UID: "f7e2c886-118e-43bb-bef1-c78134de392b"). InnerVolumeSpecName "tmp-dir". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.728102 5123 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-config" (OuterVolumeSpecName: "config") pod "dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9" (UID: "dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.728108 5123 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/af33e427-6803-48c2-a76a-dd9deb7cbf9a-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "af33e427-6803-48c2-a76a-dd9deb7cbf9a" (UID: "af33e427-6803-48c2-a76a-dd9deb7cbf9a"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 12 15:21:00 crc kubenswrapper[5123]: E1212 15:21:00.728353 5123 secret.go:189] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.729028 5123 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/584e1f4a-8205-47d7-8efb-3afc6017c4c9-kube-api-access-tknt7" (OuterVolumeSpecName: "kube-api-access-tknt7") pod "584e1f4a-8205-47d7-8efb-3afc6017c4c9" (UID: "584e1f4a-8205-47d7-8efb-3afc6017c4c9"). InnerVolumeSpecName "kube-api-access-tknt7". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.729137 5123 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/567683bd-0efc-4f21-b076-e28559628404-config" (OuterVolumeSpecName: "config") pod "567683bd-0efc-4f21-b076-e28559628404" (UID: "567683bd-0efc-4f21-b076-e28559628404"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.729178 5123 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9f71a554-e414-4bc3-96d2-674060397afe-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "9f71a554-e414-4bc3-96d2-674060397afe" (UID: "9f71a554-e414-4bc3-96d2-674060397afe"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.729334 5123 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/593a3561-7760-45c5-8f91-5aaef7475d0f-kube-api-access-sbc2l" (OuterVolumeSpecName: "kube-api-access-sbc2l") pod "593a3561-7760-45c5-8f91-5aaef7475d0f" (UID: "593a3561-7760-45c5-8f91-5aaef7475d0f"). InnerVolumeSpecName "kube-api-access-sbc2l". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.729590 5123 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d565531a-ff86-4608-9d19-767de01ac31b-kube-api-access-99zj9" (OuterVolumeSpecName: "kube-api-access-99zj9") pod "d565531a-ff86-4608-9d19-767de01ac31b" (UID: "d565531a-ff86-4608-9d19-767de01ac31b"). InnerVolumeSpecName "kube-api-access-99zj9". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.729607 5123 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff-kube-api-access-xxfcv" (OuterVolumeSpecName: "kube-api-access-xxfcv") pod "9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff" (UID: "9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff"). InnerVolumeSpecName "kube-api-access-xxfcv". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.729649 5123 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9" (UID: "dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.729903 5123 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-tmpfs" (OuterVolumeSpecName: "tmpfs") pod "7fcc6409-8a0f-44c3-89e7-5aecd7610f8a" (UID: "7fcc6409-8a0f-44c3-89e7-5aecd7610f8a"). InnerVolumeSpecName "tmpfs". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.729955 5123 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7599e0b6-bddf-4def-b7f2-0b32206e8651-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7599e0b6-bddf-4def-b7f2-0b32206e8651" (UID: "7599e0b6-bddf-4def-b7f2-0b32206e8651"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 12 15:21:00 crc kubenswrapper[5123]: E1212 15:21:00.729999 5123 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert podName:6a9ae5f6-97bd-46ac-bafa-ca1b4452a141 nodeName:}" failed. No retries permitted until 2025-12-12 15:21:01.229965531 +0000 UTC m=+90.039918052 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert") pod "networking-console-plugin-5ff7774fd9-nljh6" (UID: "6a9ae5f6-97bd-46ac-bafa-ca1b4452a141") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.730016 5123 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/16bdd140-dce1-464c-ab47-dd5798d1d256-kube-api-access-94l9h" (OuterVolumeSpecName: "kube-api-access-94l9h") pod "16bdd140-dce1-464c-ab47-dd5798d1d256" (UID: "16bdd140-dce1-464c-ab47-dd5798d1d256"). InnerVolumeSpecName "kube-api-access-94l9h". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.730370 5123 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-config" (OuterVolumeSpecName: "config") pod "d19cb085-0c5b-4810-b654-ce7923221d90" (UID: "d19cb085-0c5b-4810-b654-ce7923221d90"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.730416 5123 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "fc8db2c7-859d-47b3-a900-2bd0c0b2973b" (UID: "fc8db2c7-859d-47b3-a900-2bd0c0b2973b"). InnerVolumeSpecName "auth-proxy-config". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.730520 5123 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f65c0ac1-8bca-454d-a2e6-e35cb418beac-config" (OuterVolumeSpecName: "config") pod "f65c0ac1-8bca-454d-a2e6-e35cb418beac" (UID: "f65c0ac1-8bca-454d-a2e6-e35cb418beac"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.730531 5123 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/fc4541ce-7789-4670-bc75-5c2868e52ce0-ovnkube-identity-cm\") pod \"network-node-identity-dgvkt\" (UID: \"fc4541ce-7789-4670-bc75-5c2868e52ce0\") " pod="openshift-network-node-identity/network-node-identity-dgvkt" Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.730724 5123 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09cfa50b-4138-4585-a53e-64dd3ab73335-config" (OuterVolumeSpecName: "config") pod "09cfa50b-4138-4585-a53e-64dd3ab73335" (UID: "09cfa50b-4138-4585-a53e-64dd3ab73335"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.730723 5123 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-service-ca" (OuterVolumeSpecName: "service-ca") pod "6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca" (UID: "6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca"). InnerVolumeSpecName "service-ca". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.730672 5123 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-console-config" (OuterVolumeSpecName: "console-config") pod "6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca" (UID: "6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.730876 5123 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d19cb085-0c5b-4810-b654-ce7923221d90-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "d19cb085-0c5b-4810-b654-ce7923221d90" (UID: "d19cb085-0c5b-4810-b654-ce7923221d90"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.730913 5123 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca" (UID: "6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.731704 5123 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c491984c-7d4b-44aa-8c1e-d7974424fa47-kube-api-access-9vsz9" (OuterVolumeSpecName: "kube-api-access-9vsz9") pod "c491984c-7d4b-44aa-8c1e-d7974424fa47" (UID: "c491984c-7d4b-44aa-8c1e-d7974424fa47"). InnerVolumeSpecName "kube-api-access-9vsz9". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.731739 5123 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/301e1965-1754-483d-b6cc-bfae7038bbca-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "301e1965-1754-483d-b6cc-bfae7038bbca" (UID: "301e1965-1754-483d-b6cc-bfae7038bbca"). InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.731761 5123 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c5f2bfad-70f6-4185-a3d9-81ce12720767-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "c5f2bfad-70f6-4185-a3d9-81ce12720767" (UID: "c5f2bfad-70f6-4185-a3d9-81ce12720767"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.731771 5123 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-tmp" (OuterVolumeSpecName: "tmp") pod "20ce4d18-fe25-4696-ad7c-1bd2d6200a3e" (UID: "20ce4d18-fe25-4696-ad7c-1bd2d6200a3e"). InnerVolumeSpecName "tmp". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.731875 5123 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5ebfebf6-3ecd-458e-943f-bb25b52e2718-serviceca" (OuterVolumeSpecName: "serviceca") pod "5ebfebf6-3ecd-458e-943f-bb25b52e2718" (UID: "5ebfebf6-3ecd-458e-943f-bb25b52e2718"). InnerVolumeSpecName "serviceca". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.731944 5123 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a52afe44-fb37-46ed-a1f8-bf39727a3cbe-kube-api-access-rzt4w" (OuterVolumeSpecName: "kube-api-access-rzt4w") pod "a52afe44-fb37-46ed-a1f8-bf39727a3cbe" (UID: "a52afe44-fb37-46ed-a1f8-bf39727a3cbe"). InnerVolumeSpecName "kube-api-access-rzt4w". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.731973 5123 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/94a6e063-3d1a-4d44-875d-185291448c31-utilities" (OuterVolumeSpecName: "utilities") pod "94a6e063-3d1a-4d44-875d-185291448c31" (UID: "94a6e063-3d1a-4d44-875d-185291448c31"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.732071 5123 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34177974-8d82-49d2-a763-391d0df3bbd8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T15:21:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T15:21:00Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7xz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-7bdcf4f5bd-7fjxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.732337 5123 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/31fa8943-81cc-4750-a0b7-0fa9ab5af883-kube-api-access-grwfz" (OuterVolumeSpecName: "kube-api-access-grwfz") pod "31fa8943-81cc-4750-a0b7-0fa9ab5af883" (UID: "31fa8943-81cc-4750-a0b7-0fa9ab5af883"). InnerVolumeSpecName "kube-api-access-grwfz". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 12 15:21:00 crc kubenswrapper[5123]: E1212 15:21:00.732340 5123 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.732399 5123 reconciler_common.go:299] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9f71a554-e414-4bc3-96d2-674060397afe-trusted-ca\") on node \"crc\" DevicePath \"\"" Dec 12 15:21:00 crc kubenswrapper[5123]: E1212 15:21:00.732643 5123 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf podName:6a9ae5f6-97bd-46ac-bafa-ca1b4452a141 nodeName:}" failed. No retries permitted until 2025-12-12 15:21:01.232618464 +0000 UTC m=+90.042570975 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf") pod "networking-console-plugin-5ff7774fd9-nljh6" (UID: "6a9ae5f6-97bd-46ac-bafa-ca1b4452a141") : object "openshift-network-console"/"networking-console-plugin" not registered Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.732925 5123 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/869851b9-7ffb-4af0-b166-1d8aa40a5f80-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "869851b9-7ffb-4af0-b166-1d8aa40a5f80" (UID: "869851b9-7ffb-4af0-b166-1d8aa40a5f80"). InnerVolumeSpecName "cni-binary-copy". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.733427 5123 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f559dfa3-3917-43a2-97f6-61ddfda10e93-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.734025 5123 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.734059 5123 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.733561 5123 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d7e8f42f-dc0e-424b-bb56-5ec849834888-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "d7e8f42f-dc0e-424b-bb56-5ec849834888" (UID: "d7e8f42f-dc0e-424b-bb56-5ec849834888"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.733824 5123 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a-kube-api-access-ks6v2" (OuterVolumeSpecName: "kube-api-access-ks6v2") pod "71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a" (UID: "71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a"). InnerVolumeSpecName "kube-api-access-ks6v2". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.733603 5123 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/fc4541ce-7789-4670-bc75-5c2868e52ce0-env-overrides\") pod \"network-node-identity-dgvkt\" (UID: \"fc4541ce-7789-4670-bc75-5c2868e52ce0\") " pod="openshift-network-node-identity/network-node-identity-dgvkt" Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.734138 5123 reconciler_common.go:299] "Volume detached for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/6ee8fbd3-1f81-4666-96da-5afc70819f1a-samples-operator-tls\") on node \"crc\" DevicePath \"\"" Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.734178 5123 reconciler_common.go:299] "Volume detached for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/6077b63e-53a2-4f96-9d56-1ce0324e4913-tmp-dir\") on node \"crc\" DevicePath \"\"" Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.734196 5123 reconciler_common.go:299] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/9e9b5059-1b3e-4067-a63d-2952cbe863af-registry-certificates\") on node \"crc\" DevicePath \"\"" Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.734232 5123 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cc85e424-18b2-4924-920b-bd291a8c4b01-utilities\") on node \"crc\" DevicePath \"\"" Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.734250 5123 reconciler_common.go:299] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/9e9b5059-1b3e-4067-a63d-2952cbe863af-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.734264 5123 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-4g8ts\" (UniqueName: 
\"kubernetes.io/projected/92dfbade-90b6-4169-8c07-72cff7f2c82b-kube-api-access-4g8ts\") on node \"crc\" DevicePath \"\"" Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.734278 5123 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-6g4lr\" (UniqueName: \"kubernetes.io/projected/f7e2c886-118e-43bb-bef1-c78134de392b-kube-api-access-6g4lr\") on node \"crc\" DevicePath \"\"" Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.734292 5123 reconciler_common.go:299] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/567683bd-0efc-4f21-b076-e28559628404-etcd-client\") on node \"crc\" DevicePath \"\"" Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.734304 5123 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-m26jq\" (UniqueName: \"kubernetes.io/projected/567683bd-0efc-4f21-b076-e28559628404-kube-api-access-m26jq\") on node \"crc\" DevicePath \"\"" Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.734318 5123 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.734332 5123 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-8pskd\" (UniqueName: \"kubernetes.io/projected/a555ff2e-0be6-46d5-897d-863bb92ae2b3-kube-api-access-8pskd\") on node \"crc\" DevicePath \"\"" Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.734350 5123 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-q4smf\" (UniqueName: \"kubernetes.io/projected/0dd0fbac-8c0d-4228-8faa-abbeedabf7db-kube-api-access-q4smf\") on node \"crc\" DevicePath \"\"" Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.734363 5123 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: 
\"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.734378 5123 reconciler_common.go:299] "Volume detached for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/869851b9-7ffb-4af0-b166-1d8aa40a5f80-cni-sysctl-allowlist\") on node \"crc\" DevicePath \"\"" Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.734394 5123 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-pgx6b\" (UniqueName: \"kubernetes.io/projected/f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4-kube-api-access-pgx6b\") on node \"crc\" DevicePath \"\"" Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.734405 5123 reconciler_common.go:299] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.734403 5123 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/428b39f5-eb1c-4f65-b7a4-eeb6e84860cc-iptables-alerter-script\") pod \"iptables-alerter-5jnd7\" (UID: \"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\") " pod="openshift-network-operator/iptables-alerter-5jnd7" Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.734417 5123 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a555ff2e-0be6-46d5-897d-863bb92ae2b3-config\") on node \"crc\" DevicePath \"\"" Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.734499 5123 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-6dmhf\" (UniqueName: \"kubernetes.io/projected/736c54fe-349c-4bb9-870a-d1c1d1c03831-kube-api-access-6dmhf\") on node \"crc\" DevicePath \"\"" Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.734515 5123 reconciler_common.go:299] "Volume 
detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f559dfa3-3917-43a2-97f6-61ddfda10e93-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.734529 5123 reconciler_common.go:299] "Volume detached for volume \"cert\" (UniqueName: \"kubernetes.io/secret/a52afe44-fb37-46ed-a1f8-bf39727a3cbe-cert\") on node \"crc\" DevicePath \"\"" Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.734544 5123 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-7jjkz\" (UniqueName: \"kubernetes.io/projected/301e1965-1754-483d-b6cc-bfae7038bbca-kube-api-access-7jjkz\") on node \"crc\" DevicePath \"\"" Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.734560 5123 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-z5rsr\" (UniqueName: \"kubernetes.io/projected/af33e427-6803-48c2-a76a-dd9deb7cbf9a-kube-api-access-z5rsr\") on node \"crc\" DevicePath \"\"" Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.734577 5123 reconciler_common.go:299] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/af33e427-6803-48c2-a76a-dd9deb7cbf9a-env-overrides\") on node \"crc\" DevicePath \"\"" Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.734591 5123 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-mjwtd\" (UniqueName: \"kubernetes.io/projected/869851b9-7ffb-4af0-b166-1d8aa40a5f80-kube-api-access-mjwtd\") on node \"crc\" DevicePath \"\"" Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.734604 5123 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2325ffef-9d5b-447f-b00e-3efc429acefe-config\") on node \"crc\" DevicePath \"\"" Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.734619 5123 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7afa918d-be67-40a6-803c-d3b0ae99d815-config\") 
on node \"crc\" DevicePath \"\"" Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.734633 5123 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d19cb085-0c5b-4810-b654-ce7923221d90-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.734647 5123 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.734663 5123 reconciler_common.go:299] "Volume detached for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-image-registry-operator-tls\") on node \"crc\" DevicePath \"\"" Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.734681 5123 reconciler_common.go:299] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.734694 5123 reconciler_common.go:299] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/d565531a-ff86-4608-9d19-767de01ac31b-images\") on node \"crc\" DevicePath \"\"" Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.734708 5123 reconciler_common.go:299] "Volume detached for volume \"whereabouts-flatfile-configmap\" (UniqueName: \"kubernetes.io/configmap/869851b9-7ffb-4af0-b166-1d8aa40a5f80-whereabouts-flatfile-configmap\") on node \"crc\" DevicePath \"\"" Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.734723 5123 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath 
\"\"" Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.734738 5123 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-config\") on node \"crc\" DevicePath \"\"" Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.734754 5123 reconciler_common.go:299] "Volume detached for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/7df94c10-441d-4386-93a6-6730fb7bcde0-ovn-control-plane-metrics-cert\") on node \"crc\" DevicePath \"\"" Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.734768 5123 reconciler_common.go:299] "Volume detached for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/f65c0ac1-8bca-454d-a2e6-e35cb418beac-tmp-dir\") on node \"crc\" DevicePath \"\"" Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.734782 5123 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-qqbfk\" (UniqueName: \"kubernetes.io/projected/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-kube-api-access-qqbfk\") on node \"crc\" DevicePath \"\"" Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.734796 5123 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-ddlk9\" (UniqueName: \"kubernetes.io/projected/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-kube-api-access-ddlk9\") on node \"crc\" DevicePath \"\"" Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.734812 5123 reconciler_common.go:299] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c5f2bfad-70f6-4185-a3d9-81ce12720767-kube-api-access\") on node \"crc\" DevicePath \"\"" Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.734873 5123 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/736c54fe-349c-4bb9-870a-d1c1d1c03831-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.734887 5123 reconciler_common.go:299] 
"Volume detached for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/ce090a97-9ab6-4c40-a719-64ff2acd9778-signing-key\") on node \"crc\" DevicePath \"\"" Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.734900 5123 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-9z4sw\" (UniqueName: \"kubernetes.io/projected/e1d2a42d-af1d-4054-9618-ab545e0ed8b7-kube-api-access-9z4sw\") on node \"crc\" DevicePath \"\"" Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.734912 5123 reconciler_common.go:299] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/af33e427-6803-48c2-a76a-dd9deb7cbf9a-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.734925 5123 reconciler_common.go:299] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-trusted-ca\") on node \"crc\" DevicePath \"\"" Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.734952 5123 reconciler_common.go:299] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/736c54fe-349c-4bb9-870a-d1c1d1c03831-client-ca\") on node \"crc\" DevicePath \"\"" Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.734968 5123 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.734987 5123 reconciler_common.go:299] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a555ff2e-0be6-46d5-897d-863bb92ae2b3-client-ca\") on node \"crc\" DevicePath \"\"" Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.735006 5123 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-hm9x7\" (UniqueName: 
\"kubernetes.io/projected/f559dfa3-3917-43a2-97f6-61ddfda10e93-kube-api-access-hm9x7\") on node \"crc\" DevicePath \"\"" Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.735025 5123 reconciler_common.go:299] "Volume detached for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/ce090a97-9ab6-4c40-a719-64ff2acd9778-signing-cabundle\") on node \"crc\" DevicePath \"\"" Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.735046 5123 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-pllx6\" (UniqueName: \"kubernetes.io/projected/81e39f7b-62e4-4fc9-992a-6535ce127a02-kube-api-access-pllx6\") on node \"crc\" DevicePath \"\"" Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.735065 5123 reconciler_common.go:299] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/f559dfa3-3917-43a2-97f6-61ddfda10e93-etcd-client\") on node \"crc\" DevicePath \"\"" Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.735078 5123 reconciler_common.go:299] "Volume detached for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-tmp\") on node \"crc\" DevicePath \"\"" Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.735092 5123 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01080b46-74f1-4191-8755-5152a57b3b25-config\") on node \"crc\" DevicePath \"\"" Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.735105 5123 reconciler_common.go:299] "Volume detached for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/a7a88189-c967-4640-879e-27665747f20c-tmpfs\") on node \"crc\" DevicePath \"\"" Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.735120 5123 reconciler_common.go:299] "Volume detached for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/16bdd140-dce1-464c-ab47-dd5798d1d256-available-featuregates\") on node \"crc\" DevicePath \"\"" Dec 12 15:21:00 crc 
kubenswrapper[5123]: I1212 15:21:00.735140 5123 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-ws8zz\" (UniqueName: \"kubernetes.io/projected/9e9b5059-1b3e-4067-a63d-2952cbe863af-kube-api-access-ws8zz\") on node \"crc\" DevicePath \"\"" Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.735160 5123 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-ptkcf\" (UniqueName: \"kubernetes.io/projected/7599e0b6-bddf-4def-b7f2-0b32206e8651-kube-api-access-ptkcf\") on node \"crc\" DevicePath \"\"" Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.735173 5123 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-4hb7m\" (UniqueName: \"kubernetes.io/projected/94a6e063-3d1a-4d44-875d-185291448c31-kube-api-access-4hb7m\") on node \"crc\" DevicePath \"\"" Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.735187 5123 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-mfzkj\" (UniqueName: \"kubernetes.io/projected/0effdbcf-dd7d-404d-9d48-77536d665a5d-kube-api-access-mfzkj\") on node \"crc\" DevicePath \"\"" Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.735209 5123 reconciler_common.go:299] "Volume detached for volume \"ca-trust-extracted-pem\" (UniqueName: \"kubernetes.io/empty-dir/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-ca-trust-extracted-pem\") on node \"crc\" DevicePath \"\"" Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.735244 5123 reconciler_common.go:299] "Volume detached for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-tmp\") on node \"crc\" DevicePath \"\"" Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.735259 5123 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.735279 5123 reconciler_common.go:299] 
"Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/92dfbade-90b6-4169-8c07-72cff7f2c82b-metrics-tls\") on node \"crc\" DevicePath \"\"" Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.735291 5123 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a-utilities\") on node \"crc\" DevicePath \"\"" Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.735305 5123 reconciler_common.go:299] "Volume detached for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/0dd0fbac-8c0d-4228-8faa-abbeedabf7db-webhook-certs\") on node \"crc\" DevicePath \"\"" Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.735318 5123 reconciler_common.go:299] "Volume detached for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/81e39f7b-62e4-4fc9-992a-6535ce127a02-multus-daemon-config\") on node \"crc\" DevicePath \"\"" Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.735333 5123 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-l87hs\" (UniqueName: \"kubernetes.io/projected/5ebfebf6-3ecd-458e-943f-bb25b52e2718-kube-api-access-l87hs\") on node \"crc\" DevicePath \"\"" Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.735348 5123 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-zsb9b\" (UniqueName: \"kubernetes.io/projected/09cfa50b-4138-4585-a53e-64dd3ab73335-kube-api-access-zsb9b\") on node \"crc\" DevicePath \"\"" Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.735337 5123 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "6edfcf45-925b-4eff-b940-95b6fc0b85d4" (UID: "6edfcf45-925b-4eff-b940-95b6fc0b85d4"). InnerVolumeSpecName "v4-0-config-user-template-login". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.735361 5123 reconciler_common.go:299] "Volume detached for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/18f80adb-c1c3-49ba-8ee4-932c851d3897-stats-auth\") on node \"crc\" DevicePath \"\"" Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.735458 5123 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-xnxbn\" (UniqueName: \"kubernetes.io/projected/ce090a97-9ab6-4c40-a719-64ff2acd9778-kube-api-access-xnxbn\") on node \"crc\" DevicePath \"\"" Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.735699 5123 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-d4tqq\" (UniqueName: \"kubernetes.io/projected/6ee8fbd3-1f81-4666-96da-5afc70819f1a-kube-api-access-d4tqq\") on node \"crc\" DevicePath \"\"" Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.735716 5123 reconciler_common.go:299] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-console-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.735728 5123 reconciler_common.go:299] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/f559dfa3-3917-43a2-97f6-61ddfda10e93-encryption-config\") on node \"crc\" DevicePath \"\"" Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.735768 5123 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-config\") on node \"crc\" DevicePath \"\"" Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.735782 5123 reconciler_common.go:299] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-audit-policies\") on node \"crc\" DevicePath \"\"" Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 
15:21:00.735795 5123 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7599e0b6-bddf-4def-b7f2-0b32206e8651-config\") on node \"crc\" DevicePath \"\"" Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.735810 5123 reconciler_common.go:299] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/9f71a554-e414-4bc3-96d2-674060397afe-metrics-tls\") on node \"crc\" DevicePath \"\"" Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.735845 5123 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-l9stx\" (UniqueName: \"kubernetes.io/projected/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-kube-api-access-l9stx\") on node \"crc\" DevicePath \"\"" Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.735871 5123 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2325ffef-9d5b-447f-b00e-3efc429acefe-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.735883 5123 reconciler_common.go:299] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-srv-cert\") on node \"crc\" DevicePath \"\"" Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.735897 5123 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c5f2bfad-70f6-4185-a3d9-81ce12720767-config\") on node \"crc\" DevicePath \"\"" Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.735937 5123 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-zg8nc\" (UniqueName: \"kubernetes.io/projected/2325ffef-9d5b-447f-b00e-3efc429acefe-kube-api-access-zg8nc\") on node \"crc\" DevicePath \"\"" Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.735951 5123 reconciler_common.go:299] "Volume detached for volume \"machine-approver-tls\" (UniqueName: 
\"kubernetes.io/secret/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-machine-approver-tls\") on node \"crc\" DevicePath \"\"" Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.735969 5123 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff-utilities\") on node \"crc\" DevicePath \"\"" Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.735981 5123 reconciler_common.go:299] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/18f80adb-c1c3-49ba-8ee4-932c851d3897-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.735995 5123 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory" Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.736020 5123 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a555ff2e-0be6-46d5-897d-863bb92ae2b3-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.736033 5123 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/31fa8943-81cc-4750-a0b7-0fa9ab5af883-utilities\") on node \"crc\" DevicePath \"\"" Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.736045 5123 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.736097 5123 reconciler_common.go:299] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-marketplace-operator-metrics\") on 
node \"crc\" DevicePath \"\"" Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.736111 5123 reconciler_common.go:299] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/7df94c10-441d-4386-93a6-6730fb7bcde0-ovnkube-config\") on node \"crc\" DevicePath \"\"" Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.736123 5123 reconciler_common.go:299] "Volume detached for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/593a3561-7760-45c5-8f91-5aaef7475d0f-node-bootstrap-token\") on node \"crc\" DevicePath \"\"" Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.736134 5123 reconciler_common.go:299] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9e9b5059-1b3e-4067-a63d-2952cbe863af-trusted-ca\") on node \"crc\" DevicePath \"\"" Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.736171 5123 reconciler_common.go:299] "Volume detached for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/567683bd-0efc-4f21-b076-e28559628404-tmp-dir\") on node \"crc\" DevicePath \"\"" Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.736187 5123 reconciler_common.go:299] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f65c0ac1-8bca-454d-a2e6-e35cb418beac-kube-api-access\") on node \"crc\" DevicePath \"\"" Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.736199 5123 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/584e1f4a-8205-47d7-8efb-3afc6017c4c9-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.736211 5123 reconciler_common.go:299] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d7e8f42f-dc0e-424b-bb56-5ec849834888-kube-api-access\") on node \"crc\" DevicePath \"\"" Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.736320 5123 reconciler_common.go:299] 
"Volume detached for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/567683bd-0efc-4f21-b076-e28559628404-etcd-ca\") on node \"crc\" DevicePath \"\"" Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.736333 5123 reconciler_common.go:299] "Volume detached for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/7afa918d-be67-40a6-803c-d3b0ae99d815-tmp\") on node \"crc\" DevicePath \"\"" Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.736346 5123 reconciler_common.go:299] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/d565531a-ff86-4608-9d19-767de01ac31b-proxy-tls\") on node \"crc\" DevicePath \"\"" Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.736359 5123 reconciler_common.go:299] "Volume detached for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/a7a88189-c967-4640-879e-27665747f20c-apiservice-cert\") on node \"crc\" DevicePath \"\"" Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.736371 5123 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7afa918d-be67-40a6-803c-d3b0ae99d815-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.736404 5123 reconciler_common.go:299] "Volume detached for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/301e1965-1754-483d-b6cc-bfae7038bbca-tmpfs\") on node \"crc\" DevicePath \"\"" Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.736420 5123 reconciler_common.go:299] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/af33e427-6803-48c2-a76a-dd9deb7cbf9a-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.736433 5123 reconciler_common.go:299] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/18f80adb-c1c3-49ba-8ee4-932c851d3897-metrics-certs\") on node \"crc\" DevicePath \"\"" Dec 12 15:21:00 
crc kubenswrapper[5123]: I1212 15:21:00.736446 5123 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-qgrkj\" (UniqueName: \"kubernetes.io/projected/42a11a02-47e1-488f-b270-2679d3298b0e-kube-api-access-qgrkj\") on node \"crc\" DevicePath \"\"" Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.736483 5123 reconciler_common.go:299] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/81e39f7b-62e4-4fc9-992a-6535ce127a02-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.736500 5123 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-wbmqg\" (UniqueName: \"kubernetes.io/projected/18f80adb-c1c3-49ba-8ee4-932c851d3897-kube-api-access-wbmqg\") on node \"crc\" DevicePath \"\"" Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.736514 5123 reconciler_common.go:299] "Volume detached for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/92dfbade-90b6-4169-8c07-72cff7f2c82b-tmp-dir\") on node \"crc\" DevicePath \"\"" Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.736528 5123 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-8nspp\" (UniqueName: \"kubernetes.io/projected/a7a88189-c967-4640-879e-27665747f20c-kube-api-access-8nspp\") on node \"crc\" DevicePath \"\"" Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.736540 5123 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/149b3c48-e17c-4a66-a835-d86dabf6ff13-utilities\") on node \"crc\" DevicePath \"\"" Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.736555 5123 reconciler_common.go:299] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/9e9b5059-1b3e-4067-a63d-2952cbe863af-bound-sa-token\") on node \"crc\" DevicePath \"\"" Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.736591 5123 reconciler_common.go:299] "Volume detached for 
volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-console-oauth-config\") on node \"crc\" DevicePath \"\"" Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.736607 5123 reconciler_common.go:299] "Volume detached for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-image-import-ca\") on node \"crc\" DevicePath \"\"" Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.736620 5123 reconciler_common.go:299] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/d565531a-ff86-4608-9d19-767de01ac31b-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.736633 5123 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/584e1f4a-8205-47d7-8efb-3afc6017c4c9-utilities\") on node \"crc\" DevicePath \"\"" Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.736644 5123 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/736c54fe-349c-4bb9-870a-d1c1d1c03831-config\") on node \"crc\" DevicePath \"\"" Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.736660 5123 reconciler_common.go:299] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/92dfbade-90b6-4169-8c07-72cff7f2c82b-config-volume\") on node \"crc\" DevicePath \"\"" Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.736672 5123 reconciler_common.go:299] "Volume detached for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/c5f2bfad-70f6-4185-a3d9-81ce12720767-tmp-dir\") on node \"crc\" DevicePath \"\"" Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.736685 5123 reconciler_common.go:299] "Volume detached for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/736c54fe-349c-4bb9-870a-d1c1d1c03831-tmp\") on node \"crc\" DevicePath \"\"" Dec 12 15:21:00 
crc kubenswrapper[5123]: I1212 15:21:00.736723 5123 reconciler_common.go:299] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/9e9b5059-1b3e-4067-a63d-2952cbe863af-registry-tls\") on node \"crc\" DevicePath \"\"" Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.736742 5123 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-6rmnv\" (UniqueName: \"kubernetes.io/projected/b605f283-6f2e-42da-a838-54421690f7d0-kube-api-access-6rmnv\") on node \"crc\" DevicePath \"\"" Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.736757 5123 reconciler_common.go:299] "Volume detached for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/c491984c-7d4b-44aa-8c1e-d7974424fa47-machine-api-operator-tls\") on node \"crc\" DevicePath \"\"" Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.736770 5123 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-5lcfw\" (UniqueName: \"kubernetes.io/projected/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-kube-api-access-5lcfw\") on node \"crc\" DevicePath \"\"" Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.736821 5123 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-wj4qr\" (UniqueName: \"kubernetes.io/projected/149b3c48-e17c-4a66-a835-d86dabf6ff13-kube-api-access-wj4qr\") on node \"crc\" DevicePath \"\"" Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.736835 5123 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cc85e424-18b2-4924-920b-bd291a8c4b01-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.736848 5123 reconciler_common.go:299] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.736861 5123 
reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c491984c-7d4b-44aa-8c1e-d7974424fa47-config\") on node \"crc\" DevicePath \"\"" Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.736874 5123 reconciler_common.go:299] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-bound-sa-token\") on node \"crc\" DevicePath \"\"" Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.736948 5123 reconciler_common.go:299] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.736976 5123 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09cfa50b-4138-4585-a53e-64dd3ab73335-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.736992 5123 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.737017 5123 reconciler_common.go:299] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/d19cb085-0c5b-4810-b654-ce7923221d90-etcd-client\") on node \"crc\" DevicePath \"\"" Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.737032 5123 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-hckvg\" (UniqueName: \"kubernetes.io/projected/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-kube-api-access-hckvg\") on node \"crc\" DevicePath \"\"" Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.737046 5123 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-zth6t\" (UniqueName: 
\"kubernetes.io/projected/6077b63e-53a2-4f96-9d56-1ce0324e4913-kube-api-access-zth6t\") on node \"crc\" DevicePath \"\"" Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.737058 5123 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-8nb9c\" (UniqueName: \"kubernetes.io/projected/6edfcf45-925b-4eff-b940-95b6fc0b85d4-kube-api-access-8nb9c\") on node \"crc\" DevicePath \"\"" Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.737071 5123 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b605f283-6f2e-42da-a838-54421690f7d0-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.737085 5123 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.737097 5123 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-nmmzf\" (UniqueName: \"kubernetes.io/projected/7df94c10-441d-4386-93a6-6730fb7bcde0-kube-api-access-nmmzf\") on node \"crc\" DevicePath \"\"" Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.737109 5123 reconciler_common.go:299] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/6077b63e-53a2-4f96-9d56-1ce0324e4913-metrics-tls\") on node \"crc\" DevicePath \"\"" Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.737123 5123 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-tkdh6\" (UniqueName: \"kubernetes.io/projected/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-kube-api-access-tkdh6\") on node \"crc\" DevicePath \"\"" Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.737135 5123 reconciler_common.go:299] "Volume detached for volume \"metrics-certs\" (UniqueName: 
\"kubernetes.io/secret/f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4-metrics-certs\") on node \"crc\" DevicePath \"\"" Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.737147 5123 reconciler_common.go:299] "Volume detached for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/a555ff2e-0be6-46d5-897d-863bb92ae2b3-tmp\") on node \"crc\" DevicePath \"\"" Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.737160 5123 reconciler_common.go:299] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/2325ffef-9d5b-447f-b00e-3efc429acefe-trusted-ca\") on node \"crc\" DevicePath \"\"" Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.737173 5123 reconciler_common.go:299] "Volume detached for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/18f80adb-c1c3-49ba-8ee4-932c851d3897-default-certificate\") on node \"crc\" DevicePath \"\"" Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.737184 5123 reconciler_common.go:299] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/b4750666-1362-4001-abd0-6f89964cc621-proxy-tls\") on node \"crc\" DevicePath \"\"" Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.737346 5123 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7afa918d-be67-40a6-803c-d3b0ae99d815-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "7afa918d-be67-40a6-803c-d3b0ae99d815" (UID: "7afa918d-be67-40a6-803c-d3b0ae99d815"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.738120 5123 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a555ff2e-0be6-46d5-897d-863bb92ae2b3-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "a555ff2e-0be6-46d5-897d-863bb92ae2b3" (UID: "a555ff2e-0be6-46d5-897d-863bb92ae2b3"). InnerVolumeSpecName "proxy-ca-bundles". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.742817 5123 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/42a11a02-47e1-488f-b270-2679d3298b0e-control-plane-machine-set-operator-tls" (OuterVolumeSpecName: "control-plane-machine-set-operator-tls") pod "42a11a02-47e1-488f-b270-2679d3298b0e" (UID: "42a11a02-47e1-488f-b270-2679d3298b0e"). InnerVolumeSpecName "control-plane-machine-set-operator-tls". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.742837 5123 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cc85e424-18b2-4924-920b-bd291a8c4b01-kube-api-access-xfp5s" (OuterVolumeSpecName: "kube-api-access-xfp5s") pod "cc85e424-18b2-4924-920b-bd291a8c4b01" (UID: "cc85e424-18b2-4924-920b-bd291a8c4b01"). InnerVolumeSpecName "kube-api-access-xfp5s". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.743022 5123 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e093be35-bb62-4843-b2e8-094545761610-kube-api-access-pddnv" (OuterVolumeSpecName: "kube-api-access-pddnv") pod "e093be35-bb62-4843-b2e8-094545761610" (UID: "e093be35-bb62-4843-b2e8-094545761610"). InnerVolumeSpecName "kube-api-access-pddnv". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.743581 5123 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f559dfa3-3917-43a2-97f6-61ddfda10e93-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "f559dfa3-3917-43a2-97f6-61ddfda10e93" (UID: "f559dfa3-3917-43a2-97f6-61ddfda10e93"). InnerVolumeSpecName "etcd-serving-ca". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 12 15:21:00 crc kubenswrapper[5123]: E1212 15:21:00.744164 5123 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 12 15:21:00 crc kubenswrapper[5123]: E1212 15:21:00.744301 5123 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 12 15:21:00 crc kubenswrapper[5123]: E1212 15:21:00.744324 5123 projected.go:194] Error preparing data for projected volume kube-api-access-l7w75 for pod openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 12 15:21:00 crc kubenswrapper[5123]: E1212 15:21:00.744457 5123 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75 podName:f863fff9-286a-45fa-b8f0-8a86994b8440 nodeName:}" failed. No retries permitted until 2025-12-12 15:21:01.244427403 +0000 UTC m=+90.054379914 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-l7w75" (UniqueName: "kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75") pod "network-check-source-5bb8f5cd97-xdvz5" (UID: "f863fff9-286a-45fa-b8f0-8a86994b8440") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.745429 5123 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "7fcc6409-8a0f-44c3-89e7-5aecd7610f8a" (UID: "7fcc6409-8a0f-44c3-89e7-5aecd7610f8a"). InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.745742 5123 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b4750666-1362-4001-abd0-6f89964cc621-mcc-auth-proxy-config" (OuterVolumeSpecName: "mcc-auth-proxy-config") pod "b4750666-1362-4001-abd0-6f89964cc621" (UID: "b4750666-1362-4001-abd0-6f89964cc621"). InnerVolumeSpecName "mcc-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.747409 5123 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f65c0ac1-8bca-454d-a2e6-e35cb418beac-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "f65c0ac1-8bca-454d-a2e6-e35cb418beac" (UID: "f65c0ac1-8bca-454d-a2e6-e35cb418beac"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.747477 5123 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/34177974-8d82-49d2-a763-391d0df3bbd8-metrics-tls\") pod \"network-operator-7bdcf4f5bd-7fjxv\" (UID: \"34177974-8d82-49d2-a763-391d0df3bbd8\") " pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.747696 5123 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-m7xz2\" (UniqueName: \"kubernetes.io/projected/34177974-8d82-49d2-a763-391d0df3bbd8-kube-api-access-m7xz2\") pod \"network-operator-7bdcf4f5bd-7fjxv\" (UID: \"34177974-8d82-49d2-a763-391d0df3bbd8\") " pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.755318 5123 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/fc4541ce-7789-4670-bc75-5c2868e52ce0-webhook-cert\") pod \"network-node-identity-dgvkt\" (UID: \"fc4541ce-7789-4670-bc75-5c2868e52ce0\") " pod="openshift-network-node-identity/network-node-identity-dgvkt" Dec 12 15:21:00 crc kubenswrapper[5123]: E1212 15:21:00.755509 5123 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 12 15:21:00 crc kubenswrapper[5123]: E1212 15:21:00.755643 5123 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 12 15:21:00 crc kubenswrapper[5123]: E1212 15:21:00.755663 5123 projected.go:194] Error preparing data for projected volume kube-api-access-gwt8b for pod openshift-network-diagnostics/network-check-target-fhkjl: [object 
"openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.755690 5123 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f863fff9-286a-45fa-b8f0-8a86994b8440\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T15:21:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T15:21:00Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l7w75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-5bb8f5cd97-xdvz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 12 15:21:00 crc kubenswrapper[5123]: E1212 15:21:00.755762 5123 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b podName:17b87002-b798-480a-8e17-83053d698239 nodeName:}" failed. No retries permitted until 2025-12-12 15:21:01.255733156 +0000 UTC m=+90.065685667 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-gwt8b" (UniqueName: "kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b") pod "network-check-target-fhkjl" (UID: "17b87002-b798-480a-8e17-83053d698239") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.757989 5123 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9f71a554-e414-4bc3-96d2-674060397afe-kube-api-access-ftwb6" (OuterVolumeSpecName: "kube-api-access-ftwb6") pod "9f71a554-e414-4bc3-96d2-674060397afe" (UID: "9f71a554-e414-4bc3-96d2-674060397afe"). 
InnerVolumeSpecName "kube-api-access-ftwb6". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.758082 5123 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a" (UID: "71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.758888 5123 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/567683bd-0efc-4f21-b076-e28559628404-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "567683bd-0efc-4f21-b076-e28559628404" (UID: "567683bd-0efc-4f21-b076-e28559628404"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.759028 5123 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d19cb085-0c5b-4810-b654-ce7923221d90-kube-api-access-m5lgh" (OuterVolumeSpecName: "kube-api-access-m5lgh") pod "d19cb085-0c5b-4810-b654-ce7923221d90" (UID: "d19cb085-0c5b-4810-b654-ce7923221d90"). InnerVolumeSpecName "kube-api-access-m5lgh". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.763346 5123 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "6edfcf45-925b-4eff-b940-95b6fc0b85d4" (UID: "6edfcf45-925b-4eff-b940-95b6fc0b85d4"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.763729 5123 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/593a3561-7760-45c5-8f91-5aaef7475d0f-certs" (OuterVolumeSpecName: "certs") pod "593a3561-7760-45c5-8f91-5aaef7475d0f" (UID: "593a3561-7760-45c5-8f91-5aaef7475d0f"). InnerVolumeSpecName "certs". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.763984 5123 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-8nt2j\" (UniqueName: \"kubernetes.io/projected/fc4541ce-7789-4670-bc75-5c2868e52ce0-kube-api-access-8nt2j\") pod \"network-node-identity-dgvkt\" (UID: \"fc4541ce-7789-4670-bc75-5c2868e52ce0\") " pod="openshift-network-node-identity/network-node-identity-dgvkt" Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.764581 5123 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-dsgwk\" (UniqueName: \"kubernetes.io/projected/428b39f5-eb1c-4f65-b7a4-eeb6e84860cc-kube-api-access-dsgwk\") pod \"iptables-alerter-5jnd7\" (UID: \"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\") " pod="openshift-network-operator/iptables-alerter-5jnd7" Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.767626 5123 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/149b3c48-e17c-4a66-a835-d86dabf6ff13-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "149b3c48-e17c-4a66-a835-d86dabf6ff13" (UID: "149b3c48-e17c-4a66-a835-d86dabf6ff13"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.770173 5123 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/567683bd-0efc-4f21-b076-e28559628404-etcd-service-ca" (OuterVolumeSpecName: "etcd-service-ca") pod "567683bd-0efc-4f21-b076-e28559628404" (UID: "567683bd-0efc-4f21-b076-e28559628404"). InnerVolumeSpecName "etcd-service-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.770323 5123 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7df94c10-441d-4386-93a6-6730fb7bcde0-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "7df94c10-441d-4386-93a6-6730fb7bcde0" (UID: "7df94c10-441d-4386-93a6-6730fb7bcde0"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.771633 5123 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-fhkjl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"17b87002-b798-480a-8e17-83053d698239\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T15:21:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T15:21:00Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwt8b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-fhkjl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.778410 5123 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/31fa8943-81cc-4750-a0b7-0fa9ab5af883-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "31fa8943-81cc-4750-a0b7-0fa9ab5af883" (UID: "31fa8943-81cc-4750-a0b7-0fa9ab5af883"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.787206 5123 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-dgvkt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fc4541ce-7789-4670-bc75-5c2868e52ce0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T15:21:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T15:21:00Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-dgvkt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: 
connect: connection refused" Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.799180 5123 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9e9b5059-1b3e-4067-a63d-2952cbe863af-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af"). InnerVolumeSpecName "ca-trust-extracted". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.799345 5123 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-5jnd7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T15:21:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T15:21:00Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dsgwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-5jnd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.805695 5123 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/94a6e063-3d1a-4d44-875d-185291448c31-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "94a6e063-3d1a-4d44-875d-185291448c31" (UID: "94a6e063-3d1a-4d44-875d-185291448c31"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.811530 5123 status_manager.go:919] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-kbx8c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2d82c231-80e9-4268-8ec7-1ae260abe06c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:21:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:21:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:21:00Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:21:00Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n476s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n476s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T15:21:00Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-57b78d8988-kbx8c\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.814733 5123 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-dgvkt" Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.825865 5123 status_manager.go:919] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-cs4j6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cc1dd55e-ab31-4be3-acf1-ef01a53c1bb4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:21:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:21:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:21:00Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:21:00Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jrl4z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jrl4z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T15:21:00Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-cs4j6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.828515 5123 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.828680 5123 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.828698 5123 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.828722 5123 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.828744 5123 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:21:00Z","lastTransitionTime":"2025-12-12T15:21:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.837699 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/3ef15793-fa49-4c37-a355-d4573977e301-host-run-netns\") pod \"multus-27rm2\" (UID: \"3ef15793-fa49-4c37-a355-d4573977e301\") " pod="openshift-multus/multus-27rm2" Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.837760 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/3ef15793-fa49-4c37-a355-d4573977e301-host-run-multus-certs\") pod \"multus-27rm2\" (UID: \"3ef15793-fa49-4c37-a355-d4573977e301\") " pod="openshift-multus/multus-27rm2" Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.837785 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/4ba336c2-0d9e-485a-9785-761f97f2601a-host-cni-netd\") pod \"ovnkube-node-c7cpz\" (UID: \"4ba336c2-0d9e-485a-9785-761f97f2601a\") " pod="openshift-ovn-kubernetes/ovnkube-node-c7cpz" Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.837803 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/e6c3a697-51e4-44dd-a38c-3287db85ce50-metrics-certs\") pod \"network-metrics-daemon-hmprz\" (UID: \"e6c3a697-51e4-44dd-a38c-3287db85ce50\") " pod="openshift-multus/network-metrics-daemon-hmprz" Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.837888 5123 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/4ba336c2-0d9e-485a-9785-761f97f2601a-host-cni-netd\") pod \"ovnkube-node-c7cpz\" (UID: \"4ba336c2-0d9e-485a-9785-761f97f2601a\") " pod="openshift-ovn-kubernetes/ovnkube-node-c7cpz" Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.838010 5123 
reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/3ef15793-fa49-4c37-a355-d4573977e301-host-var-lib-kubelet\") pod \"multus-27rm2\" (UID: \"3ef15793-fa49-4c37-a355-d4573977e301\") " pod="openshift-multus/multus-27rm2" Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.838075 5123 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/3ef15793-fa49-4c37-a355-d4573977e301-host-var-lib-kubelet\") pod \"multus-27rm2\" (UID: \"3ef15793-fa49-4c37-a355-d4573977e301\") " pod="openshift-multus/multus-27rm2" Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.837982 5123 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/3ef15793-fa49-4c37-a355-d4573977e301-host-run-netns\") pod \"multus-27rm2\" (UID: \"3ef15793-fa49-4c37-a355-d4573977e301\") " pod="openshift-multus/multus-27rm2" Dec 12 15:21:00 crc kubenswrapper[5123]: E1212 15:21:00.838151 5123 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.838165 5123 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/3ef15793-fa49-4c37-a355-d4573977e301-host-run-multus-certs\") pod \"multus-27rm2\" (UID: \"3ef15793-fa49-4c37-a355-d4573977e301\") " pod="openshift-multus/multus-27rm2" Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.838085 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/4ba336c2-0d9e-485a-9785-761f97f2601a-systemd-units\") pod \"ovnkube-node-c7cpz\" (UID: \"4ba336c2-0d9e-485a-9785-761f97f2601a\") " pod="openshift-ovn-kubernetes/ovnkube-node-c7cpz" Dec 12 
15:21:00 crc kubenswrapper[5123]: E1212 15:21:00.838263 5123 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e6c3a697-51e4-44dd-a38c-3287db85ce50-metrics-certs podName:e6c3a697-51e4-44dd-a38c-3287db85ce50 nodeName:}" failed. No retries permitted until 2025-12-12 15:21:01.338234333 +0000 UTC m=+90.148186844 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/e6c3a697-51e4-44dd-a38c-3287db85ce50-metrics-certs") pod "network-metrics-daemon-hmprz" (UID: "e6c3a697-51e4-44dd-a38c-3287db85ce50") : object "openshift-multus"/"metrics-daemon-secret" not registered Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.838156 5123 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/4ba336c2-0d9e-485a-9785-761f97f2601a-systemd-units\") pod \"ovnkube-node-c7cpz\" (UID: \"4ba336c2-0d9e-485a-9785-761f97f2601a\") " pod="openshift-ovn-kubernetes/ovnkube-node-c7cpz" Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.838228 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/4ba336c2-0d9e-485a-9785-761f97f2601a-node-log\") pod \"ovnkube-node-c7cpz\" (UID: \"4ba336c2-0d9e-485a-9785-761f97f2601a\") " pod="openshift-ovn-kubernetes/ovnkube-node-c7cpz" Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.838499 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/cc1dd55e-ab31-4be3-acf1-ef01a53c1bb4-mcd-auth-proxy-config\") pod \"machine-config-daemon-cs4j6\" (UID: \"cc1dd55e-ab31-4be3-acf1-ef01a53c1bb4\") " pod="openshift-machine-config-operator/machine-config-daemon-cs4j6" Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.838509 5123 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: 
\"kubernetes.io/host-path/4ba336c2-0d9e-485a-9785-761f97f2601a-node-log\") pod \"ovnkube-node-c7cpz\" (UID: \"4ba336c2-0d9e-485a-9785-761f97f2601a\") " pod="openshift-ovn-kubernetes/ovnkube-node-c7cpz" Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.838539 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/3ef15793-fa49-4c37-a355-d4573977e301-system-cni-dir\") pod \"multus-27rm2\" (UID: \"3ef15793-fa49-4c37-a355-d4573977e301\") " pod="openshift-multus/multus-27rm2" Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.838568 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-2bb4w\" (UniqueName: \"kubernetes.io/projected/3ef15793-fa49-4c37-a355-d4573977e301-kube-api-access-2bb4w\") pod \"multus-27rm2\" (UID: \"3ef15793-fa49-4c37-a355-d4573977e301\") " pod="openshift-multus/multus-27rm2" Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.838605 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/4ba336c2-0d9e-485a-9785-761f97f2601a-host-run-netns\") pod \"ovnkube-node-c7cpz\" (UID: \"4ba336c2-0d9e-485a-9785-761f97f2601a\") " pod="openshift-ovn-kubernetes/ovnkube-node-c7cpz" Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.838628 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-fp998\" (UniqueName: \"kubernetes.io/projected/e6c3a697-51e4-44dd-a38c-3287db85ce50-kube-api-access-fp998\") pod \"network-metrics-daemon-hmprz\" (UID: \"e6c3a697-51e4-44dd-a38c-3287db85ce50\") " pod="openshift-multus/network-metrics-daemon-hmprz" Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.838653 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-gffd7\" (UniqueName: 
\"kubernetes.io/projected/7c008d9e-3c97-4fcc-a448-c3c34829b24a-kube-api-access-gffd7\") pod \"node-resolver-lvztx\" (UID: \"7c008d9e-3c97-4fcc-a448-c3c34829b24a\") " pod="openshift-dns/node-resolver-lvztx" Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.838688 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/2d82c231-80e9-4268-8ec7-1ae260abe06c-env-overrides\") pod \"ovnkube-control-plane-57b78d8988-kbx8c\" (UID: \"2d82c231-80e9-4268-8ec7-1ae260abe06c\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-kbx8c" Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.838754 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/5fa0afe2-e3d6-43e2-8e27-ea16e1f45dea-cni-binary-copy\") pod \"multus-additional-cni-plugins-z24lm\" (UID: \"5fa0afe2-e3d6-43e2-8e27-ea16e1f45dea\") " pod="openshift-multus/multus-additional-cni-plugins-z24lm" Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.838780 5123 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/3ef15793-fa49-4c37-a355-d4573977e301-system-cni-dir\") pod \"multus-27rm2\" (UID: \"3ef15793-fa49-4c37-a355-d4573977e301\") " pod="openshift-multus/multus-27rm2" Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.838832 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/4ba336c2-0d9e-485a-9785-761f97f2601a-run-ovn\") pod \"ovnkube-node-c7cpz\" (UID: \"4ba336c2-0d9e-485a-9785-761f97f2601a\") " pod="openshift-ovn-kubernetes/ovnkube-node-c7cpz" Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.839006 5123 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: 
\"kubernetes.io/host-path/4ba336c2-0d9e-485a-9785-761f97f2601a-run-ovn\") pod \"ovnkube-node-c7cpz\" (UID: \"4ba336c2-0d9e-485a-9785-761f97f2601a\") " pod="openshift-ovn-kubernetes/ovnkube-node-c7cpz" Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.839289 5123 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/cc1dd55e-ab31-4be3-acf1-ef01a53c1bb4-mcd-auth-proxy-config\") pod \"machine-config-daemon-cs4j6\" (UID: \"cc1dd55e-ab31-4be3-acf1-ef01a53c1bb4\") " pod="openshift-machine-config-operator/machine-config-daemon-cs4j6" Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.839437 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/7c008d9e-3c97-4fcc-a448-c3c34829b24a-hosts-file\") pod \"node-resolver-lvztx\" (UID: \"7c008d9e-3c97-4fcc-a448-c3c34829b24a\") " pod="openshift-dns/node-resolver-lvztx" Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.839465 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/3ef15793-fa49-4c37-a355-d4573977e301-multus-socket-dir-parent\") pod \"multus-27rm2\" (UID: \"3ef15793-fa49-4c37-a355-d4573977e301\") " pod="openshift-multus/multus-27rm2" Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.839495 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/3ef15793-fa49-4c37-a355-d4573977e301-etc-kubernetes\") pod \"multus-27rm2\" (UID: \"3ef15793-fa49-4c37-a355-d4573977e301\") " pod="openshift-multus/multus-27rm2" Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.839516 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"whereabouts-flatfile-configmap\" (UniqueName: 
\"kubernetes.io/configmap/5fa0afe2-e3d6-43e2-8e27-ea16e1f45dea-whereabouts-flatfile-configmap\") pod \"multus-additional-cni-plugins-z24lm\" (UID: \"5fa0afe2-e3d6-43e2-8e27-ea16e1f45dea\") " pod="openshift-multus/multus-additional-cni-plugins-z24lm" Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.839543 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/428b39f5-eb1c-4f65-b7a4-eeb6e84860cc-host-slash\") pod \"iptables-alerter-5jnd7\" (UID: \"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\") " pod="openshift-network-operator/iptables-alerter-5jnd7" Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.839563 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/3ef15793-fa49-4c37-a355-d4573977e301-host-run-k8s-cni-cncf-io\") pod \"multus-27rm2\" (UID: \"3ef15793-fa49-4c37-a355-d4573977e301\") " pod="openshift-multus/multus-27rm2" Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.839580 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/5fa0afe2-e3d6-43e2-8e27-ea16e1f45dea-tuning-conf-dir\") pod \"multus-additional-cni-plugins-z24lm\" (UID: \"5fa0afe2-e3d6-43e2-8e27-ea16e1f45dea\") " pod="openshift-multus/multus-additional-cni-plugins-z24lm" Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.839625 5123 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/3ef15793-fa49-4c37-a355-d4573977e301-host-run-k8s-cni-cncf-io\") pod \"multus-27rm2\" (UID: \"3ef15793-fa49-4c37-a355-d4573977e301\") " pod="openshift-multus/multus-27rm2" Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.839649 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: 
\"kubernetes.io/configmap/5fa0afe2-e3d6-43e2-8e27-ea16e1f45dea-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-z24lm\" (UID: \"5fa0afe2-e3d6-43e2-8e27-ea16e1f45dea\") " pod="openshift-multus/multus-additional-cni-plugins-z24lm" Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.839669 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/4ba336c2-0d9e-485a-9785-761f97f2601a-ovnkube-script-lib\") pod \"ovnkube-node-c7cpz\" (UID: \"4ba336c2-0d9e-485a-9785-761f97f2601a\") " pod="openshift-ovn-kubernetes/ovnkube-node-c7cpz" Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.839697 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-l6nc7\" (UniqueName: \"kubernetes.io/projected/f45e8e58-84aa-4f67-b397-495c8339ce58-kube-api-access-l6nc7\") pod \"node-ca-rd8p2\" (UID: \"f45e8e58-84aa-4f67-b397-495c8339ce58\") " pod="openshift-image-registry/node-ca-rd8p2" Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.839715 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/4ba336c2-0d9e-485a-9785-761f97f2601a-host-run-ovn-kubernetes\") pod \"ovnkube-node-c7cpz\" (UID: \"4ba336c2-0d9e-485a-9785-761f97f2601a\") " pod="openshift-ovn-kubernetes/ovnkube-node-c7cpz" Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.839751 5123 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/3ef15793-fa49-4c37-a355-d4573977e301-etc-kubernetes\") pod \"multus-27rm2\" (UID: \"3ef15793-fa49-4c37-a355-d4573977e301\") " pod="openshift-multus/multus-27rm2" Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.839838 5123 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: 
\"kubernetes.io/host-path/4ba336c2-0d9e-485a-9785-761f97f2601a-host-run-netns\") pod \"ovnkube-node-c7cpz\" (UID: \"4ba336c2-0d9e-485a-9785-761f97f2601a\") " pod="openshift-ovn-kubernetes/ovnkube-node-c7cpz" Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.840070 5123 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/5fa0afe2-e3d6-43e2-8e27-ea16e1f45dea-tuning-conf-dir\") pod \"multus-additional-cni-plugins-z24lm\" (UID: \"5fa0afe2-e3d6-43e2-8e27-ea16e1f45dea\") " pod="openshift-multus/multus-additional-cni-plugins-z24lm" Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.840136 5123 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/3ef15793-fa49-4c37-a355-d4573977e301-multus-socket-dir-parent\") pod \"multus-27rm2\" (UID: \"3ef15793-fa49-4c37-a355-d4573977e301\") " pod="openshift-multus/multus-27rm2" Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.840331 5123 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/2d82c231-80e9-4268-8ec7-1ae260abe06c-env-overrides\") pod \"ovnkube-control-plane-57b78d8988-kbx8c\" (UID: \"2d82c231-80e9-4268-8ec7-1ae260abe06c\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-kbx8c" Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.840401 5123 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/7c008d9e-3c97-4fcc-a448-c3c34829b24a-hosts-file\") pod \"node-resolver-lvztx\" (UID: \"7c008d9e-3c97-4fcc-a448-c3c34829b24a\") " pod="openshift-dns/node-resolver-lvztx" Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.840477 5123 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"whereabouts-flatfile-configmap\" (UniqueName: 
\"kubernetes.io/configmap/5fa0afe2-e3d6-43e2-8e27-ea16e1f45dea-whereabouts-flatfile-configmap\") pod \"multus-additional-cni-plugins-z24lm\" (UID: \"5fa0afe2-e3d6-43e2-8e27-ea16e1f45dea\") " pod="openshift-multus/multus-additional-cni-plugins-z24lm" Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.840782 5123 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/4ba336c2-0d9e-485a-9785-761f97f2601a-host-run-ovn-kubernetes\") pod \"ovnkube-node-c7cpz\" (UID: \"4ba336c2-0d9e-485a-9785-761f97f2601a\") " pod="openshift-ovn-kubernetes/ovnkube-node-c7cpz" Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.840839 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/f45e8e58-84aa-4f67-b397-495c8339ce58-serviceca\") pod \"node-ca-rd8p2\" (UID: \"f45e8e58-84aa-4f67-b397-495c8339ce58\") " pod="openshift-image-registry/node-ca-rd8p2" Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.840883 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/3ef15793-fa49-4c37-a355-d4573977e301-os-release\") pod \"multus-27rm2\" (UID: \"3ef15793-fa49-4c37-a355-d4573977e301\") " pod="openshift-multus/multus-27rm2" Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.840924 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/5fa0afe2-e3d6-43e2-8e27-ea16e1f45dea-system-cni-dir\") pod \"multus-additional-cni-plugins-z24lm\" (UID: \"5fa0afe2-e3d6-43e2-8e27-ea16e1f45dea\") " pod="openshift-multus/multus-additional-cni-plugins-z24lm" Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.840976 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: 
\"kubernetes.io/host-path/f45e8e58-84aa-4f67-b397-495c8339ce58-host\") pod \"node-ca-rd8p2\" (UID: \"f45e8e58-84aa-4f67-b397-495c8339ce58\") " pod="openshift-image-registry/node-ca-rd8p2" Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.841005 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/3ef15793-fa49-4c37-a355-d4573977e301-host-var-lib-cni-bin\") pod \"multus-27rm2\" (UID: \"3ef15793-fa49-4c37-a355-d4573977e301\") " pod="openshift-multus/multus-27rm2" Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.841053 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/4ba336c2-0d9e-485a-9785-761f97f2601a-host-cni-bin\") pod \"ovnkube-node-c7cpz\" (UID: \"4ba336c2-0d9e-485a-9785-761f97f2601a\") " pod="openshift-ovn-kubernetes/ovnkube-node-c7cpz" Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.841082 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/4ba336c2-0d9e-485a-9785-761f97f2601a-ovnkube-config\") pod \"ovnkube-node-c7cpz\" (UID: \"4ba336c2-0d9e-485a-9785-761f97f2601a\") " pod="openshift-ovn-kubernetes/ovnkube-node-c7cpz" Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.841106 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/2d82c231-80e9-4268-8ec7-1ae260abe06c-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-57b78d8988-kbx8c\" (UID: \"2d82c231-80e9-4268-8ec7-1ae260abe06c\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-kbx8c" Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.841129 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: 
\"kubernetes.io/host-path/3ef15793-fa49-4c37-a355-d4573977e301-cnibin\") pod \"multus-27rm2\" (UID: \"3ef15793-fa49-4c37-a355-d4573977e301\") " pod="openshift-multus/multus-27rm2" Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.841153 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/3ef15793-fa49-4c37-a355-d4573977e301-hostroot\") pod \"multus-27rm2\" (UID: \"3ef15793-fa49-4c37-a355-d4573977e301\") " pod="openshift-multus/multus-27rm2" Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.841197 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/4ba336c2-0d9e-485a-9785-761f97f2601a-run-openvswitch\") pod \"ovnkube-node-c7cpz\" (UID: \"4ba336c2-0d9e-485a-9785-761f97f2601a\") " pod="openshift-ovn-kubernetes/ovnkube-node-c7cpz" Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.841232 5123 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/5fa0afe2-e3d6-43e2-8e27-ea16e1f45dea-cni-binary-copy\") pod \"multus-additional-cni-plugins-z24lm\" (UID: \"5fa0afe2-e3d6-43e2-8e27-ea16e1f45dea\") " pod="openshift-multus/multus-additional-cni-plugins-z24lm" Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.841284 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/4ba336c2-0d9e-485a-9785-761f97f2601a-env-overrides\") pod \"ovnkube-node-c7cpz\" (UID: \"4ba336c2-0d9e-485a-9785-761f97f2601a\") " pod="openshift-ovn-kubernetes/ovnkube-node-c7cpz" Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.841307 5123 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/4ba336c2-0d9e-485a-9785-761f97f2601a-ovnkube-script-lib\") pod \"ovnkube-node-c7cpz\" (UID: 
\"4ba336c2-0d9e-485a-9785-761f97f2601a\") " pod="openshift-ovn-kubernetes/ovnkube-node-c7cpz" Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.841317 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-jrl4z\" (UniqueName: \"kubernetes.io/projected/cc1dd55e-ab31-4be3-acf1-ef01a53c1bb4-kube-api-access-jrl4z\") pod \"machine-config-daemon-cs4j6\" (UID: \"cc1dd55e-ab31-4be3-acf1-ef01a53c1bb4\") " pod="openshift-machine-config-operator/machine-config-daemon-cs4j6" Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.841355 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/3ef15793-fa49-4c37-a355-d4573977e301-cni-binary-copy\") pod \"multus-27rm2\" (UID: \"3ef15793-fa49-4c37-a355-d4573977e301\") " pod="openshift-multus/multus-27rm2" Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.841396 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/3ef15793-fa49-4c37-a355-d4573977e301-multus-daemon-config\") pod \"multus-27rm2\" (UID: \"3ef15793-fa49-4c37-a355-d4573977e301\") " pod="openshift-multus/multus-27rm2" Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.841361 5123 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/428b39f5-eb1c-4f65-b7a4-eeb6e84860cc-host-slash\") pod \"iptables-alerter-5jnd7\" (UID: \"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\") " pod="openshift-network-operator/iptables-alerter-5jnd7" Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.841356 5123 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/3ef15793-fa49-4c37-a355-d4573977e301-hostroot\") pod \"multus-27rm2\" (UID: \"3ef15793-fa49-4c37-a355-d4573977e301\") " pod="openshift-multus/multus-27rm2" Dec 12 15:21:00 
crc kubenswrapper[5123]: I1212 15:21:00.841395 5123 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/f45e8e58-84aa-4f67-b397-495c8339ce58-host\") pod \"node-ca-rd8p2\" (UID: \"f45e8e58-84aa-4f67-b397-495c8339ce58\") " pod="openshift-image-registry/node-ca-rd8p2" Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.841791 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/5fa0afe2-e3d6-43e2-8e27-ea16e1f45dea-cnibin\") pod \"multus-additional-cni-plugins-z24lm\" (UID: \"5fa0afe2-e3d6-43e2-8e27-ea16e1f45dea\") " pod="openshift-multus/multus-additional-cni-plugins-z24lm" Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.841830 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-dfltr\" (UniqueName: \"kubernetes.io/projected/4ba336c2-0d9e-485a-9785-761f97f2601a-kube-api-access-dfltr\") pod \"ovnkube-node-c7cpz\" (UID: \"4ba336c2-0d9e-485a-9785-761f97f2601a\") " pod="openshift-ovn-kubernetes/ovnkube-node-c7cpz" Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.841857 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/7c008d9e-3c97-4fcc-a448-c3c34829b24a-tmp-dir\") pod \"node-resolver-lvztx\" (UID: \"7c008d9e-3c97-4fcc-a448-c3c34829b24a\") " pod="openshift-dns/node-resolver-lvztx" Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.841880 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/2d82c231-80e9-4268-8ec7-1ae260abe06c-ovnkube-config\") pod \"ovnkube-control-plane-57b78d8988-kbx8c\" (UID: \"2d82c231-80e9-4268-8ec7-1ae260abe06c\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-kbx8c" Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.841898 5123 
reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/3ef15793-fa49-4c37-a355-d4573977e301-host-var-lib-cni-multus\") pod \"multus-27rm2\" (UID: \"3ef15793-fa49-4c37-a355-d4573977e301\") " pod="openshift-multus/multus-27rm2" Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.841920 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/5fa0afe2-e3d6-43e2-8e27-ea16e1f45dea-os-release\") pod \"multus-additional-cni-plugins-z24lm\" (UID: \"5fa0afe2-e3d6-43e2-8e27-ea16e1f45dea\") " pod="openshift-multus/multus-additional-cni-plugins-z24lm" Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.841937 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-lsclv\" (UniqueName: \"kubernetes.io/projected/5fa0afe2-e3d6-43e2-8e27-ea16e1f45dea-kube-api-access-lsclv\") pod \"multus-additional-cni-plugins-z24lm\" (UID: \"5fa0afe2-e3d6-43e2-8e27-ea16e1f45dea\") " pod="openshift-multus/multus-additional-cni-plugins-z24lm" Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.842024 5123 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/5fa0afe2-e3d6-43e2-8e27-ea16e1f45dea-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-z24lm\" (UID: \"5fa0afe2-e3d6-43e2-8e27-ea16e1f45dea\") " pod="openshift-multus/multus-additional-cni-plugins-z24lm" Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.842048 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/4ba336c2-0d9e-485a-9785-761f97f2601a-host-kubelet\") pod \"ovnkube-node-c7cpz\" (UID: \"4ba336c2-0d9e-485a-9785-761f97f2601a\") " pod="openshift-ovn-kubernetes/ovnkube-node-c7cpz" Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.842072 
5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/4ba336c2-0d9e-485a-9785-761f97f2601a-run-systemd\") pod \"ovnkube-node-c7cpz\" (UID: \"4ba336c2-0d9e-485a-9785-761f97f2601a\") " pod="openshift-ovn-kubernetes/ovnkube-node-c7cpz" Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.842098 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/34177974-8d82-49d2-a763-391d0df3bbd8-host-etc-kube\") pod \"network-operator-7bdcf4f5bd-7fjxv\" (UID: \"34177974-8d82-49d2-a763-391d0df3bbd8\") " pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.842194 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-n476s\" (UniqueName: \"kubernetes.io/projected/2d82c231-80e9-4268-8ec7-1ae260abe06c-kube-api-access-n476s\") pod \"ovnkube-control-plane-57b78d8988-kbx8c\" (UID: \"2d82c231-80e9-4268-8ec7-1ae260abe06c\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-kbx8c" Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.842256 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/4ba336c2-0d9e-485a-9785-761f97f2601a-host-slash\") pod \"ovnkube-node-c7cpz\" (UID: \"4ba336c2-0d9e-485a-9785-761f97f2601a\") " pod="openshift-ovn-kubernetes/ovnkube-node-c7cpz" Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.842278 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/4ba336c2-0d9e-485a-9785-761f97f2601a-var-lib-openvswitch\") pod \"ovnkube-node-c7cpz\" (UID: \"4ba336c2-0d9e-485a-9785-761f97f2601a\") " pod="openshift-ovn-kubernetes/ovnkube-node-c7cpz" Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.842303 
5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/4ba336c2-0d9e-485a-9785-761f97f2601a-etc-openvswitch\") pod \"ovnkube-node-c7cpz\" (UID: \"4ba336c2-0d9e-485a-9785-761f97f2601a\") " pod="openshift-ovn-kubernetes/ovnkube-node-c7cpz" Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.842397 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/4ba336c2-0d9e-485a-9785-761f97f2601a-ovn-node-metrics-cert\") pod \"ovnkube-node-c7cpz\" (UID: \"4ba336c2-0d9e-485a-9785-761f97f2601a\") " pod="openshift-ovn-kubernetes/ovnkube-node-c7cpz" Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.842455 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/cc1dd55e-ab31-4be3-acf1-ef01a53c1bb4-proxy-tls\") pod \"machine-config-daemon-cs4j6\" (UID: \"cc1dd55e-ab31-4be3-acf1-ef01a53c1bb4\") " pod="openshift-machine-config-operator/machine-config-daemon-cs4j6" Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.842479 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/3ef15793-fa49-4c37-a355-d4573977e301-multus-cni-dir\") pod \"multus-27rm2\" (UID: \"3ef15793-fa49-4c37-a355-d4573977e301\") " pod="openshift-multus/multus-27rm2" Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.842523 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/3ef15793-fa49-4c37-a355-d4573977e301-multus-conf-dir\") pod \"multus-27rm2\" (UID: \"3ef15793-fa49-4c37-a355-d4573977e301\") " pod="openshift-multus/multus-27rm2" Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.842547 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume 
\"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/4ba336c2-0d9e-485a-9785-761f97f2601a-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-c7cpz\" (UID: \"4ba336c2-0d9e-485a-9785-761f97f2601a\") " pod="openshift-ovn-kubernetes/ovnkube-node-c7cpz" Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.842542 5123 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/5fa0afe2-e3d6-43e2-8e27-ea16e1f45dea-system-cni-dir\") pod \"multus-additional-cni-plugins-z24lm\" (UID: \"5fa0afe2-e3d6-43e2-8e27-ea16e1f45dea\") " pod="openshift-multus/multus-additional-cni-plugins-z24lm" Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.842604 5123 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/3ef15793-fa49-4c37-a355-d4573977e301-host-var-lib-cni-bin\") pod \"multus-27rm2\" (UID: \"3ef15793-fa49-4c37-a355-d4573977e301\") " pod="openshift-multus/multus-27rm2" Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.842617 5123 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/4ba336c2-0d9e-485a-9785-761f97f2601a-run-openvswitch\") pod \"ovnkube-node-c7cpz\" (UID: \"4ba336c2-0d9e-485a-9785-761f97f2601a\") " pod="openshift-ovn-kubernetes/ovnkube-node-c7cpz" Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.842655 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/cc1dd55e-ab31-4be3-acf1-ef01a53c1bb4-rootfs\") pod \"machine-config-daemon-cs4j6\" (UID: \"cc1dd55e-ab31-4be3-acf1-ef01a53c1bb4\") " pod="openshift-machine-config-operator/machine-config-daemon-cs4j6" Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.842652 5123 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" 
(UniqueName: \"kubernetes.io/host-path/4ba336c2-0d9e-485a-9785-761f97f2601a-host-cni-bin\") pod \"ovnkube-node-c7cpz\" (UID: \"4ba336c2-0d9e-485a-9785-761f97f2601a\") " pod="openshift-ovn-kubernetes/ovnkube-node-c7cpz" Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.842705 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/4ba336c2-0d9e-485a-9785-761f97f2601a-log-socket\") pod \"ovnkube-node-c7cpz\" (UID: \"4ba336c2-0d9e-485a-9785-761f97f2601a\") " pod="openshift-ovn-kubernetes/ovnkube-node-c7cpz" Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.842717 5123 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/34177974-8d82-49d2-a763-391d0df3bbd8-host-etc-kube\") pod \"network-operator-7bdcf4f5bd-7fjxv\" (UID: \"34177974-8d82-49d2-a763-391d0df3bbd8\") " pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.842750 5123 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/4ba336c2-0d9e-485a-9785-761f97f2601a-ovnkube-config\") pod \"ovnkube-node-c7cpz\" (UID: \"4ba336c2-0d9e-485a-9785-761f97f2601a\") " pod="openshift-ovn-kubernetes/ovnkube-node-c7cpz" Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.842766 5123 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/3ef15793-fa49-4c37-a355-d4573977e301-os-release\") pod \"multus-27rm2\" (UID: \"3ef15793-fa49-4c37-a355-d4573977e301\") " pod="openshift-multus/multus-27rm2" Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.842810 5123 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/4ba336c2-0d9e-485a-9785-761f97f2601a-var-lib-openvswitch\") pod 
\"ovnkube-node-c7cpz\" (UID: \"4ba336c2-0d9e-485a-9785-761f97f2601a\") " pod="openshift-ovn-kubernetes/ovnkube-node-c7cpz" Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.842851 5123 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/3ef15793-fa49-4c37-a355-d4573977e301-host-var-lib-cni-multus\") pod \"multus-27rm2\" (UID: \"3ef15793-fa49-4c37-a355-d4573977e301\") " pod="openshift-multus/multus-27rm2" Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.842864 5123 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/4ba336c2-0d9e-485a-9785-761f97f2601a-host-slash\") pod \"ovnkube-node-c7cpz\" (UID: \"4ba336c2-0d9e-485a-9785-761f97f2601a\") " pod="openshift-ovn-kubernetes/ovnkube-node-c7cpz" Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.842879 5123 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/4ba336c2-0d9e-485a-9785-761f97f2601a-etc-openvswitch\") pod \"ovnkube-node-c7cpz\" (UID: \"4ba336c2-0d9e-485a-9785-761f97f2601a\") " pod="openshift-ovn-kubernetes/ovnkube-node-c7cpz" Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.842913 5123 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/cc1dd55e-ab31-4be3-acf1-ef01a53c1bb4-rootfs\") pod \"machine-config-daemon-cs4j6\" (UID: \"cc1dd55e-ab31-4be3-acf1-ef01a53c1bb4\") " pod="openshift-machine-config-operator/machine-config-daemon-cs4j6" Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.842652 5123 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/4ba336c2-0d9e-485a-9785-761f97f2601a-run-systemd\") pod \"ovnkube-node-c7cpz\" (UID: \"4ba336c2-0d9e-485a-9785-761f97f2601a\") " pod="openshift-ovn-kubernetes/ovnkube-node-c7cpz" 
Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.842929 5123 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/5fa0afe2-e3d6-43e2-8e27-ea16e1f45dea-os-release\") pod \"multus-additional-cni-plugins-z24lm\" (UID: \"5fa0afe2-e3d6-43e2-8e27-ea16e1f45dea\") " pod="openshift-multus/multus-additional-cni-plugins-z24lm" Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.842970 5123 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/4ba336c2-0d9e-485a-9785-761f97f2601a-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-c7cpz\" (UID: \"4ba336c2-0d9e-485a-9785-761f97f2601a\") " pod="openshift-ovn-kubernetes/ovnkube-node-c7cpz" Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.842971 5123 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/4ba336c2-0d9e-485a-9785-761f97f2601a-host-kubelet\") pod \"ovnkube-node-c7cpz\" (UID: \"4ba336c2-0d9e-485a-9785-761f97f2601a\") " pod="openshift-ovn-kubernetes/ovnkube-node-c7cpz" Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.842988 5123 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/5fa0afe2-e3d6-43e2-8e27-ea16e1f45dea-cnibin\") pod \"multus-additional-cni-plugins-z24lm\" (UID: \"5fa0afe2-e3d6-43e2-8e27-ea16e1f45dea\") " pod="openshift-multus/multus-additional-cni-plugins-z24lm" Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.843139 5123 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/3ef15793-fa49-4c37-a355-d4573977e301-cnibin\") pod \"multus-27rm2\" (UID: \"3ef15793-fa49-4c37-a355-d4573977e301\") " pod="openshift-multus/multus-27rm2" Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.843186 5123 
operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/4ba336c2-0d9e-485a-9785-761f97f2601a-env-overrides\") pod \"ovnkube-node-c7cpz\" (UID: \"4ba336c2-0d9e-485a-9785-761f97f2601a\") " pod="openshift-ovn-kubernetes/ovnkube-node-c7cpz" Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.843271 5123 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/4ba336c2-0d9e-485a-9785-761f97f2601a-log-socket\") pod \"ovnkube-node-c7cpz\" (UID: \"4ba336c2-0d9e-485a-9785-761f97f2601a\") " pod="openshift-ovn-kubernetes/ovnkube-node-c7cpz" Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.843622 5123 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/3ef15793-fa49-4c37-a355-d4573977e301-multus-cni-dir\") pod \"multus-27rm2\" (UID: \"3ef15793-fa49-4c37-a355-d4573977e301\") " pod="openshift-multus/multus-27rm2" Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.844045 5123 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/3ef15793-fa49-4c37-a355-d4573977e301-multus-conf-dir\") pod \"multus-27rm2\" (UID: \"3ef15793-fa49-4c37-a355-d4573977e301\") " pod="openshift-multus/multus-27rm2" Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.844210 5123 reconciler_common.go:299] "Volume detached for volume \"certs\" (UniqueName: \"kubernetes.io/secret/593a3561-7760-45c5-8f91-5aaef7475d0f-certs\") on node \"crc\" DevicePath \"\"" Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.844264 5123 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/567683bd-0efc-4f21-b076-e28559628404-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.844285 5123 reconciler_common.go:299] "Volume 
detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/d19cb085-0c5b-4810-b654-ce7923221d90-encryption-config\") on node \"crc\" DevicePath \"\"" Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.844296 5123 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-config\") on node \"crc\" DevicePath \"\"" Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.844306 5123 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.844331 5123 reconciler_common.go:299] "Volume detached for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/b4750666-1362-4001-abd0-6f89964cc621-mcc-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.844352 5123 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.844368 5123 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-dztfv\" (UniqueName: \"kubernetes.io/projected/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-kube-api-access-dztfv\") on node \"crc\" DevicePath \"\"" Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.844386 5123 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-9vsz9\" (UniqueName: \"kubernetes.io/projected/c491984c-7d4b-44aa-8c1e-d7974424fa47-kube-api-access-9vsz9\") on node \"crc\" DevicePath \"\"" Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.844401 5123 reconciler_common.go:299] "Volume 
detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/a7a88189-c967-4640-879e-27665747f20c-webhook-cert\") on node \"crc\" DevicePath \"\"" Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.844415 5123 reconciler_common.go:299] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/a555ff2e-0be6-46d5-897d-863bb92ae2b3-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.844429 5123 reconciler_common.go:299] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-console-config\") on node \"crc\" DevicePath \"\"" Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.844444 5123 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-grwfz\" (UniqueName: \"kubernetes.io/projected/31fa8943-81cc-4750-a0b7-0fa9ab5af883-kube-api-access-grwfz\") on node \"crc\" DevicePath \"\"" Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.844456 5123 reconciler_common.go:299] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.844469 5123 reconciler_common.go:299] "Volume detached for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-audit\") on node \"crc\" DevicePath \"\"" Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.844502 5123 reconciler_common.go:299] "Volume detached for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/a208c9c2-333b-4b4a-be0d-bc32ec38a821-package-server-manager-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.844517 5123 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-xfp5s\" (UniqueName: 
\"kubernetes.io/projected/cc85e424-18b2-4924-920b-bd291a8c4b01-kube-api-access-xfp5s\") on node \"crc\" DevicePath \"\"" Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.844528 5123 reconciler_common.go:299] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/9f71a554-e414-4bc3-96d2-674060397afe-bound-sa-token\") on node \"crc\" DevicePath \"\"" Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.844537 5123 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c5f2bfad-70f6-4185-a3d9-81ce12720767-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.844546 5123 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09cfa50b-4138-4585-a53e-64dd3ab73335-config\") on node \"crc\" DevicePath \"\"" Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.844557 5123 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f65c0ac1-8bca-454d-a2e6-e35cb418beac-config\") on node \"crc\" DevicePath \"\"" Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.844572 5123 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/16bdd140-dce1-464c-ab47-dd5798d1d256-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.844607 5123 reconciler_common.go:299] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.844619 5123 reconciler_common.go:299] "Volume detached for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-tmp\") on node \"crc\" DevicePath \"\"" Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 
15:21:00.844633 5123 reconciler_common.go:299] "Volume detached for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/f7e2c886-118e-43bb-bef1-c78134de392b-tmp-dir\") on node \"crc\" DevicePath \"\"" Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.844645 5123 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-w94wk\" (UniqueName: \"kubernetes.io/projected/01080b46-74f1-4191-8755-5152a57b3b25-kube-api-access-w94wk\") on node \"crc\" DevicePath \"\"" Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.844658 5123 reconciler_common.go:299] "Volume detached for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/42a11a02-47e1-488f-b270-2679d3298b0e-control-plane-machine-set-operator-tls\") on node \"crc\" DevicePath \"\"" Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.844673 5123 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-twvbl\" (UniqueName: \"kubernetes.io/projected/b4750666-1362-4001-abd0-6f89964cc621-kube-api-access-twvbl\") on node \"crc\" DevicePath \"\"" Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.844686 5123 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-26xrl\" (UniqueName: \"kubernetes.io/projected/a208c9c2-333b-4b4a-be0d-bc32ec38a821-kube-api-access-26xrl\") on node \"crc\" DevicePath \"\"" Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.844699 5123 reconciler_common.go:299] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.844712 5123 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-ks6v2\" (UniqueName: \"kubernetes.io/projected/71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a-kube-api-access-ks6v2\") on node \"crc\" DevicePath \"\"" Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.844725 5123 
reconciler_common.go:299] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/c491984c-7d4b-44aa-8c1e-d7974424fa47-images\") on node \"crc\" DevicePath \"\"" Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.844737 5123 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/94a6e063-3d1a-4d44-875d-185291448c31-utilities\") on node \"crc\" DevicePath \"\"" Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.844752 5123 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01080b46-74f1-4191-8755-5152a57b3b25-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.844764 5123 reconciler_common.go:299] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.844791 5123 reconciler_common.go:299] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/f559dfa3-3917-43a2-97f6-61ddfda10e93-audit-policies\") on node \"crc\" DevicePath \"\"" Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.844808 5123 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b605f283-6f2e-42da-a838-54421690f7d0-utilities\") on node \"crc\" DevicePath \"\"" Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.844821 5123 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d7e8f42f-dc0e-424b-bb56-5ec849834888-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.844837 5123 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: 
\"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.844859 5123 reconciler_common.go:299] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/7afa918d-be67-40a6-803c-d3b0ae99d815-kube-api-access\") on node \"crc\" DevicePath \"\"" Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.844874 5123 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/149b3c48-e17c-4a66-a835-d86dabf6ff13-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.844899 5123 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-config\") on node \"crc\" DevicePath \"\"" Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.844913 5123 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f65c0ac1-8bca-454d-a2e6-e35cb418beac-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.844925 5123 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-m5lgh\" (UniqueName: \"kubernetes.io/projected/d19cb085-0c5b-4810-b654-ce7923221d90-kube-api-access-m5lgh\") on node \"crc\" DevicePath \"\"" Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.844939 5123 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-rzt4w\" (UniqueName: \"kubernetes.io/projected/a52afe44-fb37-46ed-a1f8-bf39727a3cbe-kube-api-access-rzt4w\") on node \"crc\" DevicePath \"\"" Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.844965 5123 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-99zj9\" (UniqueName: 
\"kubernetes.io/projected/d565531a-ff86-4608-9d19-767de01ac31b-kube-api-access-99zj9\") on node \"crc\" DevicePath \"\"" Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.844987 5123 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7599e0b6-bddf-4def-b7f2-0b32206e8651-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.845000 5123 reconciler_common.go:299] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.845015 5123 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-94l9h\" (UniqueName: \"kubernetes.io/projected/16bdd140-dce1-464c-ab47-dd5798d1d256-kube-api-access-94l9h\") on node \"crc\" DevicePath \"\"" Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.845028 5123 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/31fa8943-81cc-4750-a0b7-0fa9ab5af883-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.845039 5123 reconciler_common.go:299] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/7df94c10-441d-4386-93a6-6730fb7bcde0-env-overrides\") on node \"crc\" DevicePath \"\"" Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.845051 5123 reconciler_common.go:299] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/e1d2a42d-af1d-4054-9618-ab545e0ed8b7-proxy-tls\") on node \"crc\" DevicePath \"\"" Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.845064 5123 reconciler_common.go:299] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/869851b9-7ffb-4af0-b166-1d8aa40a5f80-cni-binary-copy\") on node \"crc\" DevicePath 
\"\"" Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.845076 5123 reconciler_common.go:299] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/af33e427-6803-48c2-a76a-dd9deb7cbf9a-ovnkube-config\") on node \"crc\" DevicePath \"\"" Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.845087 5123 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-ftwb6\" (UniqueName: \"kubernetes.io/projected/9f71a554-e414-4bc3-96d2-674060397afe-kube-api-access-ftwb6\") on node \"crc\" DevicePath \"\"" Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.845099 5123 reconciler_common.go:299] "Volume detached for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/567683bd-0efc-4f21-b076-e28559628404-etcd-service-ca\") on node \"crc\" DevicePath \"\"" Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.845112 5123 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-tknt7\" (UniqueName: \"kubernetes.io/projected/584e1f4a-8205-47d7-8efb-3afc6017c4c9-kube-api-access-tknt7\") on node \"crc\" DevicePath \"\"" Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.845128 5123 reconciler_common.go:299] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/301e1965-1754-483d-b6cc-bfae7038bbca-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.845145 5123 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-xxfcv\" (UniqueName: \"kubernetes.io/projected/9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff-kube-api-access-xxfcv\") on node \"crc\" DevicePath \"\"" Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.845158 5123 reconciler_common.go:299] "Volume detached for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-tmpfs\") on node \"crc\" DevicePath \"\"" Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.845174 5123 
reconciler_common.go:299] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-service-ca\") on node \"crc\" DevicePath \"\"" Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.845188 5123 reconciler_common.go:299] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/301e1965-1754-483d-b6cc-bfae7038bbca-srv-cert\") on node \"crc\" DevicePath \"\"" Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.845203 5123 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.845239 5123 reconciler_common.go:299] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/9e9b5059-1b3e-4067-a63d-2952cbe863af-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.845254 5123 reconciler_common.go:299] "Volume detached for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/e1d2a42d-af1d-4054-9618-ab545e0ed8b7-mcd-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.845267 5123 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/94a6e063-3d1a-4d44-875d-185291448c31-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.845283 5123 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-d7cps\" (UniqueName: \"kubernetes.io/projected/af41de71-79cf-4590-bbe9-9e8b848862cb-kube-api-access-d7cps\") on node \"crc\" DevicePath \"\"" Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.845307 5123 reconciler_common.go:299] "Volume detached for volume \"service-ca\" (UniqueName: 
\"kubernetes.io/configmap/d7e8f42f-dc0e-424b-bb56-5ec849834888-service-ca\") on node \"crc\" DevicePath \"\"" Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.845320 5123 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-pddnv\" (UniqueName: \"kubernetes.io/projected/e093be35-bb62-4843-b2e8-094545761610-kube-api-access-pddnv\") on node \"crc\" DevicePath \"\"" Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.845333 5123 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/567683bd-0efc-4f21-b076-e28559628404-config\") on node \"crc\" DevicePath \"\"" Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.845346 5123 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-sbc2l\" (UniqueName: \"kubernetes.io/projected/593a3561-7760-45c5-8f91-5aaef7475d0f-kube-api-access-sbc2l\") on node \"crc\" DevicePath \"\"" Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.845359 5123 reconciler_common.go:299] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/f559dfa3-3917-43a2-97f6-61ddfda10e93-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.845373 5123 reconciler_common.go:299] "Volume detached for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/5ebfebf6-3ecd-458e-943f-bb25b52e2718-serviceca\") on node \"crc\" DevicePath \"\"" Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.842155 5123 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/3ef15793-fa49-4c37-a355-d4573977e301-multus-daemon-config\") pod \"multus-27rm2\" (UID: \"3ef15793-fa49-4c37-a355-d4573977e301\") " pod="openshift-multus/multus-27rm2" Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.846736 5123 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: 
\"kubernetes.io/configmap/2d82c231-80e9-4268-8ec7-1ae260abe06c-ovnkube-config\") pod \"ovnkube-control-plane-57b78d8988-kbx8c\" (UID: \"2d82c231-80e9-4268-8ec7-1ae260abe06c\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-kbx8c" Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.848298 5123 status_manager.go:919] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3753a9bc-5b66-4cf6-b6a8-fab1c60998f5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:19:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:19:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:19:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:19:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:19:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://1ef017f0eef1c51fa90d6de39f73c6270effb87f5deed367566d0ad9421d880f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"rest
artCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:19:34Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://13ec8c878aa1edd7f7ea3a6bb1a6895c7ad6b6675171bab7edf369eb5dd7a266\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://13ec8c878aa1edd7f7ea3a6bb1a6895c7ad6b6675171bab7edf369eb5dd7a266\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T15:19:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T15:19:33Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T15:19:31Z\\\"}}\" for pod 
\"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.848567 5123 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/2d82c231-80e9-4268-8ec7-1ae260abe06c-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-57b78d8988-kbx8c\" (UID: \"2d82c231-80e9-4268-8ec7-1ae260abe06c\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-kbx8c" Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.851140 5123 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/cc1dd55e-ab31-4be3-acf1-ef01a53c1bb4-proxy-tls\") pod \"machine-config-daemon-cs4j6\" (UID: \"cc1dd55e-ab31-4be3-acf1-ef01a53c1bb4\") " pod="openshift-machine-config-operator/machine-config-daemon-cs4j6" Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.855154 5123 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/4ba336c2-0d9e-485a-9785-761f97f2601a-ovn-node-metrics-cert\") pod \"ovnkube-node-c7cpz\" (UID: \"4ba336c2-0d9e-485a-9785-761f97f2601a\") " pod="openshift-ovn-kubernetes/ovnkube-node-c7cpz" Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.855788 5123 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/7c008d9e-3c97-4fcc-a448-c3c34829b24a-tmp-dir\") pod \"node-resolver-lvztx\" (UID: \"7c008d9e-3c97-4fcc-a448-c3c34829b24a\") " pod="openshift-dns/node-resolver-lvztx" Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.860051 5123 operation_generator.go:615] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-gffd7\" (UniqueName: \"kubernetes.io/projected/7c008d9e-3c97-4fcc-a448-c3c34829b24a-kube-api-access-gffd7\") pod \"node-resolver-lvztx\" (UID: \"7c008d9e-3c97-4fcc-a448-c3c34829b24a\") " pod="openshift-dns/node-resolver-lvztx" Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.862046 5123 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/3ef15793-fa49-4c37-a355-d4573977e301-cni-binary-copy\") pod \"multus-27rm2\" (UID: \"3ef15793-fa49-4c37-a355-d4573977e301\") " pod="openshift-multus/multus-27rm2" Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.862302 5123 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/f45e8e58-84aa-4f67-b397-495c8339ce58-serviceca\") pod \"node-ca-rd8p2\" (UID: \"f45e8e58-84aa-4f67-b397-495c8339ce58\") " pod="openshift-image-registry/node-ca-rd8p2" Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.864459 5123 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-fp998\" (UniqueName: \"kubernetes.io/projected/e6c3a697-51e4-44dd-a38c-3287db85ce50-kube-api-access-fp998\") pod \"network-metrics-daemon-hmprz\" (UID: \"e6c3a697-51e4-44dd-a38c-3287db85ce50\") " pod="openshift-multus/network-metrics-daemon-hmprz" Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.867705 5123 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-l6nc7\" (UniqueName: \"kubernetes.io/projected/f45e8e58-84aa-4f67-b397-495c8339ce58-kube-api-access-l6nc7\") pod \"node-ca-rd8p2\" (UID: \"f45e8e58-84aa-4f67-b397-495c8339ce58\") " pod="openshift-image-registry/node-ca-rd8p2" Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.867972 5123 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-dgvkt" 
event={"ID":"fc4541ce-7789-4670-bc75-5c2868e52ce0","Type":"ContainerStarted","Data":"cb8695cf550fb36d6c55fc2c55a29fcfc54c94be4daf3f2c662defdefd6f8bfd"} Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.874683 5123 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-dfltr\" (UniqueName: \"kubernetes.io/projected/4ba336c2-0d9e-485a-9785-761f97f2601a-kube-api-access-dfltr\") pod \"ovnkube-node-c7cpz\" (UID: \"4ba336c2-0d9e-485a-9785-761f97f2601a\") " pod="openshift-ovn-kubernetes/ovnkube-node-c7cpz" Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.874768 5123 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-lsclv\" (UniqueName: \"kubernetes.io/projected/5fa0afe2-e3d6-43e2-8e27-ea16e1f45dea-kube-api-access-lsclv\") pod \"multus-additional-cni-plugins-z24lm\" (UID: \"5fa0afe2-e3d6-43e2-8e27-ea16e1f45dea\") " pod="openshift-multus/multus-additional-cni-plugins-z24lm" Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.874913 5123 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-jrl4z\" (UniqueName: \"kubernetes.io/projected/cc1dd55e-ab31-4be3-acf1-ef01a53c1bb4-kube-api-access-jrl4z\") pod \"machine-config-daemon-cs4j6\" (UID: \"cc1dd55e-ab31-4be3-acf1-ef01a53c1bb4\") " pod="openshift-machine-config-operator/machine-config-daemon-cs4j6" Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.875175 5123 status_manager.go:919] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-c7cpz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4ba336c2-0d9e-485a-9785-761f97f2601a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:21:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:21:00Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:21:00Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:21:00Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dfltr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dfltr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dfltr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dfltr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dfltr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dfltr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dfltr\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dfltr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dfltr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T15:21:00Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-c7cpz\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.879099 5123 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-n476s\" (UniqueName: \"kubernetes.io/projected/2d82c231-80e9-4268-8ec7-1ae260abe06c-kube-api-access-n476s\") pod \"ovnkube-control-plane-57b78d8988-kbx8c\" (UID: \"2d82c231-80e9-4268-8ec7-1ae260abe06c\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-kbx8c" Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.884384 5123 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-2bb4w\" (UniqueName: \"kubernetes.io/projected/3ef15793-fa49-4c37-a355-d4573977e301-kube-api-access-2bb4w\") pod \"multus-27rm2\" (UID: \"3ef15793-fa49-4c37-a355-d4573977e301\") " pod="openshift-multus/multus-27rm2" Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.888476 5123 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-c7cpz" Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.893599 5123 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-hmprz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e6c3a697-51e4-44dd-a38c-3287db85ce50\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:21:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:21:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:21:00Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:21:00Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fp998\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:49b34ce0d25eec7a6077f4bf21bf7d4e64e598d28785a20b9ee3594423b7de14\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fp998\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T15:21:00Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-hmprz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.901249 5123 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-cs4j6" Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.925248 5123 status_manager.go:919] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5d6b0db9-19b7-497f-99ee-934d183ac310\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:19:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:19:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:20:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:20:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:19:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"300m\\\",\\\"memory\\\":\\\"600Mi\\\"},\\\"containerID\\\":\\\"cri-o://85810fde5314851f472de729604032c4393454ab56c99f7d0c8f68db47a2ce2b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"300m\\\",\\\"memory\\\":\\\"600Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:19:41Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\"
:0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"40m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://623d61420ea353df65a0492fd9ca49b279feb02a781281bd1668e1f04db68b54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"40m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:19:42Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://1a34d581eecc9547b181c185a5046352353babf0efa3327710bceac6d88f2f5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"imageID\\\":\\\"quay.io/opensh
ift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:19:42Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://84a6190600d909f264afa90eaf73f0475b5fd2c8cfd699f98afe86f0dad15b60\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:19:43Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://02831f45290db1a1d2fe96203679aee8039426e4470ec48bfcf087e7d
34e454f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:19:39Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://1fd0a88b9a42b5c1894a2293d709a598f4e23c1aacedf07ee3a9ece8074d29ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1fd0a88b9a42b5c1894a2293d709a59
8f4e23c1aacedf07ee3a9ece8074d29ea\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T15:19:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T15:19:33Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd-auto-backup\\\",\\\"name\\\":\\\"etcd-auto-backup-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://5cca787d9fdc34bc1120be4a21fb6165c1108799e292f659ed9a30c36238056e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5cca787d9fdc34bc1120be4a21fb6165c1108799e292f659ed9a30c36238056e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T15:19:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T15:19:34Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://194c064c4d0651f052c31b61cc496928c496f8605e7e9b9dbc7dfbc29498fe94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"
lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://194c064c4d0651f052c31b61cc496928c496f8605e7e9b9dbc7dfbc29498fe94\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T15:19:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T15:19:37Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T15:19:31Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.930683 5123 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.930741 5123 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.930756 5123 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.930777 5123 kubelet_node_status.go:736] "Recording 
event message for node" node="crc" event="NodeNotReady" Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.930789 5123 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:21:00Z","lastTransitionTime":"2025-12-12T15:21:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.941414 5123 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b19e6d9d-1622-4a7a-a3f0-58b5f47dbf00\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:19:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:19:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:20:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:20:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:19:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://73dfa58c2e8aff8d0309a8fb1e6d250887820cc494e9dea56b738621d5b92ce1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\",\\\"imag
eID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:19:34Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"60m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://90a36cde8f0155fd7e784fe62e8b6855d9e6067713b30d29b277dd7bc9506b03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"60m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:19:33Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kube
rnetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://3ab77a37e33bb3c89c48009a92c9ec8d9b3251462d2094bc09d36304905f2864\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:19:40Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://f211c54a19b58b5792cfc535e48c1fc788e339590c93985f56907a6ef3218bce\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-m
anager-recovery-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:19:42Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T15:19:31Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.955772 5123 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ddcb401b-30f5-47e5-ab2a-9f6a9731ad80\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:19:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:19:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:20:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:20:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:19:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://fec0b545071ff72387a726c65b878b3cc3c54114436f814409a530a7b30c28c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:19:34Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\
\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://154fb3e3ce5de4d560b7ff2ad3ca84b8f7fa282b7af47effe0ee3b23ad996e4f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:19:37Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://e5f5ca37a436009ecf8073fbd361e0a3bc762b5ecf0fb16faf92ba09c336922a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:19:39Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\"
:0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://d81558377f7693d3a49de48f1988e688884e346bef47364c2844d0f9ad8fd5eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d81558377f7693d3a49de48f1988e688884e346bef47364c2844d0f9ad8fd5eb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T15:19:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T15:19:33Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T15:19:31Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.971659 5123 
status_manager.go:919] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T15:21:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T15:21:00Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fbdfe828b092b23e6d4480daf3e0216aada6debaf1ef1b314a0a31e73ebf13c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-5ff7774fd9-nljh6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 12 15:21:00 crc kubenswrapper[5123]: I1212 15:21:00.988410 5123 status_manager.go:919] "Failed to update status for pod" pod="openshift-image-registry/node-ca-rd8p2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f45e8e58-84aa-4f67-b397-495c8339ce58\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:21:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:21:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:21:00Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:21:00Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l6nc7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T15:21:00Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-rd8p2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 12 15:21:01 crc kubenswrapper[5123]: I1212 15:21:01.010356 5123 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-z24lm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fa0afe2-e3d6-43e2-8e27-ea16e1f45dea\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:21:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:21:00Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:21:00Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:21:00Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lsclv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lsclv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\"
:false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lsclv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6efa070ceb93cc5fc2e76eab6d9c96ac3c4f8812085d0b6eb6e3f513b5bac782\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lsclv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3454e762466e22e2a893650b9781823558bc6fdfda2aa4188aff3cb819014c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\
\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lsclv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lsclv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/etc/whereabouts/config\\\",\\\"
name\\\":\\\"whereabouts-flatfile-configmap\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lsclv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T15:21:00Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-z24lm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 12 15:21:01 crc kubenswrapper[5123]: I1212 15:21:01.034129 5123 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:21:01 crc kubenswrapper[5123]: I1212 15:21:01.034179 5123 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:21:01 crc kubenswrapper[5123]: I1212 15:21:01.034190 5123 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:21:01 crc kubenswrapper[5123]: I1212 15:21:01.034258 5123 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:21:01 crc kubenswrapper[5123]: I1212 15:21:01.034281 5123 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:21:01Z","lastTransitionTime":"2025-12-12T15:21:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 15:21:01 crc kubenswrapper[5123]: I1212 15:21:01.039250 5123 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/multus-27rm2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3ef15793-fa49-4c37-a355-d4573977e301\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:21:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:21:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:21:00Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:21:00Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/
kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2bb4w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T15:21:00Z\\\"}}\" for pod \"openshift-multus\"/\"multus-27rm2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 12 15:21:01 crc kubenswrapper[5123]: I1212 15:21:01.058186 5123 status_manager.go:919] "Failed to update status for pod" pod="openshift-dns/node-resolver-lvztx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7c008d9e-3c97-4fcc-a448-c3c34829b24a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:21:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:21:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:21:00Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:21:00Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gffd7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T15:21:00Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-lvztx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 12 15:21:01 crc kubenswrapper[5123]: I1212 15:21:01.105556 5123 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" Dec 12 15:21:01 crc kubenswrapper[5123]: I1212 15:21:01.123283 5123 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-5jnd7" Dec 12 15:21:01 crc kubenswrapper[5123]: I1212 15:21:01.135664 5123 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/node-ca-rd8p2" Dec 12 15:21:01 crc kubenswrapper[5123]: I1212 15:21:01.137656 5123 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:21:01 crc kubenswrapper[5123]: I1212 15:21:01.137710 5123 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:21:01 crc kubenswrapper[5123]: I1212 15:21:01.137726 5123 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:21:01 crc kubenswrapper[5123]: I1212 15:21:01.137746 5123 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:21:01 crc kubenswrapper[5123]: I1212 15:21:01.137759 5123 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:21:01Z","lastTransitionTime":"2025-12-12T15:21:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 15:21:01 crc kubenswrapper[5123]: I1212 15:21:01.150293 5123 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/node-resolver-lvztx" Dec 12 15:21:01 crc kubenswrapper[5123]: I1212 15:21:01.162950 5123 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-z24lm" Dec 12 15:21:01 crc kubenswrapper[5123]: I1212 15:21:01.168364 5123 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-kbx8c" Dec 12 15:21:01 crc kubenswrapper[5123]: W1212 15:21:01.173950 5123 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod34177974_8d82_49d2_a763_391d0df3bbd8.slice/crio-e0bb0ad7577b18f880b55e3ae7d2316934f25760c5257fc9030ac8d43d058067 WatchSource:0}: Error finding container e0bb0ad7577b18f880b55e3ae7d2316934f25760c5257fc9030ac8d43d058067: Status 404 returned error can't find the container with id e0bb0ad7577b18f880b55e3ae7d2316934f25760c5257fc9030ac8d43d058067 Dec 12 15:21:01 crc kubenswrapper[5123]: I1212 15:21:01.176305 5123 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-27rm2" Dec 12 15:21:01 crc kubenswrapper[5123]: W1212 15:21:01.203913 5123 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5fa0afe2_e3d6_43e2_8e27_ea16e1f45dea.slice/crio-92189c5739fbd331cf0024559ae4b72c62588caca611fe7bf949b23337b15ccb WatchSource:0}: Error finding container 92189c5739fbd331cf0024559ae4b72c62588caca611fe7bf949b23337b15ccb: Status 404 returned error can't find the container with id 92189c5739fbd331cf0024559ae4b72c62588caca611fe7bf949b23337b15ccb Dec 12 15:21:01 crc kubenswrapper[5123]: I1212 15:21:01.249664 5123 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 12 15:21:01 crc kubenswrapper[5123]: E1212 15:21:01.249986 5123 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 
podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-12 15:21:02.249937732 +0000 UTC m=+91.059890243 (durationBeforeRetry 1s). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 15:21:01 crc kubenswrapper[5123]: I1212 15:21:01.250280 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Dec 12 15:21:01 crc kubenswrapper[5123]: I1212 15:21:01.250335 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-l7w75\" (UniqueName: \"kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75\") pod \"network-check-source-5bb8f5cd97-xdvz5\" (UID: \"f863fff9-286a-45fa-b8f0-8a86994b8440\") " pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Dec 12 15:21:01 crc kubenswrapper[5123]: I1212 15:21:01.250467 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Dec 12 15:21:01 crc kubenswrapper[5123]: E1212 
15:21:01.250654 5123 secret.go:189] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 12 15:21:01 crc kubenswrapper[5123]: E1212 15:21:01.250665 5123 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 12 15:21:01 crc kubenswrapper[5123]: E1212 15:21:01.250804 5123 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 12 15:21:01 crc kubenswrapper[5123]: E1212 15:21:01.250805 5123 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Dec 12 15:21:01 crc kubenswrapper[5123]: E1212 15:21:01.250824 5123 projected.go:194] Error preparing data for projected volume kube-api-access-l7w75 for pod openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 12 15:21:01 crc kubenswrapper[5123]: E1212 15:21:01.250942 5123 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert podName:6a9ae5f6-97bd-46ac-bafa-ca1b4452a141 nodeName:}" failed. No retries permitted until 2025-12-12 15:21:02.250758598 +0000 UTC m=+91.060711109 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert") pod "networking-console-plugin-5ff7774fd9-nljh6" (UID: "6a9ae5f6-97bd-46ac-bafa-ca1b4452a141") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 12 15:21:01 crc kubenswrapper[5123]: E1212 15:21:01.251054 5123 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75 podName:f863fff9-286a-45fa-b8f0-8a86994b8440 nodeName:}" failed. No retries permitted until 2025-12-12 15:21:02.251039397 +0000 UTC m=+91.060991908 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-l7w75" (UniqueName: "kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75") pod "network-check-source-5bb8f5cd97-xdvz5" (UID: "f863fff9-286a-45fa-b8f0-8a86994b8440") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 12 15:21:01 crc kubenswrapper[5123]: E1212 15:21:01.251164 5123 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf podName:6a9ae5f6-97bd-46ac-bafa-ca1b4452a141 nodeName:}" failed. No retries permitted until 2025-12-12 15:21:02.25115027 +0000 UTC m=+91.061102771 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf") pod "networking-console-plugin-5ff7774fd9-nljh6" (UID: "6a9ae5f6-97bd-46ac-bafa-ca1b4452a141") : object "openshift-network-console"/"networking-console-plugin" not registered Dec 12 15:21:01 crc kubenswrapper[5123]: I1212 15:21:01.256523 5123 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:21:01 crc kubenswrapper[5123]: I1212 15:21:01.256569 5123 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:21:01 crc kubenswrapper[5123]: I1212 15:21:01.256585 5123 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:21:01 crc kubenswrapper[5123]: I1212 15:21:01.256609 5123 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:21:01 crc kubenswrapper[5123]: I1212 15:21:01.256625 5123 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:21:01Z","lastTransitionTime":"2025-12-12T15:21:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 15:21:01 crc kubenswrapper[5123]: I1212 15:21:01.352898 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/e6c3a697-51e4-44dd-a38c-3287db85ce50-metrics-certs\") pod \"network-metrics-daemon-hmprz\" (UID: \"e6c3a697-51e4-44dd-a38c-3287db85ce50\") " pod="openshift-multus/network-metrics-daemon-hmprz" Dec 12 15:21:01 crc kubenswrapper[5123]: E1212 15:21:01.353299 5123 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Dec 12 15:21:01 crc kubenswrapper[5123]: E1212 15:21:01.353498 5123 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e6c3a697-51e4-44dd-a38c-3287db85ce50-metrics-certs podName:e6c3a697-51e4-44dd-a38c-3287db85ce50 nodeName:}" failed. No retries permitted until 2025-12-12 15:21:02.353466256 +0000 UTC m=+91.163418767 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/e6c3a697-51e4-44dd-a38c-3287db85ce50-metrics-certs") pod "network-metrics-daemon-hmprz" (UID: "e6c3a697-51e4-44dd-a38c-3287db85ce50") : object "openshift-multus"/"metrics-daemon-secret" not registered Dec 12 15:21:01 crc kubenswrapper[5123]: E1212 15:21:01.353545 5123 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 12 15:21:01 crc kubenswrapper[5123]: E1212 15:21:01.353593 5123 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 12 15:21:01 crc kubenswrapper[5123]: E1212 15:21:01.353610 5123 projected.go:194] Error preparing data for projected volume kube-api-access-gwt8b for pod openshift-network-diagnostics/network-check-target-fhkjl: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 12 15:21:01 crc kubenswrapper[5123]: E1212 15:21:01.353733 5123 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b podName:17b87002-b798-480a-8e17-83053d698239 nodeName:}" failed. No retries permitted until 2025-12-12 15:21:02.353689433 +0000 UTC m=+91.163641944 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-gwt8b" (UniqueName: "kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b") pod "network-check-target-fhkjl" (UID: "17b87002-b798-480a-8e17-83053d698239") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 12 15:21:01 crc kubenswrapper[5123]: I1212 15:21:01.353294 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-gwt8b\" (UniqueName: \"kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b\") pod \"network-check-target-fhkjl\" (UID: \"17b87002-b798-480a-8e17-83053d698239\") " pod="openshift-network-diagnostics/network-check-target-fhkjl" Dec 12 15:21:01 crc kubenswrapper[5123]: I1212 15:21:01.383349 5123 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:21:01 crc kubenswrapper[5123]: I1212 15:21:01.383425 5123 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:21:01 crc kubenswrapper[5123]: I1212 15:21:01.383440 5123 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:21:01 crc kubenswrapper[5123]: I1212 15:21:01.383463 5123 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:21:01 crc kubenswrapper[5123]: I1212 15:21:01.383477 5123 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:21:01Z","lastTransitionTime":"2025-12-12T15:21:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 15:21:01 crc kubenswrapper[5123]: I1212 15:21:01.416470 5123 reflector.go:430] "Caches populated" type="*v1.RuntimeClass" reflector="k8s.io/client-go/informers/factory.go:160" Dec 12 15:21:01 crc kubenswrapper[5123]: I1212 15:21:01.490301 5123 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:21:01 crc kubenswrapper[5123]: I1212 15:21:01.490372 5123 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:21:01 crc kubenswrapper[5123]: I1212 15:21:01.490385 5123 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:21:01 crc kubenswrapper[5123]: I1212 15:21:01.490408 5123 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:21:01 crc kubenswrapper[5123]: I1212 15:21:01.490421 5123 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:21:01Z","lastTransitionTime":"2025-12-12T15:21:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 15:21:01 crc kubenswrapper[5123]: I1212 15:21:01.598593 5123 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:21:01 crc kubenswrapper[5123]: I1212 15:21:01.598650 5123 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:21:01 crc kubenswrapper[5123]: I1212 15:21:01.598666 5123 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:21:01 crc kubenswrapper[5123]: I1212 15:21:01.598686 5123 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:21:01 crc kubenswrapper[5123]: I1212 15:21:01.598705 5123 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:21:01Z","lastTransitionTime":"2025-12-12T15:21:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 15:21:01 crc kubenswrapper[5123]: I1212 15:21:01.672701 5123 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl" Dec 12 15:21:01 crc kubenswrapper[5123]: E1212 15:21:01.672898 5123 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239" Dec 12 15:21:01 crc kubenswrapper[5123]: I1212 15:21:01.769952 5123 status_manager.go:919] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3753a9bc-5b66-4cf6-b6a8-fab1c60998f5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:19:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:19:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:19:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:19:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:19:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://1ef017f0eef1c51fa90d6de39f73c6270effb87f5deed367566d0ad9421d880f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:19:34Z\\\"}},\\\"user\\\":{\\\"linu
x\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://13ec8c878aa1edd7f7ea3a6bb1a6895c7ad6b6675171bab7edf369eb5dd7a266\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://13ec8c878aa1edd7f7ea3a6bb1a6895c7ad6b6675171bab7edf369eb5dd7a266\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T15:19:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T15:19:33Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T15:19:31Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 12 15:21:01 crc kubenswrapper[5123]: I1212 15:21:01.797603 5123 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:21:01 crc kubenswrapper[5123]: I1212 15:21:01.797685 5123 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:21:01 crc kubenswrapper[5123]: I1212 15:21:01.797726 5123 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:21:01 crc kubenswrapper[5123]: I1212 15:21:01.797753 5123 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:21:01 crc kubenswrapper[5123]: I1212 15:21:01.797767 5123 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:21:01Z","lastTransitionTime":"2025-12-12T15:21:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 15:21:01 crc kubenswrapper[5123]: I1212 15:21:01.799183 5123 status_manager.go:919] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-c7cpz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4ba336c2-0d9e-485a-9785-761f97f2601a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:21:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:21:00Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:21:00Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:21:00Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dfltr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dfltr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dfltr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dfltr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dfltr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dfltr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dfltr\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dfltr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dfltr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T15:21:00Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-c7cpz\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 12 15:21:01 crc kubenswrapper[5123]: I1212 15:21:01.801865 5123 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="01080b46-74f1-4191-8755-5152a57b3b25" path="/var/lib/kubelet/pods/01080b46-74f1-4191-8755-5152a57b3b25/volumes" Dec 12 15:21:01 crc kubenswrapper[5123]: I1212 15:21:01.803335 5123 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09cfa50b-4138-4585-a53e-64dd3ab73335" path="/var/lib/kubelet/pods/09cfa50b-4138-4585-a53e-64dd3ab73335/volumes" Dec 12 15:21:01 crc kubenswrapper[5123]: I1212 15:21:01.915583 5123 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0dd0fbac-8c0d-4228-8faa-abbeedabf7db" path="/var/lib/kubelet/pods/0dd0fbac-8c0d-4228-8faa-abbeedabf7db/volumes" Dec 12 15:21:01 crc kubenswrapper[5123]: I1212 15:21:01.919057 5123 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0effdbcf-dd7d-404d-9d48-77536d665a5d" path="/var/lib/kubelet/pods/0effdbcf-dd7d-404d-9d48-77536d665a5d/volumes" Dec 12 15:21:01 crc kubenswrapper[5123]: I1212 15:21:01.919272 5123 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:21:01 crc kubenswrapper[5123]: I1212 15:21:01.919372 5123 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:21:01 crc kubenswrapper[5123]: I1212 15:21:01.919392 5123 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:21:01 crc kubenswrapper[5123]: I1212 15:21:01.919417 5123 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:21:01 crc kubenswrapper[5123]: I1212 15:21:01.919430 5123 setters.go:618] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:21:01Z","lastTransitionTime":"2025-12-12T15:21:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 15:21:01 crc kubenswrapper[5123]: I1212 15:21:01.925313 5123 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="149b3c48-e17c-4a66-a835-d86dabf6ff13" path="/var/lib/kubelet/pods/149b3c48-e17c-4a66-a835-d86dabf6ff13/volumes" Dec 12 15:21:01 crc kubenswrapper[5123]: I1212 15:21:01.928827 5123 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-hmprz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e6c3a697-51e4-44dd-a38c-3287db85ce50\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:21:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:21:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:21:00Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:21:00Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fp998\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:49b34ce0d25eec7a6077f4bf21bf7d4e64e598d28785a20b9ee3594423b7de14\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fp998\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T15:21:00Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-hmprz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 12 15:21:01 crc kubenswrapper[5123]: I1212 15:21:01.932885 5123 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes 
dir" podUID="16bdd140-dce1-464c-ab47-dd5798d1d256" path="/var/lib/kubelet/pods/16bdd140-dce1-464c-ab47-dd5798d1d256/volumes" Dec 12 15:21:01 crc kubenswrapper[5123]: I1212 15:21:01.935723 5123 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="18f80adb-c1c3-49ba-8ee4-932c851d3897" path="/var/lib/kubelet/pods/18f80adb-c1c3-49ba-8ee4-932c851d3897/volumes" Dec 12 15:21:01 crc kubenswrapper[5123]: I1212 15:21:01.937846 5123 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="20ce4d18-fe25-4696-ad7c-1bd2d6200a3e" path="/var/lib/kubelet/pods/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e/volumes" Dec 12 15:21:01 crc kubenswrapper[5123]: I1212 15:21:01.939472 5123 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2325ffef-9d5b-447f-b00e-3efc429acefe" path="/var/lib/kubelet/pods/2325ffef-9d5b-447f-b00e-3efc429acefe/volumes" Dec 12 15:21:01 crc kubenswrapper[5123]: I1212 15:21:01.943579 5123 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="301e1965-1754-483d-b6cc-bfae7038bbca" path="/var/lib/kubelet/pods/301e1965-1754-483d-b6cc-bfae7038bbca/volumes" Dec 12 15:21:01 crc kubenswrapper[5123]: I1212 15:21:01.948033 5123 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="31fa8943-81cc-4750-a0b7-0fa9ab5af883" path="/var/lib/kubelet/pods/31fa8943-81cc-4750-a0b7-0fa9ab5af883/volumes" Dec 12 15:21:01 crc kubenswrapper[5123]: I1212 15:21:01.953838 5123 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="42a11a02-47e1-488f-b270-2679d3298b0e" path="/var/lib/kubelet/pods/42a11a02-47e1-488f-b270-2679d3298b0e/volumes" Dec 12 15:21:01 crc kubenswrapper[5123]: I1212 15:21:01.955931 5123 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="567683bd-0efc-4f21-b076-e28559628404" path="/var/lib/kubelet/pods/567683bd-0efc-4f21-b076-e28559628404/volumes" Dec 12 15:21:01 crc kubenswrapper[5123]: I1212 15:21:01.959020 5123 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes 
dir" podUID="584e1f4a-8205-47d7-8efb-3afc6017c4c9" path="/var/lib/kubelet/pods/584e1f4a-8205-47d7-8efb-3afc6017c4c9/volumes" Dec 12 15:21:01 crc kubenswrapper[5123]: I1212 15:21:01.960013 5123 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="593a3561-7760-45c5-8f91-5aaef7475d0f" path="/var/lib/kubelet/pods/593a3561-7760-45c5-8f91-5aaef7475d0f/volumes" Dec 12 15:21:02 crc kubenswrapper[5123]: I1212 15:21:02.013594 5123 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5ebfebf6-3ecd-458e-943f-bb25b52e2718" path="/var/lib/kubelet/pods/5ebfebf6-3ecd-458e-943f-bb25b52e2718/volumes" Dec 12 15:21:02 crc kubenswrapper[5123]: I1212 15:21:02.015925 5123 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6077b63e-53a2-4f96-9d56-1ce0324e4913" path="/var/lib/kubelet/pods/6077b63e-53a2-4f96-9d56-1ce0324e4913/volumes" Dec 12 15:21:02 crc kubenswrapper[5123]: I1212 15:21:02.018499 5123 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca" path="/var/lib/kubelet/pods/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca/volumes" Dec 12 15:21:02 crc kubenswrapper[5123]: I1212 15:21:02.020638 5123 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6edfcf45-925b-4eff-b940-95b6fc0b85d4" path="/var/lib/kubelet/pods/6edfcf45-925b-4eff-b940-95b6fc0b85d4/volumes" Dec 12 15:21:02 crc kubenswrapper[5123]: I1212 15:21:02.023350 5123 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6ee8fbd3-1f81-4666-96da-5afc70819f1a" path="/var/lib/kubelet/pods/6ee8fbd3-1f81-4666-96da-5afc70819f1a/volumes" Dec 12 15:21:02 crc kubenswrapper[5123]: I1212 15:21:02.024911 5123 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a" path="/var/lib/kubelet/pods/71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a/volumes" Dec 12 15:21:02 crc kubenswrapper[5123]: I1212 15:21:02.029600 5123 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes 
dir" podUID="736c54fe-349c-4bb9-870a-d1c1d1c03831" path="/var/lib/kubelet/pods/736c54fe-349c-4bb9-870a-d1c1d1c03831/volumes" Dec 12 15:21:02 crc kubenswrapper[5123]: I1212 15:21:02.037880 5123 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7599e0b6-bddf-4def-b7f2-0b32206e8651" path="/var/lib/kubelet/pods/7599e0b6-bddf-4def-b7f2-0b32206e8651/volumes" Dec 12 15:21:02 crc kubenswrapper[5123]: I1212 15:21:02.043231 5123 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7afa918d-be67-40a6-803c-d3b0ae99d815" path="/var/lib/kubelet/pods/7afa918d-be67-40a6-803c-d3b0ae99d815/volumes" Dec 12 15:21:02 crc kubenswrapper[5123]: I1212 15:21:02.044037 5123 status_manager.go:919] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5d6b0db9-19b7-497f-99ee-934d183ac310\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:19:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:19:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:20:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:20:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:19:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"300m\\\",\\\"memory\\\":\\\"600Mi\\\"},\\\"containerID\\\":\\\"cri-o://85810fde5314851f472de729604032c4393454ab56c99f7d0c8f68db47a2ce2b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a68
2480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"300m\\\",\\\"memory\\\":\\\"600Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:19:41Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"40m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://623d61420ea353df65a0492fd9ca49b279feb02a781281bd1668e1f04db68b54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"40m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:19:42Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-cert
s\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://1a34d581eecc9547b181c185a5046352353babf0efa3327710bceac6d88f2f5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:19:42Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://84a6190600d909f264afa90eaf73f0475b5fd2c8cfd699f98afe86f0dad15b60\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-
12-12T15:19:43Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://02831f45290db1a1d2fe96203679aee8039426e4470ec48bfcf087e7d34e454f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:19:39Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://1fd0a88b9a42b5c1894a2293d709a598f4e23c1aacedf07ee3a9ece8074d29ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-
v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1fd0a88b9a42b5c1894a2293d709a598f4e23c1aacedf07ee3a9ece8074d29ea\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T15:19:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T15:19:33Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd-auto-backup\\\",\\\"name\\\":\\\"etcd-auto-backup-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://5cca787d9fdc34bc1120be4a21fb6165c1108799e292f659ed9a30c36238056e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5cca787d9fdc34bc1120be4a21fb6165c1108799e292f659ed9a30c36238056e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T15:19:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T15:19:34Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\
\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://194c064c4d0651f052c31b61cc496928c496f8605e7e9b9dbc7dfbc29498fe94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://194c064c4d0651f052c31b61cc496928c496f8605e7e9b9dbc7dfbc29498fe94\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T15:19:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T15:19:37Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T15:19:31Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 12 15:21:02 crc kubenswrapper[5123]: I1212 15:21:02.044769 5123 kubelet_volumes.go:163] 
"Cleaned up orphaned pod volumes dir" podUID="7df94c10-441d-4386-93a6-6730fb7bcde0" path="/var/lib/kubelet/pods/7df94c10-441d-4386-93a6-6730fb7bcde0/volumes" Dec 12 15:21:02 crc kubenswrapper[5123]: I1212 15:21:02.048188 5123 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7fcc6409-8a0f-44c3-89e7-5aecd7610f8a" path="/var/lib/kubelet/pods/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a/volumes" Dec 12 15:21:02 crc kubenswrapper[5123]: I1212 15:21:02.051380 5123 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="81e39f7b-62e4-4fc9-992a-6535ce127a02" path="/var/lib/kubelet/pods/81e39f7b-62e4-4fc9-992a-6535ce127a02/volumes" Dec 12 15:21:02 crc kubenswrapper[5123]: I1212 15:21:02.053480 5123 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="869851b9-7ffb-4af0-b166-1d8aa40a5f80" path="/var/lib/kubelet/pods/869851b9-7ffb-4af0-b166-1d8aa40a5f80/volumes" Dec 12 15:21:02 crc kubenswrapper[5123]: I1212 15:21:02.056551 5123 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff" path="/var/lib/kubelet/pods/9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff/volumes" Dec 12 15:21:02 crc kubenswrapper[5123]: I1212 15:21:02.057241 5123 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="92dfbade-90b6-4169-8c07-72cff7f2c82b" path="/var/lib/kubelet/pods/92dfbade-90b6-4169-8c07-72cff7f2c82b/volumes" Dec 12 15:21:02 crc kubenswrapper[5123]: I1212 15:21:02.107343 5123 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="94a6e063-3d1a-4d44-875d-185291448c31" path="/var/lib/kubelet/pods/94a6e063-3d1a-4d44-875d-185291448c31/volumes" Dec 12 15:21:02 crc kubenswrapper[5123]: I1212 15:21:02.107890 5123 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:21:02 crc kubenswrapper[5123]: I1212 15:21:02.107942 5123 kubelet_node_status.go:736] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Dec 12 15:21:02 crc kubenswrapper[5123]: I1212 15:21:02.107953 5123 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:21:02 crc kubenswrapper[5123]: I1212 15:21:02.107975 5123 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:21:02 crc kubenswrapper[5123]: I1212 15:21:02.107988 5123 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:21:02Z","lastTransitionTime":"2025-12-12T15:21:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 15:21:02 crc kubenswrapper[5123]: I1212 15:21:02.108985 5123 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9f71a554-e414-4bc3-96d2-674060397afe" path="/var/lib/kubelet/pods/9f71a554-e414-4bc3-96d2-674060397afe/volumes" Dec 12 15:21:02 crc kubenswrapper[5123]: I1212 15:21:02.115444 5123 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a208c9c2-333b-4b4a-be0d-bc32ec38a821" path="/var/lib/kubelet/pods/a208c9c2-333b-4b4a-be0d-bc32ec38a821/volumes" Dec 12 15:21:02 crc kubenswrapper[5123]: I1212 15:21:02.118909 5123 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a52afe44-fb37-46ed-a1f8-bf39727a3cbe" path="/var/lib/kubelet/pods/a52afe44-fb37-46ed-a1f8-bf39727a3cbe/volumes" Dec 12 15:21:02 crc kubenswrapper[5123]: I1212 15:21:02.121862 5123 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a555ff2e-0be6-46d5-897d-863bb92ae2b3" path="/var/lib/kubelet/pods/a555ff2e-0be6-46d5-897d-863bb92ae2b3/volumes" Dec 12 15:21:02 crc kubenswrapper[5123]: I1212 15:21:02.122986 5123 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="a7a88189-c967-4640-879e-27665747f20c" path="/var/lib/kubelet/pods/a7a88189-c967-4640-879e-27665747f20c/volumes" Dec 12 15:21:02 crc kubenswrapper[5123]: I1212 15:21:02.125612 5123 kubelet_volumes.go:152] "Cleaned up orphaned volume subpath from pod" podUID="af33e427-6803-48c2-a76a-dd9deb7cbf9a" path="/var/lib/kubelet/pods/af33e427-6803-48c2-a76a-dd9deb7cbf9a/volume-subpaths/run-systemd/ovnkube-controller/6" Dec 12 15:21:02 crc kubenswrapper[5123]: I1212 15:21:02.125775 5123 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="af33e427-6803-48c2-a76a-dd9deb7cbf9a" path="/var/lib/kubelet/pods/af33e427-6803-48c2-a76a-dd9deb7cbf9a/volumes" Dec 12 15:21:02 crc kubenswrapper[5123]: I1212 15:21:02.135105 5123 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="af41de71-79cf-4590-bbe9-9e8b848862cb" path="/var/lib/kubelet/pods/af41de71-79cf-4590-bbe9-9e8b848862cb/volumes" Dec 12 15:21:02 crc kubenswrapper[5123]: I1212 15:21:02.140452 5123 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a" path="/var/lib/kubelet/pods/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a/volumes" Dec 12 15:21:02 crc kubenswrapper[5123]: I1212 15:21:02.142306 5123 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b4750666-1362-4001-abd0-6f89964cc621" path="/var/lib/kubelet/pods/b4750666-1362-4001-abd0-6f89964cc621/volumes" Dec 12 15:21:02 crc kubenswrapper[5123]: I1212 15:21:02.144712 5123 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b605f283-6f2e-42da-a838-54421690f7d0" path="/var/lib/kubelet/pods/b605f283-6f2e-42da-a838-54421690f7d0/volumes" Dec 12 15:21:02 crc kubenswrapper[5123]: I1212 15:21:02.145382 5123 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c491984c-7d4b-44aa-8c1e-d7974424fa47" path="/var/lib/kubelet/pods/c491984c-7d4b-44aa-8c1e-d7974424fa47/volumes" Dec 12 15:21:02 crc kubenswrapper[5123]: I1212 15:21:02.147172 5123 
kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c5f2bfad-70f6-4185-a3d9-81ce12720767" path="/var/lib/kubelet/pods/c5f2bfad-70f6-4185-a3d9-81ce12720767/volumes" Dec 12 15:21:02 crc kubenswrapper[5123]: I1212 15:21:02.148481 5123 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cc85e424-18b2-4924-920b-bd291a8c4b01" path="/var/lib/kubelet/pods/cc85e424-18b2-4924-920b-bd291a8c4b01/volumes" Dec 12 15:21:02 crc kubenswrapper[5123]: I1212 15:21:02.152895 5123 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ce090a97-9ab6-4c40-a719-64ff2acd9778" path="/var/lib/kubelet/pods/ce090a97-9ab6-4c40-a719-64ff2acd9778/volumes" Dec 12 15:21:02 crc kubenswrapper[5123]: I1212 15:21:02.157473 5123 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d19cb085-0c5b-4810-b654-ce7923221d90" path="/var/lib/kubelet/pods/d19cb085-0c5b-4810-b654-ce7923221d90/volumes" Dec 12 15:21:02 crc kubenswrapper[5123]: I1212 15:21:02.159495 5123 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d45be74c-0d98-4d18-90e4-f7ef1b6daaf7" path="/var/lib/kubelet/pods/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7/volumes" Dec 12 15:21:02 crc kubenswrapper[5123]: I1212 15:21:02.165107 5123 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d565531a-ff86-4608-9d19-767de01ac31b" path="/var/lib/kubelet/pods/d565531a-ff86-4608-9d19-767de01ac31b/volumes" Dec 12 15:21:02 crc kubenswrapper[5123]: I1212 15:21:02.167570 5123 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d7e8f42f-dc0e-424b-bb56-5ec849834888" path="/var/lib/kubelet/pods/d7e8f42f-dc0e-424b-bb56-5ec849834888/volumes" Dec 12 15:21:02 crc kubenswrapper[5123]: I1212 15:21:02.174448 5123 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9" path="/var/lib/kubelet/pods/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9/volumes" Dec 12 15:21:02 crc kubenswrapper[5123]: I1212 15:21:02.176700 5123 
kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e093be35-bb62-4843-b2e8-094545761610" path="/var/lib/kubelet/pods/e093be35-bb62-4843-b2e8-094545761610/volumes" Dec 12 15:21:02 crc kubenswrapper[5123]: I1212 15:21:02.178274 5123 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e1d2a42d-af1d-4054-9618-ab545e0ed8b7" path="/var/lib/kubelet/pods/e1d2a42d-af1d-4054-9618-ab545e0ed8b7/volumes" Dec 12 15:21:02 crc kubenswrapper[5123]: I1212 15:21:02.180744 5123 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f559dfa3-3917-43a2-97f6-61ddfda10e93" path="/var/lib/kubelet/pods/f559dfa3-3917-43a2-97f6-61ddfda10e93/volumes" Dec 12 15:21:02 crc kubenswrapper[5123]: I1212 15:21:02.183814 5123 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f65c0ac1-8bca-454d-a2e6-e35cb418beac" path="/var/lib/kubelet/pods/f65c0ac1-8bca-454d-a2e6-e35cb418beac/volumes" Dec 12 15:21:02 crc kubenswrapper[5123]: I1212 15:21:02.217804 5123 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b19e6d9d-1622-4a7a-a3f0-58b5f47dbf00\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:19:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:19:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:20:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:20:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:19:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://73dfa58c2e8aff8d0309a8fb1e6d250887820cc494e9dea56b738621d5b92ce1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:19:34Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name
\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"60m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://90a36cde8f0155fd7e784fe62e8b6855d9e6067713b30d29b277dd7bc9506b03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"60m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:19:33Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://3ab77a37e33bb3c89c48009a92c9ec8d9b3251462d2094bc09d36304905f2864\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":
0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:19:40Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://f211c54a19b58b5792cfc535e48c1fc788e339590c93985f56907a6ef3218bce\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:19:42Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\
":\\\"var-run-kubernetes\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T15:19:31Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 12 15:21:02 crc kubenswrapper[5123]: I1212 15:21:02.234887 5123 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ddcb401b-30f5-47e5-ab2a-9f6a9731ad80\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:19:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:19:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:20:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:20:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:19:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://fec0b545071ff72387a726c65b878b3cc3c54114436f814409a530a7b30c28c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca0
8e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:19:34Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://154fb3e3ce5de4d560b7ff2ad3ca84b8f7fa282b7af47effe0ee3b23ad996e4f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:19:37Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\
"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://e5f5ca37a436009ecf8073fbd361e0a3bc762b5ecf0fb16faf92ba09c336922a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:19:39Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://d81558377f7693d3a49de48f1988e688884e346bef47364c2844d0f9ad8fd5eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d81558377f7693d3a49de48f1988e688884e346bef
47364c2844d0f9ad8fd5eb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T15:19:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T15:19:33Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T15:19:31Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 12 15:21:02 crc kubenswrapper[5123]: I1212 15:21:02.248767 5123 generic.go:358] "Generic (PLEG): container finished" podID="4ba336c2-0d9e-485a-9785-761f97f2601a" containerID="55fa7b3e014bc9c796e0cba7b0e5a3ec4c3cf5650a0149ba77bf1970705c94a6" exitCode=0 Dec 12 15:21:02 crc kubenswrapper[5123]: I1212 15:21:02.252653 5123 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:21:02 crc kubenswrapper[5123]: I1212 15:21:02.252710 5123 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:21:02 crc kubenswrapper[5123]: I1212 15:21:02.252722 5123 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:21:02 crc kubenswrapper[5123]: I1212 15:21:02.252739 5123 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:21:02 crc kubenswrapper[5123]: I1212 15:21:02.252748 5123 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:21:02Z","lastTransitionTime":"2025-12-12T15:21:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 15:21:02 crc kubenswrapper[5123]: I1212 15:21:02.252846 5123 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T15:21:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T15:21:00Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fbdfe828b092b23e6d4480daf3e0216aada6debaf1ef1b314a0a31e73ebf13c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-5ff7774fd9-nljh6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 12 15:21:02 crc kubenswrapper[5123]: I1212 15:21:02.274641 5123 status_manager.go:919] "Failed to update status for pod" pod="openshift-image-registry/node-ca-rd8p2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f45e8e58-84aa-4f67-b397-495c8339ce58\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:21:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:21:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:21:00Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:21:00Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l6nc7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T15:21:00Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-rd8p2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 12 15:21:02 crc kubenswrapper[5123]: I1212 15:21:02.286401 5123 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4" path="/var/lib/kubelet/pods/f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4/volumes" Dec 12 15:21:02 crc kubenswrapper[5123]: I1212 15:21:02.287836 5123 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="f7e2c886-118e-43bb-bef1-c78134de392b" path="/var/lib/kubelet/pods/f7e2c886-118e-43bb-bef1-c78134de392b/volumes" Dec 12 15:21:02 crc kubenswrapper[5123]: I1212 15:21:02.295096 5123 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fc8db2c7-859d-47b3-a900-2bd0c0b2973b" path="/var/lib/kubelet/pods/fc8db2c7-859d-47b3-a900-2bd0c0b2973b/volumes" Dec 12 15:21:02 crc kubenswrapper[5123]: I1212 15:21:02.296461 5123 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-lvztx" event={"ID":"7c008d9e-3c97-4fcc-a448-c3c34829b24a","Type":"ContainerStarted","Data":"d483afec7e5ba34de88d6116e94808db803c11720ab6694a50bb8f8869dae078"} Dec 12 15:21:02 crc kubenswrapper[5123]: I1212 15:21:02.296535 5123 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" event={"ID":"34177974-8d82-49d2-a763-391d0df3bbd8","Type":"ContainerStarted","Data":"e0bb0ad7577b18f880b55e3ae7d2316934f25760c5257fc9030ac8d43d058067"} Dec 12 15:21:02 crc kubenswrapper[5123]: I1212 15:21:02.296554 5123 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-27rm2" event={"ID":"3ef15793-fa49-4c37-a355-d4573977e301","Type":"ContainerStarted","Data":"2f36367d473f5a38edab39f6d69f8e609821e571ee209d999e5df8fa72880fb1"} Dec 12 15:21:02 crc kubenswrapper[5123]: I1212 15:21:02.296624 5123 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-z24lm" event={"ID":"5fa0afe2-e3d6-43e2-8e27-ea16e1f45dea","Type":"ContainerStarted","Data":"92189c5739fbd331cf0024559ae4b72c62588caca611fe7bf949b23337b15ccb"} Dec 12 15:21:02 crc kubenswrapper[5123]: I1212 15:21:02.296637 5123 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-cs4j6" event={"ID":"cc1dd55e-ab31-4be3-acf1-ef01a53c1bb4","Type":"ContainerStarted","Data":"b2ff44492ed9c3a02cf2686dcfbd97b1fcd6e4be7ef34bb33414f572157415e8"} Dec 12 
15:21:02 crc kubenswrapper[5123]: I1212 15:21:02.296692 5123 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-cs4j6" event={"ID":"cc1dd55e-ab31-4be3-acf1-ef01a53c1bb4","Type":"ContainerStarted","Data":"65dc049b4db90d3b590a91a0ba963ce193c4d376d4171d75ddda499d4ad620ff"} Dec 12 15:21:02 crc kubenswrapper[5123]: I1212 15:21:02.296706 5123 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-cs4j6" event={"ID":"cc1dd55e-ab31-4be3-acf1-ef01a53c1bb4","Type":"ContainerStarted","Data":"2bba7a45bf52fe789353e8e548a1b710bba5c5d9baf3c466ba62d90ab82b497c"} Dec 12 15:21:02 crc kubenswrapper[5123]: I1212 15:21:02.296717 5123 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-dgvkt" event={"ID":"fc4541ce-7789-4670-bc75-5c2868e52ce0","Type":"ContainerStarted","Data":"0c670bfeb1912080c2549595a3c951cac3ddb9749d38217915bf8b63b3d1c428"} Dec 12 15:21:02 crc kubenswrapper[5123]: I1212 15:21:02.296730 5123 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-dgvkt" event={"ID":"fc4541ce-7789-4670-bc75-5c2868e52ce0","Type":"ContainerStarted","Data":"d4ff262499422129e1daa07d770edfc941a0248f2f2dacf1ae57d6eae630614e"} Dec 12 15:21:02 crc kubenswrapper[5123]: I1212 15:21:02.296741 5123 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-5jnd7" event={"ID":"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc","Type":"ContainerStarted","Data":"086d66b9b2b739084309e8e0c740853309501976e7db579a0cc02b1f488309c9"} Dec 12 15:21:02 crc kubenswrapper[5123]: I1212 15:21:02.296753 5123 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-kbx8c" 
event={"ID":"2d82c231-80e9-4268-8ec7-1ae260abe06c","Type":"ContainerStarted","Data":"150f9f76efd48d3e98dbe363eb13ee730ec2f286c53954bb5c7dfe4533c7ee72"} Dec 12 15:21:02 crc kubenswrapper[5123]: I1212 15:21:02.296766 5123 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-rd8p2" event={"ID":"f45e8e58-84aa-4f67-b397-495c8339ce58","Type":"ContainerStarted","Data":"936aa119f848a2590609e3b7f61d83f8038aa3f9fbd7bb6967eba626ebaa30b1"} Dec 12 15:21:02 crc kubenswrapper[5123]: I1212 15:21:02.296779 5123 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-c7cpz" event={"ID":"4ba336c2-0d9e-485a-9785-761f97f2601a","Type":"ContainerDied","Data":"55fa7b3e014bc9c796e0cba7b0e5a3ec4c3cf5650a0149ba77bf1970705c94a6"} Dec 12 15:21:02 crc kubenswrapper[5123]: I1212 15:21:02.296797 5123 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-c7cpz" event={"ID":"4ba336c2-0d9e-485a-9785-761f97f2601a","Type":"ContainerStarted","Data":"82ac0974ca189f76f1e155d7fbd7c6a6bf806727b3551f8a8457694ea6b14f51"} Dec 12 15:21:02 crc kubenswrapper[5123]: I1212 15:21:02.301408 5123 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-z24lm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fa0afe2-e3d6-43e2-8e27-ea16e1f45dea\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:21:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:21:00Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins 
bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:21:00Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:21:00Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lsclv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni
/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lsclv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lsclv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6efa070ceb93cc5fc2e76eab6d9c96ac3c4f8812085d0b6eb6e3f513b5bac782\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-
release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lsclv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3454e762466e22e2a893650b9781823558bc6fdfda2aa4188aff3cb819014c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lsclv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lsclv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"
}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/etc/whereabouts/config\\\",\\\"name\\\":\\\"whereabouts-flatfile-configmap\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lsclv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T15:21:00Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-z24lm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 12 15:21:02 crc kubenswrapper[5123]: I1212 15:21:02.319363 5123 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/multus-27rm2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3ef15793-fa49-4c37-a355-d4573977e301\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:21:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:21:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:21:00Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:21:00Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/
kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2bb4w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T15:21:00Z\\\"}}\" for pod \"openshift-multus\"/\"multus-27rm2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 12 15:21:02 crc kubenswrapper[5123]: I1212 15:21:02.336621 5123 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 12 15:21:02 crc kubenswrapper[5123]: I1212 15:21:02.336765 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Dec 12 15:21:02 crc kubenswrapper[5123]: I1212 15:21:02.336813 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Dec 12 15:21:02 crc kubenswrapper[5123]: I1212 15:21:02.336837 5123 
reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-l7w75\" (UniqueName: \"kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75\") pod \"network-check-source-5bb8f5cd97-xdvz5\" (UID: \"f863fff9-286a-45fa-b8f0-8a86994b8440\") " pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Dec 12 15:21:02 crc kubenswrapper[5123]: E1212 15:21:02.337013 5123 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 12 15:21:02 crc kubenswrapper[5123]: E1212 15:21:02.337036 5123 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 12 15:21:02 crc kubenswrapper[5123]: E1212 15:21:02.337050 5123 projected.go:194] Error preparing data for projected volume kube-api-access-l7w75 for pod openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 12 15:21:02 crc kubenswrapper[5123]: E1212 15:21:02.337117 5123 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75 podName:f863fff9-286a-45fa-b8f0-8a86994b8440 nodeName:}" failed. No retries permitted until 2025-12-12 15:21:04.337094429 +0000 UTC m=+93.147046930 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-l7w75" (UniqueName: "kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75") pod "network-check-source-5bb8f5cd97-xdvz5" (UID: "f863fff9-286a-45fa-b8f0-8a86994b8440") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 12 15:21:02 crc kubenswrapper[5123]: E1212 15:21:02.337202 5123 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Dec 12 15:21:02 crc kubenswrapper[5123]: E1212 15:21:02.337206 5123 secret.go:189] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 12 15:21:02 crc kubenswrapper[5123]: E1212 15:21:02.337270 5123 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert podName:6a9ae5f6-97bd-46ac-bafa-ca1b4452a141 nodeName:}" failed. No retries permitted until 2025-12-12 15:21:04.337259865 +0000 UTC m=+93.147212386 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert") pod "networking-console-plugin-5ff7774fd9-nljh6" (UID: "6a9ae5f6-97bd-46ac-bafa-ca1b4452a141") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 12 15:21:02 crc kubenswrapper[5123]: E1212 15:21:02.337286 5123 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf podName:6a9ae5f6-97bd-46ac-bafa-ca1b4452a141 nodeName:}" failed. 
No retries permitted until 2025-12-12 15:21:04.337278625 +0000 UTC m=+93.147231136 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf") pod "networking-console-plugin-5ff7774fd9-nljh6" (UID: "6a9ae5f6-97bd-46ac-bafa-ca1b4452a141") : object "openshift-network-console"/"networking-console-plugin" not registered Dec 12 15:21:02 crc kubenswrapper[5123]: E1212 15:21:02.337322 5123 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-12 15:21:04.337308476 +0000 UTC m=+93.147260987 (durationBeforeRetry 2s). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 15:21:02 crc kubenswrapper[5123]: I1212 15:21:02.373354 5123 status_manager.go:919] "Failed to update status for pod" pod="openshift-dns/node-resolver-lvztx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7c008d9e-3c97-4fcc-a448-c3c34829b24a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:21:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:21:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:21:00Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:21:00Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gffd7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T15:21:00Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-lvztx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 12 15:21:02 crc kubenswrapper[5123]: I1212 15:21:02.391494 5123 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:21:02 crc kubenswrapper[5123]: I1212 15:21:02.391551 5123 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:21:02 crc kubenswrapper[5123]: I1212 15:21:02.391565 5123 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:21:02 crc kubenswrapper[5123]: I1212 15:21:02.391587 5123 kubelet_node_status.go:736] "Recording 
event message for node" node="crc" event="NodeNotReady" Dec 12 15:21:02 crc kubenswrapper[5123]: I1212 15:21:02.391599 5123 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:21:02Z","lastTransitionTime":"2025-12-12T15:21:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 15:21:02 crc kubenswrapper[5123]: I1212 15:21:02.393686 5123 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"96b4a286-31bb-42a1-934a-56ea0da8024a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:19:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:19:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:19:31Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:19:31Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:19:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"265m\\\",\\\"memory\\\":\\\"1Gi\\\"},\\\"containerID\\\":\\\"cri-o://6e07f3e2617540c10ad02b1eb35775776e35852dc555f67a34e81beeab3e64fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"265m\\\",\\\"memory\\\":\\\"1Gi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:19:37Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-bundle-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://34bae6bd30c1db17488802318dfdb214ad97b12fba2bd2724522387be66bed83\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/open
shift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:19:42Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://b9095a46d0255140f02bb4949f61fc5120a0d62ccb27ed3e9cb8ce5f430498d9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:19:40Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://4f213fed90
87642e2d266cffcd6b09d79db89357a2e593aab2f1f5f5de1625db\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4f213fed9087642e2d266cffcd6b09d79db89357a2e593aab2f1f5f5de1625db\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-12T15:20:48Z\\\",\\\"message\\\":\\\"vvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InOrderInformers\\\\\\\" enabled=true\\\\nW1212 15:20:48.086740 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1212 15:20:48.086994 1 builder.go:304] check-endpoints version v0.0.0-unknown-c3d9642-c3d9642\\\\nI1212 15:20:48.088436 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1823639014/tls.crt::/tmp/serving-cert-1823639014/tls.key\\\\\\\"\\\\nI1212 15:20:48.580604 1 requestheader_controller.go:255] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1212 15:20:48.583707 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1212 15:20:48.583755 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1212 15:20:48.583803 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1212 15:20:48.583812 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1212 15:20:48.588321 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1212 15:20:48.588373 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1212 15:20:48.588382 1 secure_serving.go:69] Use of 
insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1212 15:20:48.588388 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1212 15:20:48.588392 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1212 15:20:48.588396 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1212 15:20:48.588400 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1212 15:20:48.588333 1 genericapiserver.go:546] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1212 15:20:48.591304 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-12T15:20:47Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://d331b50c6c609096973278d778919c9c6ac4e46695aa2e4779ca6f4805332301\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:19:42Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://40beb854e478d0f51e4640f477258c6ef53c632e71391168c8c052a47bc2a0c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\
"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://40beb854e478d0f51e4640f477258c6ef53c632e71391168c8c052a47bc2a0c6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T15:19:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T15:19:33Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T15:19:31Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 12 15:21:02 crc kubenswrapper[5123]: I1212 15:21:02.437721 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/e6c3a697-51e4-44dd-a38c-3287db85ce50-metrics-certs\") pod \"network-metrics-daemon-hmprz\" (UID: \"e6c3a697-51e4-44dd-a38c-3287db85ce50\") " pod="openshift-multus/network-metrics-daemon-hmprz" Dec 12 15:21:02 crc kubenswrapper[5123]: I1212 15:21:02.439269 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-gwt8b\" (UniqueName: \"kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b\") pod \"network-check-target-fhkjl\" (UID: \"17b87002-b798-480a-8e17-83053d698239\") " pod="openshift-network-diagnostics/network-check-target-fhkjl" Dec 12 15:21:02 
crc kubenswrapper[5123]: E1212 15:21:02.440599 5123 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 12 15:21:02 crc kubenswrapper[5123]: E1212 15:21:02.440643 5123 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 12 15:21:02 crc kubenswrapper[5123]: E1212 15:21:02.440665 5123 projected.go:194] Error preparing data for projected volume kube-api-access-gwt8b for pod openshift-network-diagnostics/network-check-target-fhkjl: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 12 15:21:02 crc kubenswrapper[5123]: E1212 15:21:02.440749 5123 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b podName:17b87002-b798-480a-8e17-83053d698239 nodeName:}" failed. No retries permitted until 2025-12-12 15:21:04.440722457 +0000 UTC m=+93.250674968 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-gwt8b" (UniqueName: "kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b") pod "network-check-target-fhkjl" (UID: "17b87002-b798-480a-8e17-83053d698239") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 12 15:21:02 crc kubenswrapper[5123]: E1212 15:21:02.441924 5123 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Dec 12 15:21:02 crc kubenswrapper[5123]: E1212 15:21:02.442018 5123 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e6c3a697-51e4-44dd-a38c-3287db85ce50-metrics-certs podName:e6c3a697-51e4-44dd-a38c-3287db85ce50 nodeName:}" failed. No retries permitted until 2025-12-12 15:21:04.442001756 +0000 UTC m=+93.251954257 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/e6c3a697-51e4-44dd-a38c-3287db85ce50-metrics-certs") pod "network-metrics-daemon-hmprz" (UID: "e6c3a697-51e4-44dd-a38c-3287db85ce50") : object "openshift-multus"/"metrics-daemon-secret" not registered Dec 12 15:21:02 crc kubenswrapper[5123]: I1212 15:21:02.457000 5123 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"34177974-8d82-49d2-a763-391d0df3bbd8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T15:21:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T15:21:00Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7xz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-7bdcf4f5bd-7fjxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 12 15:21:02 crc kubenswrapper[5123]: I1212 15:21:02.511600 5123 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f863fff9-286a-45fa-b8f0-8a86994b8440\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T15:21:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T15:21:00Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l7w75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-5bb8f5cd97-xdvz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 12 15:21:02 crc kubenswrapper[5123]: I1212 15:21:02.513012 5123 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:21:02 crc kubenswrapper[5123]: I1212 15:21:02.513065 5123 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:21:02 crc kubenswrapper[5123]: I1212 15:21:02.513077 5123 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:21:02 crc kubenswrapper[5123]: I1212 15:21:02.513094 5123 kubelet_node_status.go:736] "Recording event message for node" node="crc" 
event="NodeNotReady" Dec 12 15:21:02 crc kubenswrapper[5123]: I1212 15:21:02.513106 5123 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:21:02Z","lastTransitionTime":"2025-12-12T15:21:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 15:21:02 crc kubenswrapper[5123]: I1212 15:21:02.531547 5123 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-fhkjl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"17b87002-b798-480a-8e17-83053d698239\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T15:21:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T15:21:00Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwt8b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-fhkjl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 12 15:21:02 crc kubenswrapper[5123]: I1212 15:21:02.560793 5123 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-dgvkt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fc4541ce-7789-4670-bc75-5c2868e52ce0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T15:21:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T15:21:00Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook 
approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-dgvkt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 12 15:21:02 crc kubenswrapper[5123]: I1212 15:21:02.574639 5123 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-5jnd7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T15:21:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T15:21:00Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dsgwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-5jnd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 12 15:21:02 crc kubenswrapper[5123]: I1212 15:21:02.586722 5123 status_manager.go:919] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-kbx8c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2d82c231-80e9-4268-8ec7-1ae260abe06c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:21:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:21:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:21:00Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:21:00Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n476s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n476s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T15:21:00Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-57b78d8988-kbx8c\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 12 15:21:02 crc kubenswrapper[5123]: I1212 15:21:02.662025 5123 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Dec 12 15:21:02 crc kubenswrapper[5123]: I1212 15:21:02.662049 5123 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Dec 12 15:21:02 crc kubenswrapper[5123]: E1212 15:21:02.662422 5123 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141" Dec 12 15:21:02 crc kubenswrapper[5123]: E1212 15:21:02.662416 5123 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440" Dec 12 15:21:02 crc kubenswrapper[5123]: I1212 15:21:02.662578 5123 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-hmprz" Dec 12 15:21:02 crc kubenswrapper[5123]: E1212 15:21:02.662696 5123 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-hmprz" podUID="e6c3a697-51e4-44dd-a38c-3287db85ce50" Dec 12 15:21:02 crc kubenswrapper[5123]: I1212 15:21:02.702137 5123 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:21:02 crc kubenswrapper[5123]: I1212 15:21:02.703127 5123 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:21:02 crc kubenswrapper[5123]: I1212 15:21:02.703148 5123 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:21:02 crc kubenswrapper[5123]: I1212 15:21:02.703175 5123 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:21:02 crc kubenswrapper[5123]: I1212 15:21:02.703189 5123 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:21:02Z","lastTransitionTime":"2025-12-12T15:21:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 15:21:02 crc kubenswrapper[5123]: I1212 15:21:02.708447 5123 status_manager.go:919] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-cs4j6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cc1dd55e-ab31-4be3-acf1-ef01a53c1bb4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:21:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:21:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:21:00Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:21:00Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jrl4z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jrl4z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T15:21:00Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-cs4j6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 12 15:21:02 crc kubenswrapper[5123]: I1212 15:21:02.741928 5123 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"96b4a286-31bb-42a1-934a-56ea0da8024a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:19:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:19:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:19:31Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:19:31Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:19:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"265m\\\",\\\"memory\\\":\\\"1Gi\\\"},\\\"containerID\\\":\\\"cri-o://6e07f3e2617540c10ad02b1eb35775776e35852dc555f67a34e81beeab3e64fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"265m\\\",\\\"memory\\\":\\\"1Gi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:19:37Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-bundle-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://34bae6bd30c1db17488802318dfdb214ad97b12fba2bd2724522387be66bed83\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/open
shift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:19:42Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://b9095a46d0255140f02bb4949f61fc5120a0d62ccb27ed3e9cb8ce5f430498d9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:19:40Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://4f213fed90
87642e2d266cffcd6b09d79db89357a2e593aab2f1f5f5de1625db\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4f213fed9087642e2d266cffcd6b09d79db89357a2e593aab2f1f5f5de1625db\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-12T15:20:48Z\\\",\\\"message\\\":\\\"vvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InOrderInformers\\\\\\\" enabled=true\\\\nW1212 15:20:48.086740 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1212 15:20:48.086994 1 builder.go:304] check-endpoints version v0.0.0-unknown-c3d9642-c3d9642\\\\nI1212 15:20:48.088436 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1823639014/tls.crt::/tmp/serving-cert-1823639014/tls.key\\\\\\\"\\\\nI1212 15:20:48.580604 1 requestheader_controller.go:255] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1212 15:20:48.583707 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1212 15:20:48.583755 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1212 15:20:48.583803 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1212 15:20:48.583812 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1212 15:20:48.588321 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1212 15:20:48.588373 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1212 15:20:48.588382 1 secure_serving.go:69] Use of 
insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1212 15:20:48.588388 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1212 15:20:48.588392 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1212 15:20:48.588396 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1212 15:20:48.588400 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1212 15:20:48.588333 1 genericapiserver.go:546] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1212 15:20:48.591304 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-12T15:20:47Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://d331b50c6c609096973278d778919c9c6ac4e46695aa2e4779ca6f4805332301\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:19:42Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://40beb854e478d0f51e4640f477258c6ef53c632e71391168c8c052a47bc2a0c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\
"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://40beb854e478d0f51e4640f477258c6ef53c632e71391168c8c052a47bc2a0c6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T15:19:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T15:19:33Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T15:19:31Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 12 15:21:02 crc kubenswrapper[5123]: I1212 15:21:02.757152 5123 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34177974-8d82-49d2-a763-391d0df3bbd8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T15:21:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T15:21:00Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7xz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-7bdcf4f5bd-7fjxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 12 15:21:02 crc kubenswrapper[5123]: I1212 15:21:02.781837 5123 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f863fff9-286a-45fa-b8f0-8a86994b8440\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T15:21:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T15:21:00Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l7w75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-5bb8f5cd97-xdvz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 12 15:21:02 crc kubenswrapper[5123]: I1212 15:21:02.813158 5123 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:21:02 crc kubenswrapper[5123]: I1212 15:21:02.813268 5123 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:21:02 crc kubenswrapper[5123]: I1212 15:21:02.813286 5123 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:21:02 crc kubenswrapper[5123]: I1212 15:21:02.813311 5123 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:21:02 crc kubenswrapper[5123]: I1212 15:21:02.813327 5123 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:21:02Z","lastTransitionTime":"2025-12-12T15:21:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 15:21:02 crc kubenswrapper[5123]: I1212 15:21:02.916716 5123 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:21:02 crc kubenswrapper[5123]: I1212 15:21:02.916784 5123 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:21:02 crc kubenswrapper[5123]: I1212 15:21:02.916797 5123 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:21:02 crc kubenswrapper[5123]: I1212 15:21:02.916818 5123 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:21:02 crc kubenswrapper[5123]: I1212 15:21:02.916832 5123 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:21:02Z","lastTransitionTime":"2025-12-12T15:21:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 15:21:03 crc kubenswrapper[5123]: I1212 15:21:03.146473 5123 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:21:03 crc kubenswrapper[5123]: I1212 15:21:03.146535 5123 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:21:03 crc kubenswrapper[5123]: I1212 15:21:03.146547 5123 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:21:03 crc kubenswrapper[5123]: I1212 15:21:03.146567 5123 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:21:03 crc kubenswrapper[5123]: I1212 15:21:03.146725 5123 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:21:03Z","lastTransitionTime":"2025-12-12T15:21:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 15:21:03 crc kubenswrapper[5123]: I1212 15:21:03.286315 5123 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-daemon-cs4j6" podStartSLOduration=59.286272407 podStartE2EDuration="59.286272407s" podCreationTimestamp="2025-12-12 15:20:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 15:21:03.093447384 +0000 UTC m=+91.903399905" watchObservedRunningTime="2025-12-12 15:21:03.286272407 +0000 UTC m=+92.096225018" Dec 12 15:21:03 crc kubenswrapper[5123]: I1212 15:21:03.400852 5123 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:21:03 crc kubenswrapper[5123]: I1212 15:21:03.400915 5123 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:21:03 crc kubenswrapper[5123]: I1212 15:21:03.400927 5123 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:21:03 crc kubenswrapper[5123]: I1212 15:21:03.400949 5123 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:21:03 crc kubenswrapper[5123]: I1212 15:21:03.400964 5123 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:21:03Z","lastTransitionTime":"2025-12-12T15:21:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 15:21:03 crc kubenswrapper[5123]: I1212 15:21:03.427511 5123 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-kbx8c" event={"ID":"2d82c231-80e9-4268-8ec7-1ae260abe06c","Type":"ContainerStarted","Data":"c3055f0312f1d22ef2805f6b63e2888982b845f12692ea357dd61f9da4ef1531"} Dec 12 15:21:03 crc kubenswrapper[5123]: I1212 15:21:03.432619 5123 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-rd8p2" event={"ID":"f45e8e58-84aa-4f67-b397-495c8339ce58","Type":"ContainerStarted","Data":"32f0bd1f96d03b6c9596a7f1c720c293ab4077b85e4f9ecbbfbdaf3be35b4bc6"} Dec 12 15:21:03 crc kubenswrapper[5123]: I1212 15:21:03.438941 5123 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-c7cpz" event={"ID":"4ba336c2-0d9e-485a-9785-761f97f2601a","Type":"ContainerStarted","Data":"9359cde708bbd01b68d54b173f27267dbc5df381ffb70ca8189f8f19b2fb3bbc"} Dec 12 15:21:03 crc kubenswrapper[5123]: I1212 15:21:03.445058 5123 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-lvztx" event={"ID":"7c008d9e-3c97-4fcc-a448-c3c34829b24a","Type":"ContainerStarted","Data":"01ecc9f9681ec32ead3a226da364791857562c854072446d0474b6371a04971d"} Dec 12 15:21:03 crc kubenswrapper[5123]: I1212 15:21:03.449608 5123 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" event={"ID":"34177974-8d82-49d2-a763-391d0df3bbd8","Type":"ContainerStarted","Data":"fbb262e008bb63331fb19bc2442d781623cc0ee33005e8e5b29a42b0f3adb0f2"} Dec 12 15:21:03 crc kubenswrapper[5123]: I1212 15:21:03.451516 5123 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-27rm2" event={"ID":"3ef15793-fa49-4c37-a355-d4573977e301","Type":"ContainerStarted","Data":"23d144a0239efa382b93533f38644c94c10ca4bc5ce0604670b37be72d669266"} Dec 12 15:21:03 crc 
kubenswrapper[5123]: I1212 15:21:03.454455 5123 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-z24lm" event={"ID":"5fa0afe2-e3d6-43e2-8e27-ea16e1f45dea","Type":"ContainerStarted","Data":"8a5f29c1be40ed5d6f1e17b3afb10d8cabb5aca589e210435fc387933260bec3"} Dec 12 15:21:03 crc kubenswrapper[5123]: I1212 15:21:03.466200 5123 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" podStartSLOduration=4.466172145 podStartE2EDuration="4.466172145s" podCreationTimestamp="2025-12-12 15:20:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 15:21:03.328029521 +0000 UTC m=+92.137982032" watchObservedRunningTime="2025-12-12 15:21:03.466172145 +0000 UTC m=+92.276124656" Dec 12 15:21:03 crc kubenswrapper[5123]: I1212 15:21:03.509744 5123 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:21:03 crc kubenswrapper[5123]: I1212 15:21:03.509819 5123 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:21:03 crc kubenswrapper[5123]: I1212 15:21:03.509832 5123 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:21:03 crc kubenswrapper[5123]: I1212 15:21:03.509856 5123 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:21:03 crc kubenswrapper[5123]: I1212 15:21:03.509868 5123 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:21:03Z","lastTransitionTime":"2025-12-12T15:21:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI 
configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 15:21:03 crc kubenswrapper[5123]: I1212 15:21:03.612831 5123 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:21:03 crc kubenswrapper[5123]: I1212 15:21:03.612881 5123 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:21:03 crc kubenswrapper[5123]: I1212 15:21:03.612894 5123 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:21:03 crc kubenswrapper[5123]: I1212 15:21:03.612917 5123 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:21:03 crc kubenswrapper[5123]: I1212 15:21:03.612931 5123 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:21:03Z","lastTransitionTime":"2025-12-12T15:21:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 15:21:03 crc kubenswrapper[5123]: I1212 15:21:03.638881 5123 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl" Dec 12 15:21:03 crc kubenswrapper[5123]: E1212 15:21:03.639168 5123 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239" Dec 12 15:21:03 crc kubenswrapper[5123]: I1212 15:21:03.641361 5123 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd/etcd-crc" podStartSLOduration=4.641343957 podStartE2EDuration="4.641343957s" podCreationTimestamp="2025-12-12 15:20:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 15:21:03.639588182 +0000 UTC m=+92.449540723" watchObservedRunningTime="2025-12-12 15:21:03.641343957 +0000 UTC m=+92.451296468" Dec 12 15:21:03 crc kubenswrapper[5123]: I1212 15:21:03.800447 5123 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:21:03 crc kubenswrapper[5123]: I1212 15:21:03.800531 5123 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:21:03 crc kubenswrapper[5123]: I1212 15:21:03.800560 5123 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:21:03 crc kubenswrapper[5123]: I1212 15:21:03.800589 5123 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:21:03 crc kubenswrapper[5123]: I1212 15:21:03.800616 5123 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:21:03Z","lastTransitionTime":"2025-12-12T15:21:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 15:21:03 crc kubenswrapper[5123]: I1212 15:21:03.830469 5123 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podStartSLOduration=4.830439503 podStartE2EDuration="4.830439503s" podCreationTimestamp="2025-12-12 15:20:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 15:21:03.783492107 +0000 UTC m=+92.593444638" watchObservedRunningTime="2025-12-12 15:21:03.830439503 +0000 UTC m=+92.640392014" Dec 12 15:21:03 crc kubenswrapper[5123]: I1212 15:21:03.830630 5123 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" podStartSLOduration=3.830625779 podStartE2EDuration="3.830625779s" podCreationTimestamp="2025-12-12 15:21:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 15:21:03.829751482 +0000 UTC m=+92.639703993" watchObservedRunningTime="2025-12-12 15:21:03.830625779 +0000 UTC m=+92.640578290" Dec 12 15:21:03 crc kubenswrapper[5123]: I1212 15:21:03.904586 5123 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:21:03 crc kubenswrapper[5123]: I1212 15:21:03.905255 5123 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:21:03 crc kubenswrapper[5123]: I1212 15:21:03.905318 5123 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:21:03 crc kubenswrapper[5123]: I1212 15:21:03.905361 5123 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:21:03 crc kubenswrapper[5123]: I1212 15:21:03.905387 5123 setters.go:618] "Node became not ready" 
node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:21:03Z","lastTransitionTime":"2025-12-12T15:21:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 15:21:04 crc kubenswrapper[5123]: I1212 15:21:04.051705 5123 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/node-ca-rd8p2" podStartSLOduration=61.051640612 podStartE2EDuration="1m1.051640612s" podCreationTimestamp="2025-12-12 15:20:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 15:21:04.050992183 +0000 UTC m=+92.860944704" watchObservedRunningTime="2025-12-12 15:21:04.051640612 +0000 UTC m=+92.861593143" Dec 12 15:21:04 crc kubenswrapper[5123]: I1212 15:21:04.133564 5123 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-27rm2" podStartSLOduration=60.13353297 podStartE2EDuration="1m0.13353297s" podCreationTimestamp="2025-12-12 15:20:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 15:21:04.133237111 +0000 UTC m=+92.943189622" watchObservedRunningTime="2025-12-12 15:21:04.13353297 +0000 UTC m=+92.943485481" Dec 12 15:21:04 crc kubenswrapper[5123]: I1212 15:21:04.179826 5123 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/node-resolver-lvztx" podStartSLOduration=61.179784856 podStartE2EDuration="1m1.179784856s" podCreationTimestamp="2025-12-12 15:20:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 15:21:04.178928939 +0000 UTC 
m=+92.988881460" watchObservedRunningTime="2025-12-12 15:21:04.179784856 +0000 UTC m=+92.989737367" Dec 12 15:21:04 crc kubenswrapper[5123]: I1212 15:21:04.397982 5123 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 12 15:21:04 crc kubenswrapper[5123]: I1212 15:21:04.398164 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Dec 12 15:21:04 crc kubenswrapper[5123]: I1212 15:21:04.398248 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Dec 12 15:21:04 crc kubenswrapper[5123]: I1212 15:21:04.398307 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-l7w75\" (UniqueName: \"kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75\") pod \"network-check-source-5bb8f5cd97-xdvz5\" (UID: \"f863fff9-286a-45fa-b8f0-8a86994b8440\") " pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Dec 12 15:21:04 crc kubenswrapper[5123]: E1212 15:21:04.398506 5123 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object 
"openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 12 15:21:04 crc kubenswrapper[5123]: E1212 15:21:04.398528 5123 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 12 15:21:04 crc kubenswrapper[5123]: E1212 15:21:04.398543 5123 projected.go:194] Error preparing data for projected volume kube-api-access-l7w75 for pod openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 12 15:21:04 crc kubenswrapper[5123]: E1212 15:21:04.398631 5123 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75 podName:f863fff9-286a-45fa-b8f0-8a86994b8440 nodeName:}" failed. No retries permitted until 2025-12-12 15:21:08.39860371 +0000 UTC m=+97.208556221 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-l7w75" (UniqueName: "kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75") pod "network-check-source-5bb8f5cd97-xdvz5" (UID: "f863fff9-286a-45fa-b8f0-8a86994b8440") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 12 15:21:04 crc kubenswrapper[5123]: E1212 15:21:04.399364 5123 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Dec 12 15:21:04 crc kubenswrapper[5123]: E1212 15:21:04.399566 5123 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf podName:6a9ae5f6-97bd-46ac-bafa-ca1b4452a141 nodeName:}" failed. 
No retries permitted until 2025-12-12 15:21:08.399524409 +0000 UTC m=+97.209476920 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf") pod "networking-console-plugin-5ff7774fd9-nljh6" (UID: "6a9ae5f6-97bd-46ac-bafa-ca1b4452a141") : object "openshift-network-console"/"networking-console-plugin" not registered Dec 12 15:21:04 crc kubenswrapper[5123]: E1212 15:21:04.399622 5123 secret.go:189] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 12 15:21:04 crc kubenswrapper[5123]: E1212 15:21:04.399724 5123 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert podName:6a9ae5f6-97bd-46ac-bafa-ca1b4452a141 nodeName:}" failed. No retries permitted until 2025-12-12 15:21:08.399696725 +0000 UTC m=+97.209649396 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert") pod "networking-console-plugin-5ff7774fd9-nljh6" (UID: "6a9ae5f6-97bd-46ac-bafa-ca1b4452a141") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 12 15:21:04 crc kubenswrapper[5123]: E1212 15:21:04.399764 5123 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-12 15:21:08.399752077 +0000 UTC m=+97.209704818 (durationBeforeRetry 4s). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 15:21:04 crc kubenswrapper[5123]: I1212 15:21:04.500006 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/e6c3a697-51e4-44dd-a38c-3287db85ce50-metrics-certs\") pod \"network-metrics-daemon-hmprz\" (UID: \"e6c3a697-51e4-44dd-a38c-3287db85ce50\") " pod="openshift-multus/network-metrics-daemon-hmprz" Dec 12 15:21:04 crc kubenswrapper[5123]: I1212 15:21:04.500110 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-gwt8b\" (UniqueName: \"kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b\") pod \"network-check-target-fhkjl\" (UID: \"17b87002-b798-480a-8e17-83053d698239\") " pod="openshift-network-diagnostics/network-check-target-fhkjl" Dec 12 15:21:04 crc kubenswrapper[5123]: E1212 15:21:04.500469 5123 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 12 15:21:04 crc kubenswrapper[5123]: E1212 15:21:04.500514 5123 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 12 15:21:04 crc kubenswrapper[5123]: E1212 15:21:04.500540 5123 projected.go:194] Error preparing data for projected volume kube-api-access-gwt8b for pod openshift-network-diagnostics/network-check-target-fhkjl: [object 
"openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 12 15:21:04 crc kubenswrapper[5123]: E1212 15:21:04.500467 5123 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Dec 12 15:21:04 crc kubenswrapper[5123]: E1212 15:21:04.500677 5123 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b podName:17b87002-b798-480a-8e17-83053d698239 nodeName:}" failed. No retries permitted until 2025-12-12 15:21:08.500619607 +0000 UTC m=+97.310572118 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-gwt8b" (UniqueName: "kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b") pod "network-check-target-fhkjl" (UID: "17b87002-b798-480a-8e17-83053d698239") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 12 15:21:04 crc kubenswrapper[5123]: E1212 15:21:04.500702 5123 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e6c3a697-51e4-44dd-a38c-3287db85ce50-metrics-certs podName:e6c3a697-51e4-44dd-a38c-3287db85ce50 nodeName:}" failed. No retries permitted until 2025-12-12 15:21:08.500692779 +0000 UTC m=+97.310645290 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/e6c3a697-51e4-44dd-a38c-3287db85ce50-metrics-certs") pod "network-metrics-daemon-hmprz" (UID: "e6c3a697-51e4-44dd-a38c-3287db85ce50") : object "openshift-multus"/"metrics-daemon-secret" not registered Dec 12 15:21:04 crc kubenswrapper[5123]: I1212 15:21:04.527910 5123 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:21:04 crc kubenswrapper[5123]: I1212 15:21:04.527988 5123 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:21:04 crc kubenswrapper[5123]: I1212 15:21:04.528010 5123 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:21:04 crc kubenswrapper[5123]: I1212 15:21:04.528040 5123 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:21:04 crc kubenswrapper[5123]: I1212 15:21:04.528053 5123 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:21:04Z","lastTransitionTime":"2025-12-12T15:21:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 15:21:04 crc kubenswrapper[5123]: I1212 15:21:04.555631 5123 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-kbx8c" event={"ID":"2d82c231-80e9-4268-8ec7-1ae260abe06c","Type":"ContainerStarted","Data":"cffee43f21ef2a00dbb34e69ef56df2e8b06dbf3543446948f67e269ba46a700"} Dec 12 15:21:04 crc kubenswrapper[5123]: I1212 15:21:04.559752 5123 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-c7cpz" event={"ID":"4ba336c2-0d9e-485a-9785-761f97f2601a","Type":"ContainerStarted","Data":"a9267d2d3e119629cfe5f4eb756093064b1a946d674358269b43bce2e3e9c4bb"} Dec 12 15:21:04 crc kubenswrapper[5123]: I1212 15:21:04.610440 5123 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:21:04 crc kubenswrapper[5123]: I1212 15:21:04.610526 5123 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:21:04 crc kubenswrapper[5123]: I1212 15:21:04.610540 5123 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:21:04 crc kubenswrapper[5123]: I1212 15:21:04.610563 5123 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:21:04 crc kubenswrapper[5123]: I1212 15:21:04.610577 5123 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:21:04Z","lastTransitionTime":"2025-12-12T15:21:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 15:21:04 crc kubenswrapper[5123]: I1212 15:21:04.646127 5123 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Dec 12 15:21:04 crc kubenswrapper[5123]: E1212 15:21:04.646421 5123 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141" Dec 12 15:21:04 crc kubenswrapper[5123]: I1212 15:21:04.646514 5123 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Dec 12 15:21:04 crc kubenswrapper[5123]: E1212 15:21:04.646569 5123 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440" Dec 12 15:21:04 crc kubenswrapper[5123]: I1212 15:21:04.646624 5123 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hmprz" Dec 12 15:21:04 crc kubenswrapper[5123]: E1212 15:21:04.646675 5123 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-hmprz" podUID="e6c3a697-51e4-44dd-a38c-3287db85ce50" Dec 12 15:21:04 crc kubenswrapper[5123]: I1212 15:21:04.872179 5123 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-kbx8c" podStartSLOduration=60.87213202 podStartE2EDuration="1m0.87213202s" podCreationTimestamp="2025-12-12 15:20:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 15:21:04.617316961 +0000 UTC m=+93.427269492" watchObservedRunningTime="2025-12-12 15:21:04.87213202 +0000 UTC m=+93.682084531" Dec 12 15:21:04 crc kubenswrapper[5123]: I1212 15:21:04.876486 5123 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-cluster-version/cluster-version-operator-7c9b9cfd6-4bg4c"] Dec 12 15:21:05 crc kubenswrapper[5123]: I1212 15:21:05.107955 5123 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-4bg4c" Dec 12 15:21:05 crc kubenswrapper[5123]: I1212 15:21:05.114309 5123 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-version\"/\"kube-root-ca.crt\"" Dec 12 15:21:05 crc kubenswrapper[5123]: I1212 15:21:05.114399 5123 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-cluster-version\"/\"default-dockercfg-hqpm5\"" Dec 12 15:21:05 crc kubenswrapper[5123]: I1212 15:21:05.114382 5123 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-cluster-version\"/\"cluster-version-operator-serving-cert\"" Dec 12 15:21:05 crc kubenswrapper[5123]: I1212 15:21:05.114446 5123 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-version\"/\"openshift-service-ca.crt\"" Dec 12 15:21:05 crc kubenswrapper[5123]: I1212 15:21:05.143747 5123 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/b1b19ccb-68d5-48be-b88d-9db2aa1cd995-service-ca\") pod \"cluster-version-operator-7c9b9cfd6-4bg4c\" (UID: \"b1b19ccb-68d5-48be-b88d-9db2aa1cd995\") " pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-4bg4c" Dec 12 15:21:05 crc kubenswrapper[5123]: I1212 15:21:05.143837 5123 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/b1b19ccb-68d5-48be-b88d-9db2aa1cd995-kube-api-access\") pod \"cluster-version-operator-7c9b9cfd6-4bg4c\" (UID: \"b1b19ccb-68d5-48be-b88d-9db2aa1cd995\") " pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-4bg4c" Dec 12 15:21:05 crc kubenswrapper[5123]: I1212 15:21:05.143904 5123 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b1b19ccb-68d5-48be-b88d-9db2aa1cd995-serving-cert\") pod \"cluster-version-operator-7c9b9cfd6-4bg4c\" (UID: \"b1b19ccb-68d5-48be-b88d-9db2aa1cd995\") " pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-4bg4c" Dec 12 15:21:05 crc kubenswrapper[5123]: I1212 15:21:05.144023 5123 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/b1b19ccb-68d5-48be-b88d-9db2aa1cd995-etc-ssl-certs\") pod \"cluster-version-operator-7c9b9cfd6-4bg4c\" (UID: \"b1b19ccb-68d5-48be-b88d-9db2aa1cd995\") " pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-4bg4c" Dec 12 15:21:05 crc kubenswrapper[5123]: I1212 15:21:05.144096 5123 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/b1b19ccb-68d5-48be-b88d-9db2aa1cd995-etc-cvo-updatepayloads\") pod 
\"cluster-version-operator-7c9b9cfd6-4bg4c\" (UID: \"b1b19ccb-68d5-48be-b88d-9db2aa1cd995\") " pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-4bg4c" Dec 12 15:21:05 crc kubenswrapper[5123]: I1212 15:21:05.245979 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/b1b19ccb-68d5-48be-b88d-9db2aa1cd995-etc-cvo-updatepayloads\") pod \"cluster-version-operator-7c9b9cfd6-4bg4c\" (UID: \"b1b19ccb-68d5-48be-b88d-9db2aa1cd995\") " pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-4bg4c" Dec 12 15:21:05 crc kubenswrapper[5123]: I1212 15:21:05.246097 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/b1b19ccb-68d5-48be-b88d-9db2aa1cd995-service-ca\") pod \"cluster-version-operator-7c9b9cfd6-4bg4c\" (UID: \"b1b19ccb-68d5-48be-b88d-9db2aa1cd995\") " pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-4bg4c" Dec 12 15:21:05 crc kubenswrapper[5123]: I1212 15:21:05.246135 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/b1b19ccb-68d5-48be-b88d-9db2aa1cd995-kube-api-access\") pod \"cluster-version-operator-7c9b9cfd6-4bg4c\" (UID: \"b1b19ccb-68d5-48be-b88d-9db2aa1cd995\") " pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-4bg4c" Dec 12 15:21:05 crc kubenswrapper[5123]: I1212 15:21:05.246162 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b1b19ccb-68d5-48be-b88d-9db2aa1cd995-serving-cert\") pod \"cluster-version-operator-7c9b9cfd6-4bg4c\" (UID: \"b1b19ccb-68d5-48be-b88d-9db2aa1cd995\") " pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-4bg4c" Dec 12 15:21:05 crc kubenswrapper[5123]: I1212 15:21:05.246196 5123 operation_generator.go:615] 
"MountVolume.SetUp succeeded for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/b1b19ccb-68d5-48be-b88d-9db2aa1cd995-etc-cvo-updatepayloads\") pod \"cluster-version-operator-7c9b9cfd6-4bg4c\" (UID: \"b1b19ccb-68d5-48be-b88d-9db2aa1cd995\") " pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-4bg4c" Dec 12 15:21:05 crc kubenswrapper[5123]: I1212 15:21:05.246261 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/b1b19ccb-68d5-48be-b88d-9db2aa1cd995-etc-ssl-certs\") pod \"cluster-version-operator-7c9b9cfd6-4bg4c\" (UID: \"b1b19ccb-68d5-48be-b88d-9db2aa1cd995\") " pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-4bg4c" Dec 12 15:21:05 crc kubenswrapper[5123]: I1212 15:21:05.246353 5123 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/b1b19ccb-68d5-48be-b88d-9db2aa1cd995-etc-ssl-certs\") pod \"cluster-version-operator-7c9b9cfd6-4bg4c\" (UID: \"b1b19ccb-68d5-48be-b88d-9db2aa1cd995\") " pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-4bg4c" Dec 12 15:21:05 crc kubenswrapper[5123]: I1212 15:21:05.247826 5123 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/b1b19ccb-68d5-48be-b88d-9db2aa1cd995-service-ca\") pod \"cluster-version-operator-7c9b9cfd6-4bg4c\" (UID: \"b1b19ccb-68d5-48be-b88d-9db2aa1cd995\") " pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-4bg4c" Dec 12 15:21:05 crc kubenswrapper[5123]: I1212 15:21:05.269420 5123 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b1b19ccb-68d5-48be-b88d-9db2aa1cd995-serving-cert\") pod \"cluster-version-operator-7c9b9cfd6-4bg4c\" (UID: \"b1b19ccb-68d5-48be-b88d-9db2aa1cd995\") " 
pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-4bg4c" Dec 12 15:21:05 crc kubenswrapper[5123]: I1212 15:21:05.269914 5123 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/b1b19ccb-68d5-48be-b88d-9db2aa1cd995-kube-api-access\") pod \"cluster-version-operator-7c9b9cfd6-4bg4c\" (UID: \"b1b19ccb-68d5-48be-b88d-9db2aa1cd995\") " pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-4bg4c" Dec 12 15:21:05 crc kubenswrapper[5123]: I1212 15:21:05.554850 5123 certificate_manager.go:566] "Rotating certificates" logger="kubernetes.io/kubelet-serving" Dec 12 15:21:05 crc kubenswrapper[5123]: I1212 15:21:05.555115 5123 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-4bg4c" Dec 12 15:21:05 crc kubenswrapper[5123]: I1212 15:21:05.572562 5123 reflector.go:430] "Caches populated" logger="kubernetes.io/kubelet-serving" type="*v1.CertificateSigningRequest" reflector="k8s.io/client-go/tools/watch/informerwatcher.go:162" Dec 12 15:21:05 crc kubenswrapper[5123]: I1212 15:21:05.580838 5123 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-c7cpz" event={"ID":"4ba336c2-0d9e-485a-9785-761f97f2601a","Type":"ContainerStarted","Data":"002ec6cbd941ba0a26b390b7c87f1fcca86b58149647a279144bdf9a48aba978"} Dec 12 15:21:05 crc kubenswrapper[5123]: I1212 15:21:05.642711 5123 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl" Dec 12 15:21:05 crc kubenswrapper[5123]: E1212 15:21:05.643809 5123 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239" Dec 12 15:21:06 crc kubenswrapper[5123]: I1212 15:21:06.614908 5123 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-z24lm" event={"ID":"5fa0afe2-e3d6-43e2-8e27-ea16e1f45dea","Type":"ContainerDied","Data":"8a5f29c1be40ed5d6f1e17b3afb10d8cabb5aca589e210435fc387933260bec3"} Dec 12 15:21:06 crc kubenswrapper[5123]: I1212 15:21:06.614833 5123 generic.go:358] "Generic (PLEG): container finished" podID="5fa0afe2-e3d6-43e2-8e27-ea16e1f45dea" containerID="8a5f29c1be40ed5d6f1e17b3afb10d8cabb5aca589e210435fc387933260bec3" exitCode=0 Dec 12 15:21:06 crc kubenswrapper[5123]: I1212 15:21:06.619018 5123 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-4bg4c" event={"ID":"b1b19ccb-68d5-48be-b88d-9db2aa1cd995","Type":"ContainerStarted","Data":"d731a7cd50dce16376ccbd83c8068eec97764b97b473f617c46548f0b197d06c"} Dec 12 15:21:06 crc kubenswrapper[5123]: I1212 15:21:06.627733 5123 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-c7cpz" event={"ID":"4ba336c2-0d9e-485a-9785-761f97f2601a","Type":"ContainerStarted","Data":"d44176673eaef06ae636c84b82c9cab9190707d7e960c7579e3c7f42c8738910"} Dec 12 15:21:06 crc kubenswrapper[5123]: I1212 15:21:06.639050 5123 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hmprz" Dec 12 15:21:06 crc kubenswrapper[5123]: I1212 15:21:06.639116 5123 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Dec 12 15:21:06 crc kubenswrapper[5123]: E1212 15:21:06.639242 5123 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-hmprz" podUID="e6c3a697-51e4-44dd-a38c-3287db85ce50" Dec 12 15:21:06 crc kubenswrapper[5123]: E1212 15:21:06.639361 5123 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141" Dec 12 15:21:06 crc kubenswrapper[5123]: I1212 15:21:06.639456 5123 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Dec 12 15:21:06 crc kubenswrapper[5123]: E1212 15:21:06.639506 5123 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440" Dec 12 15:21:07 crc kubenswrapper[5123]: I1212 15:21:07.639155 5123 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl" Dec 12 15:21:07 crc kubenswrapper[5123]: E1212 15:21:07.639436 5123 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239" Dec 12 15:21:07 crc kubenswrapper[5123]: I1212 15:21:07.690455 5123 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-4bg4c" event={"ID":"b1b19ccb-68d5-48be-b88d-9db2aa1cd995","Type":"ContainerStarted","Data":"03b4e20af6412c16708d1af3f7bc92bad0d9af7cff8d5c85ddca4f8a0ec4be75"} Dec 12 15:21:07 crc kubenswrapper[5123]: I1212 15:21:07.690874 5123 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-c7cpz" event={"ID":"4ba336c2-0d9e-485a-9785-761f97f2601a","Type":"ContainerStarted","Data":"13ba6a096a2ba4b0b9afbd50d11eba0d8cdb25e23d1b4b26e18c3201ccf516db"} Dec 12 15:21:07 crc kubenswrapper[5123]: I1212 15:21:07.690894 5123 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-c7cpz" event={"ID":"4ba336c2-0d9e-485a-9785-761f97f2601a","Type":"ContainerStarted","Data":"142c04aec6ca17e608747ac86af8a88d24797e0d10c03531d9e48b83cfb55471"} Dec 12 15:21:08 crc kubenswrapper[5123]: I1212 15:21:08.451037 5123 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 12 15:21:08 crc kubenswrapper[5123]: I1212 
15:21:08.451168 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Dec 12 15:21:08 crc kubenswrapper[5123]: I1212 15:21:08.451219 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Dec 12 15:21:08 crc kubenswrapper[5123]: I1212 15:21:08.451298 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-l7w75\" (UniqueName: \"kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75\") pod \"network-check-source-5bb8f5cd97-xdvz5\" (UID: \"f863fff9-286a-45fa-b8f0-8a86994b8440\") " pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Dec 12 15:21:08 crc kubenswrapper[5123]: E1212 15:21:08.451373 5123 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-12 15:21:16.451331426 +0000 UTC m=+105.261283937 (durationBeforeRetry 8s). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 12 15:21:08 crc kubenswrapper[5123]: E1212 15:21:08.451417 5123 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered
Dec 12 15:21:08 crc kubenswrapper[5123]: E1212 15:21:08.451500 5123 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Dec 12 15:21:08 crc kubenswrapper[5123]: E1212 15:21:08.451528 5123 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Dec 12 15:21:08 crc kubenswrapper[5123]: E1212 15:21:08.451544 5123 projected.go:194] Error preparing data for projected volume kube-api-access-l7w75 for pod openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Dec 12 15:21:08 crc kubenswrapper[5123]: E1212 15:21:08.451445 5123 secret.go:189] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered
Dec 12 15:21:08 crc kubenswrapper[5123]: E1212 15:21:08.451573 5123 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf podName:6a9ae5f6-97bd-46ac-bafa-ca1b4452a141 nodeName:}" failed. No retries permitted until 2025-12-12 15:21:16.451541613 +0000 UTC m=+105.261494154 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf") pod "networking-console-plugin-5ff7774fd9-nljh6" (UID: "6a9ae5f6-97bd-46ac-bafa-ca1b4452a141") : object "openshift-network-console"/"networking-console-plugin" not registered
Dec 12 15:21:08 crc kubenswrapper[5123]: E1212 15:21:08.451632 5123 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75 podName:f863fff9-286a-45fa-b8f0-8a86994b8440 nodeName:}" failed. No retries permitted until 2025-12-12 15:21:16.451596244 +0000 UTC m=+105.261548815 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-l7w75" (UniqueName: "kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75") pod "network-check-source-5bb8f5cd97-xdvz5" (UID: "f863fff9-286a-45fa-b8f0-8a86994b8440") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Dec 12 15:21:08 crc kubenswrapper[5123]: E1212 15:21:08.451667 5123 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert podName:6a9ae5f6-97bd-46ac-bafa-ca1b4452a141 nodeName:}" failed. No retries permitted until 2025-12-12 15:21:16.451659806 +0000 UTC m=+105.261612317 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert") pod "networking-console-plugin-5ff7774fd9-nljh6" (UID: "6a9ae5f6-97bd-46ac-bafa-ca1b4452a141") : object "openshift-network-console"/"networking-console-plugin-cert" not registered
Dec 12 15:21:08 crc kubenswrapper[5123]: I1212 15:21:08.644973 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/e6c3a697-51e4-44dd-a38c-3287db85ce50-metrics-certs\") pod \"network-metrics-daemon-hmprz\" (UID: \"e6c3a697-51e4-44dd-a38c-3287db85ce50\") " pod="openshift-multus/network-metrics-daemon-hmprz"
Dec 12 15:21:08 crc kubenswrapper[5123]: I1212 15:21:08.645096 5123 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hmprz"
Dec 12 15:21:08 crc kubenswrapper[5123]: I1212 15:21:08.645155 5123 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5"
Dec 12 15:21:08 crc kubenswrapper[5123]: I1212 15:21:08.645175 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-gwt8b\" (UniqueName: \"kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b\") pod \"network-check-target-fhkjl\" (UID: \"17b87002-b798-480a-8e17-83053d698239\") " pod="openshift-network-diagnostics/network-check-target-fhkjl"
Dec 12 15:21:08 crc kubenswrapper[5123]: I1212 15:21:08.645096 5123 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6"
Dec 12 15:21:08 crc kubenswrapper[5123]: E1212 15:21:08.645248 5123 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered
Dec 12 15:21:08 crc kubenswrapper[5123]: E1212 15:21:08.645347 5123 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Dec 12 15:21:08 crc kubenswrapper[5123]: E1212 15:21:08.645389 5123 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e6c3a697-51e4-44dd-a38c-3287db85ce50-metrics-certs podName:e6c3a697-51e4-44dd-a38c-3287db85ce50 nodeName:}" failed. No retries permitted until 2025-12-12 15:21:16.645351575 +0000 UTC m=+105.455304086 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/e6c3a697-51e4-44dd-a38c-3287db85ce50-metrics-certs") pod "network-metrics-daemon-hmprz" (UID: "e6c3a697-51e4-44dd-a38c-3287db85ce50") : object "openshift-multus"/"metrics-daemon-secret" not registered
Dec 12 15:21:08 crc kubenswrapper[5123]: E1212 15:21:08.645341 5123 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-hmprz" podUID="e6c3a697-51e4-44dd-a38c-3287db85ce50"
Dec 12 15:21:08 crc kubenswrapper[5123]: E1212 15:21:08.645398 5123 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Dec 12 15:21:08 crc kubenswrapper[5123]: E1212 15:21:08.645426 5123 projected.go:194] Error preparing data for projected volume kube-api-access-gwt8b for pod openshift-network-diagnostics/network-check-target-fhkjl: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Dec 12 15:21:08 crc kubenswrapper[5123]: E1212 15:21:08.645426 5123 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440"
Dec 12 15:21:08 crc kubenswrapper[5123]: E1212 15:21:08.645478 5123 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b podName:17b87002-b798-480a-8e17-83053d698239 nodeName:}" failed. No retries permitted until 2025-12-12 15:21:16.645466889 +0000 UTC m=+105.455419400 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-gwt8b" (UniqueName: "kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b") pod "network-check-target-fhkjl" (UID: "17b87002-b798-480a-8e17-83053d698239") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Dec 12 15:21:08 crc kubenswrapper[5123]: E1212 15:21:08.645502 5123 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141"
Dec 12 15:21:08 crc kubenswrapper[5123]: I1212 15:21:08.665100 5123 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-z24lm" event={"ID":"5fa0afe2-e3d6-43e2-8e27-ea16e1f45dea","Type":"ContainerStarted","Data":"9a8f1e40e67cf00499c5f38d83e1aa9ffb8962be9e26b99e24d6fbd638e844fd"}
Dec 12 15:21:08 crc kubenswrapper[5123]: I1212 15:21:08.697728 5123 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-4bg4c" podStartSLOduration=64.697687391 podStartE2EDuration="1m4.697687391s" podCreationTimestamp="2025-12-12 15:20:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 15:21:07.704908681 +0000 UTC m=+96.514861222" watchObservedRunningTime="2025-12-12 15:21:08.697687391 +0000 UTC m=+97.507639892"
Dec 12 15:21:09 crc kubenswrapper[5123]: I1212 15:21:09.638794 5123 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl"
Dec 12 15:21:09 crc kubenswrapper[5123]: E1212 15:21:09.639160 5123 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239"
Dec 12 15:21:10 crc kubenswrapper[5123]: I1212 15:21:10.638744 5123 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5"
Dec 12 15:21:10 crc kubenswrapper[5123]: E1212 15:21:10.639298 5123 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440"
Dec 12 15:21:10 crc kubenswrapper[5123]: I1212 15:21:10.638822 5123 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hmprz"
Dec 12 15:21:10 crc kubenswrapper[5123]: E1212 15:21:10.639410 5123 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-hmprz" podUID="e6c3a697-51e4-44dd-a38c-3287db85ce50"
Dec 12 15:21:10 crc kubenswrapper[5123]: I1212 15:21:10.638739 5123 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6"
Dec 12 15:21:10 crc kubenswrapper[5123]: E1212 15:21:10.639487 5123 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141"
Dec 12 15:21:11 crc kubenswrapper[5123]: I1212 15:21:11.640682 5123 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl"
Dec 12 15:21:11 crc kubenswrapper[5123]: E1212 15:21:11.640867 5123 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239"
Dec 12 15:21:11 crc kubenswrapper[5123]: I1212 15:21:11.759425 5123 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-c7cpz" event={"ID":"4ba336c2-0d9e-485a-9785-761f97f2601a","Type":"ContainerStarted","Data":"4df8f61665a45afd71d2b5f4b119db8cb83b99a47388b68baf7e27ed2c4f2c9f"}
Dec 12 15:21:12 crc kubenswrapper[5123]: I1212 15:21:12.639479 5123 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hmprz"
Dec 12 15:21:12 crc kubenswrapper[5123]: I1212 15:21:12.639564 5123 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5"
Dec 12 15:21:12 crc kubenswrapper[5123]: I1212 15:21:12.639489 5123 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6"
Dec 12 15:21:12 crc kubenswrapper[5123]: E1212 15:21:12.639710 5123 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-hmprz" podUID="e6c3a697-51e4-44dd-a38c-3287db85ce50"
Dec 12 15:21:12 crc kubenswrapper[5123]: E1212 15:21:12.639806 5123 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440"
Dec 12 15:21:12 crc kubenswrapper[5123]: E1212 15:21:12.639900 5123 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141"
Dec 12 15:21:13 crc kubenswrapper[5123]: I1212 15:21:13.640108 5123 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl"
Dec 12 15:21:13 crc kubenswrapper[5123]: E1212 15:21:13.640683 5123 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239"
Dec 12 15:21:13 crc kubenswrapper[5123]: I1212 15:21:13.700593 5123 generic.go:358] "Generic (PLEG): container finished" podID="5fa0afe2-e3d6-43e2-8e27-ea16e1f45dea" containerID="9a8f1e40e67cf00499c5f38d83e1aa9ffb8962be9e26b99e24d6fbd638e844fd" exitCode=0
Dec 12 15:21:13 crc kubenswrapper[5123]: I1212 15:21:13.700811 5123 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-z24lm" event={"ID":"5fa0afe2-e3d6-43e2-8e27-ea16e1f45dea","Type":"ContainerDied","Data":"9a8f1e40e67cf00499c5f38d83e1aa9ffb8962be9e26b99e24d6fbd638e844fd"}
Dec 12 15:21:14 crc kubenswrapper[5123]: I1212 15:21:14.639045 5123 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hmprz"
Dec 12 15:21:14 crc kubenswrapper[5123]: I1212 15:21:14.639120 5123 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5"
Dec 12 15:21:14 crc kubenswrapper[5123]: I1212 15:21:14.639300 5123 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6"
Dec 12 15:21:14 crc kubenswrapper[5123]: E1212 15:21:14.639302 5123 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-hmprz" podUID="e6c3a697-51e4-44dd-a38c-3287db85ce50"
Dec 12 15:21:14 crc kubenswrapper[5123]: E1212 15:21:14.639646 5123 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141"
Dec 12 15:21:14 crc kubenswrapper[5123]: E1212 15:21:14.639833 5123 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440"
Dec 12 15:21:14 crc kubenswrapper[5123]: I1212 15:21:14.640278 5123 scope.go:117] "RemoveContainer" containerID="4f213fed9087642e2d266cffcd6b09d79db89357a2e593aab2f1f5f5de1625db"
Dec 12 15:21:14 crc kubenswrapper[5123]: E1212 15:21:14.640607 5123 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944"
Dec 12 15:21:14 crc kubenswrapper[5123]: I1212 15:21:14.708107 5123 generic.go:358] "Generic (PLEG): container finished" podID="5fa0afe2-e3d6-43e2-8e27-ea16e1f45dea" containerID="e920da66f516822954cc54897b41dba19a67973c017dc399816f5173801c29bb" exitCode=0
Dec 12 15:21:14 crc kubenswrapper[5123]: I1212 15:21:14.708212 5123 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-z24lm" event={"ID":"5fa0afe2-e3d6-43e2-8e27-ea16e1f45dea","Type":"ContainerDied","Data":"e920da66f516822954cc54897b41dba19a67973c017dc399816f5173801c29bb"}
Dec 12 15:21:14 crc kubenswrapper[5123]: I1212 15:21:14.713542 5123 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-c7cpz" event={"ID":"4ba336c2-0d9e-485a-9785-761f97f2601a","Type":"ContainerStarted","Data":"75a1894691d0a31baf40f0164ded851c4ee47384a27e045ba24aa78b7377848f"}
Dec 12 15:21:14 crc kubenswrapper[5123]: I1212 15:21:14.898400 5123 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-ovn-kubernetes/ovnkube-node-c7cpz"
Dec 12 15:21:14 crc kubenswrapper[5123]: I1212 15:21:14.898474 5123 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-ovn-kubernetes/ovnkube-node-c7cpz"
Dec 12 15:21:14 crc kubenswrapper[5123]: I1212 15:21:14.942338 5123 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-c7cpz" podStartSLOduration=70.942310789 podStartE2EDuration="1m10.942310789s" podCreationTimestamp="2025-12-12 15:20:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 15:21:14.941930077 +0000 UTC m=+103.751882608" watchObservedRunningTime="2025-12-12 15:21:14.942310789 +0000 UTC m=+103.752263300"
Dec 12 15:21:15 crc kubenswrapper[5123]: I1212 15:21:15.023780 5123 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-c7cpz"
Dec 12 15:21:15 crc kubenswrapper[5123]: I1212 15:21:15.638961 5123 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl"
Dec 12 15:21:15 crc kubenswrapper[5123]: E1212 15:21:15.639253 5123 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239"
Dec 12 15:21:15 crc kubenswrapper[5123]: I1212 15:21:15.722583 5123 generic.go:358] "Generic (PLEG): container finished" podID="5fa0afe2-e3d6-43e2-8e27-ea16e1f45dea" containerID="90b085a3c5faaf8e05cd4227a07b9d36ce8bf569a75bad7088f89924e4010853" exitCode=0
Dec 12 15:21:15 crc kubenswrapper[5123]: I1212 15:21:15.722644 5123 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-z24lm" event={"ID":"5fa0afe2-e3d6-43e2-8e27-ea16e1f45dea","Type":"ContainerDied","Data":"90b085a3c5faaf8e05cd4227a07b9d36ce8bf569a75bad7088f89924e4010853"}
Dec 12 15:21:15 crc kubenswrapper[5123]: I1212 15:21:15.723925 5123 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-ovn-kubernetes/ovnkube-node-c7cpz"
Dec 12 15:21:15 crc kubenswrapper[5123]: I1212 15:21:15.761328 5123 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-c7cpz"
Dec 12 15:21:16 crc kubenswrapper[5123]: I1212 15:21:16.469047 5123 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Dec 12 15:21:16 crc kubenswrapper[5123]: I1212 15:21:16.469257 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6"
Dec 12 15:21:16 crc kubenswrapper[5123]: I1212 15:21:16.469328 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6"
Dec 12 15:21:16 crc kubenswrapper[5123]: I1212 15:21:16.469359 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-l7w75\" (UniqueName: \"kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75\") pod \"network-check-source-5bb8f5cd97-xdvz5\" (UID: \"f863fff9-286a-45fa-b8f0-8a86994b8440\") " pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5"
Dec 12 15:21:16 crc kubenswrapper[5123]: E1212 15:21:16.469561 5123 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Dec 12 15:21:16 crc kubenswrapper[5123]: E1212 15:21:16.469594 5123 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Dec 12 15:21:16 crc kubenswrapper[5123]: E1212 15:21:16.469614 5123 projected.go:194] Error preparing data for projected volume kube-api-access-l7w75 for pod openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Dec 12 15:21:16 crc kubenswrapper[5123]: E1212 15:21:16.469718 5123 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75 podName:f863fff9-286a-45fa-b8f0-8a86994b8440 nodeName:}" failed. No retries permitted until 2025-12-12 15:21:32.469685297 +0000 UTC m=+121.279637808 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-l7w75" (UniqueName: "kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75") pod "network-check-source-5bb8f5cd97-xdvz5" (UID: "f863fff9-286a-45fa-b8f0-8a86994b8440") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Dec 12 15:21:16 crc kubenswrapper[5123]: E1212 15:21:16.470352 5123 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-12 15:21:32.470336437 +0000 UTC m=+121.280288958 (durationBeforeRetry 16s). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 12 15:21:16 crc kubenswrapper[5123]: E1212 15:21:16.470434 5123 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered
Dec 12 15:21:16 crc kubenswrapper[5123]: E1212 15:21:16.470471 5123 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf podName:6a9ae5f6-97bd-46ac-bafa-ca1b4452a141 nodeName:}" failed. No retries permitted until 2025-12-12 15:21:32.470463061 +0000 UTC m=+121.280415572 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf") pod "networking-console-plugin-5ff7774fd9-nljh6" (UID: "6a9ae5f6-97bd-46ac-bafa-ca1b4452a141") : object "openshift-network-console"/"networking-console-plugin" not registered
Dec 12 15:21:16 crc kubenswrapper[5123]: E1212 15:21:16.470538 5123 secret.go:189] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered
Dec 12 15:21:16 crc kubenswrapper[5123]: E1212 15:21:16.470573 5123 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert podName:6a9ae5f6-97bd-46ac-bafa-ca1b4452a141 nodeName:}" failed. No retries permitted until 2025-12-12 15:21:32.470562284 +0000 UTC m=+121.280514795 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert") pod "networking-console-plugin-5ff7774fd9-nljh6" (UID: "6a9ae5f6-97bd-46ac-bafa-ca1b4452a141") : object "openshift-network-console"/"networking-console-plugin-cert" not registered
Dec 12 15:21:16 crc kubenswrapper[5123]: I1212 15:21:16.797255 5123 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hmprz"
Dec 12 15:21:16 crc kubenswrapper[5123]: E1212 15:21:16.797528 5123 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-hmprz" podUID="e6c3a697-51e4-44dd-a38c-3287db85ce50"
Dec 12 15:21:16 crc kubenswrapper[5123]: I1212 15:21:16.797595 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/e6c3a697-51e4-44dd-a38c-3287db85ce50-metrics-certs\") pod \"network-metrics-daemon-hmprz\" (UID: \"e6c3a697-51e4-44dd-a38c-3287db85ce50\") " pod="openshift-multus/network-metrics-daemon-hmprz"
Dec 12 15:21:16 crc kubenswrapper[5123]: I1212 15:21:16.797662 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-gwt8b\" (UniqueName: \"kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b\") pod \"network-check-target-fhkjl\" (UID: \"17b87002-b798-480a-8e17-83053d698239\") " pod="openshift-network-diagnostics/network-check-target-fhkjl"
Dec 12 15:21:16 crc kubenswrapper[5123]: E1212 15:21:16.798023 5123 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Dec 12 15:21:16 crc kubenswrapper[5123]: E1212 15:21:16.798055 5123 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Dec 12 15:21:16 crc kubenswrapper[5123]: E1212 15:21:16.798071 5123 projected.go:194] Error preparing data for projected volume kube-api-access-gwt8b for pod openshift-network-diagnostics/network-check-target-fhkjl: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Dec 12 15:21:16 crc kubenswrapper[5123]: E1212 15:21:16.798148 5123 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b podName:17b87002-b798-480a-8e17-83053d698239 nodeName:}" failed. No retries permitted until 2025-12-12 15:21:32.798117149 +0000 UTC m=+121.608069660 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-gwt8b" (UniqueName: "kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b") pod "network-check-target-fhkjl" (UID: "17b87002-b798-480a-8e17-83053d698239") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Dec 12 15:21:16 crc kubenswrapper[5123]: E1212 15:21:16.798252 5123 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered
Dec 12 15:21:16 crc kubenswrapper[5123]: E1212 15:21:16.798288 5123 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e6c3a697-51e4-44dd-a38c-3287db85ce50-metrics-certs podName:e6c3a697-51e4-44dd-a38c-3287db85ce50 nodeName:}" failed. No retries permitted until 2025-12-12 15:21:32.798277724 +0000 UTC m=+121.608230235 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/e6c3a697-51e4-44dd-a38c-3287db85ce50-metrics-certs") pod "network-metrics-daemon-hmprz" (UID: "e6c3a697-51e4-44dd-a38c-3287db85ce50") : object "openshift-multus"/"metrics-daemon-secret" not registered
Dec 12 15:21:16 crc kubenswrapper[5123]: I1212 15:21:16.798462 5123 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6"
Dec 12 15:21:16 crc kubenswrapper[5123]: E1212 15:21:16.798572 5123 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141"
Dec 12 15:21:16 crc kubenswrapper[5123]: I1212 15:21:16.798594 5123 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5"
Dec 12 15:21:16 crc kubenswrapper[5123]: E1212 15:21:16.798682 5123 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440"
Dec 12 15:21:17 crc kubenswrapper[5123]: I1212 15:21:17.639400 5123 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl"
Dec 12 15:21:17 crc kubenswrapper[5123]: E1212 15:21:17.639556 5123 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239"
Dec 12 15:21:17 crc kubenswrapper[5123]: I1212 15:21:17.807319 5123 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-z24lm" event={"ID":"5fa0afe2-e3d6-43e2-8e27-ea16e1f45dea","Type":"ContainerStarted","Data":"50aa5337451854062ea125999d5de1dd901889864b0bb940cc1ad601209c78b6"}
Dec 12 15:21:17 crc kubenswrapper[5123]: I1212 15:21:17.809067 5123 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-5jnd7" event={"ID":"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc","Type":"ContainerStarted","Data":"0b99f64a4aba509c6b331de20513521443d99e7f4717d7116fcfc90081de8360"}
Dec 12 15:21:18 crc kubenswrapper[5123]: I1212 15:21:18.639325 5123 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5"
Dec 12 15:21:18 crc kubenswrapper[5123]: I1212 15:21:18.639337 5123 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hmprz"
Dec 12 15:21:18 crc kubenswrapper[5123]: E1212 15:21:18.639479 5123 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440"
Dec 12 15:21:18 crc kubenswrapper[5123]: I1212 15:21:18.639348 5123 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6"
Dec 12 15:21:18 crc kubenswrapper[5123]: E1212 15:21:18.639669 5123 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-hmprz" podUID="e6c3a697-51e4-44dd-a38c-3287db85ce50"
Dec 12 15:21:18 crc kubenswrapper[5123]: E1212 15:21:18.639770 5123 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141"
Dec 12 15:21:19 crc kubenswrapper[5123]: I1212 15:21:19.648322 5123 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl"
Dec 12 15:21:19 crc kubenswrapper[5123]: E1212 15:21:19.648478 5123 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239" Dec 12 15:21:19 crc kubenswrapper[5123]: I1212 15:21:19.827582 5123 generic.go:358] "Generic (PLEG): container finished" podID="5fa0afe2-e3d6-43e2-8e27-ea16e1f45dea" containerID="50aa5337451854062ea125999d5de1dd901889864b0bb940cc1ad601209c78b6" exitCode=0 Dec 12 15:21:19 crc kubenswrapper[5123]: I1212 15:21:19.827687 5123 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-z24lm" event={"ID":"5fa0afe2-e3d6-43e2-8e27-ea16e1f45dea","Type":"ContainerDied","Data":"50aa5337451854062ea125999d5de1dd901889864b0bb940cc1ad601209c78b6"} Dec 12 15:21:20 crc kubenswrapper[5123]: I1212 15:21:20.639166 5123 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Dec 12 15:21:20 crc kubenswrapper[5123]: E1212 15:21:20.639413 5123 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141" Dec 12 15:21:20 crc kubenswrapper[5123]: I1212 15:21:20.639163 5123 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hmprz" Dec 12 15:21:20 crc kubenswrapper[5123]: E1212 15:21:20.639519 5123 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-hmprz" podUID="e6c3a697-51e4-44dd-a38c-3287db85ce50" Dec 12 15:21:20 crc kubenswrapper[5123]: I1212 15:21:20.639166 5123 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Dec 12 15:21:20 crc kubenswrapper[5123]: E1212 15:21:20.639577 5123 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440" Dec 12 15:21:20 crc kubenswrapper[5123]: I1212 15:21:20.891382 5123 generic.go:358] "Generic (PLEG): container finished" podID="5fa0afe2-e3d6-43e2-8e27-ea16e1f45dea" containerID="1a6946a2335589b5bc22b710890b167591ba6c68b7323c760379b6cf3af0d00e" exitCode=0 Dec 12 15:21:20 crc kubenswrapper[5123]: I1212 15:21:20.891584 5123 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-z24lm" event={"ID":"5fa0afe2-e3d6-43e2-8e27-ea16e1f45dea","Type":"ContainerDied","Data":"1a6946a2335589b5bc22b710890b167591ba6c68b7323c760379b6cf3af0d00e"} Dec 12 15:21:21 crc kubenswrapper[5123]: I1212 15:21:21.140145 5123 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-hmprz"] Dec 12 15:21:21 crc kubenswrapper[5123]: I1212 15:21:21.140773 5123 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-hmprz" Dec 12 15:21:21 crc kubenswrapper[5123]: E1212 15:21:21.140916 5123 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-hmprz" podUID="e6c3a697-51e4-44dd-a38c-3287db85ce50" Dec 12 15:21:21 crc kubenswrapper[5123]: I1212 15:21:21.644464 5123 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl" Dec 12 15:21:21 crc kubenswrapper[5123]: E1212 15:21:21.644715 5123 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239" Dec 12 15:21:21 crc kubenswrapper[5123]: I1212 15:21:21.902261 5123 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-z24lm" event={"ID":"5fa0afe2-e3d6-43e2-8e27-ea16e1f45dea","Type":"ContainerStarted","Data":"04144d68224e9ee8fef8ac310fa389b9ef19a9012e9c72e99766f534e2343585"} Dec 12 15:21:21 crc kubenswrapper[5123]: I1212 15:21:21.935546 5123 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-additional-cni-plugins-z24lm" podStartSLOduration=77.935502211 podStartE2EDuration="1m17.935502211s" podCreationTimestamp="2025-12-12 15:20:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 15:21:21.934796908 +0000 UTC m=+110.744749429" watchObservedRunningTime="2025-12-12 15:21:21.935502211 +0000 UTC m=+110.745454742" Dec 12 15:21:22 crc kubenswrapper[5123]: I1212 15:21:22.638918 5123 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hmprz" Dec 12 15:21:22 crc kubenswrapper[5123]: E1212 15:21:22.639208 5123 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-hmprz" podUID="e6c3a697-51e4-44dd-a38c-3287db85ce50" Dec 12 15:21:22 crc kubenswrapper[5123]: I1212 15:21:22.639399 5123 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Dec 12 15:21:22 crc kubenswrapper[5123]: E1212 15:21:22.639494 5123 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141" Dec 12 15:21:22 crc kubenswrapper[5123]: I1212 15:21:22.639596 5123 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Dec 12 15:21:22 crc kubenswrapper[5123]: E1212 15:21:22.639696 5123 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440" Dec 12 15:21:23 crc kubenswrapper[5123]: I1212 15:21:23.658177 5123 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hmprz" Dec 12 15:21:23 crc kubenswrapper[5123]: I1212 15:21:23.658177 5123 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl" Dec 12 15:21:23 crc kubenswrapper[5123]: E1212 15:21:23.658566 5123 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-hmprz" podUID="e6c3a697-51e4-44dd-a38c-3287db85ce50" Dec 12 15:21:23 crc kubenswrapper[5123]: E1212 15:21:23.658630 5123 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239" Dec 12 15:21:24 crc kubenswrapper[5123]: I1212 15:21:24.639160 5123 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Dec 12 15:21:24 crc kubenswrapper[5123]: I1212 15:21:24.639258 5123 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Dec 12 15:21:24 crc kubenswrapper[5123]: E1212 15:21:24.639873 5123 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141" Dec 12 15:21:24 crc kubenswrapper[5123]: E1212 15:21:24.639694 5123 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440" Dec 12 15:21:25 crc kubenswrapper[5123]: I1212 15:21:25.639452 5123 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl" Dec 12 15:21:25 crc kubenswrapper[5123]: I1212 15:21:25.639882 5123 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hmprz" Dec 12 15:21:25 crc kubenswrapper[5123]: E1212 15:21:25.640098 5123 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-hmprz" podUID="e6c3a697-51e4-44dd-a38c-3287db85ce50" Dec 12 15:21:25 crc kubenswrapper[5123]: E1212 15:21:25.640323 5123 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239" Dec 12 15:21:25 crc kubenswrapper[5123]: I1212 15:21:25.640634 5123 scope.go:117] "RemoveContainer" containerID="4f213fed9087642e2d266cffcd6b09d79db89357a2e593aab2f1f5f5de1625db" Dec 12 15:21:25 crc kubenswrapper[5123]: E1212 15:21:25.640940 5123 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" Dec 12 15:21:26 crc kubenswrapper[5123]: I1212 15:21:26.638921 5123 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Dec 12 15:21:26 crc kubenswrapper[5123]: I1212 15:21:26.638986 5123 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Dec 12 15:21:26 crc kubenswrapper[5123]: E1212 15:21:26.639116 5123 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440" Dec 12 15:21:26 crc kubenswrapper[5123]: E1212 15:21:26.639230 5123 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141" Dec 12 15:21:26 crc kubenswrapper[5123]: I1212 15:21:26.862113 5123 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeReady" Dec 12 15:21:26 crc kubenswrapper[5123]: I1212 15:21:26.862390 5123 kubelet_node_status.go:550] "Fast updating node status as it just became ready" Dec 12 15:21:26 crc kubenswrapper[5123]: I1212 15:21:26.908692 5123 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29425875-lxsft"] Dec 12 15:21:27 crc kubenswrapper[5123]: I1212 15:21:27.718788 5123 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl" Dec 12 15:21:27 crc kubenswrapper[5123]: I1212 15:21:27.718846 5123 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29425875-lxsft" Dec 12 15:21:27 crc kubenswrapper[5123]: I1212 15:21:27.719021 5123 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-hmprz" Dec 12 15:21:27 crc kubenswrapper[5123]: I1212 15:21:27.724834 5123 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-operator-lifecycle-manager\"/\"kube-root-ca.crt\"" Dec 12 15:21:27 crc kubenswrapper[5123]: I1212 15:21:27.725006 5123 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-diagnostics\"/\"kube-root-ca.crt\"" Dec 12 15:21:27 crc kubenswrapper[5123]: I1212 15:21:27.725122 5123 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"metrics-daemon-secret\"" Dec 12 15:21:27 crc kubenswrapper[5123]: I1212 15:21:27.725257 5123 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-diagnostics\"/\"openshift-service-ca.crt\"" Dec 12 15:21:27 crc kubenswrapper[5123]: I1212 15:21:27.725273 5123 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-operator-lifecycle-manager\"/\"collect-profiles-config\"" Dec 12 15:21:27 crc kubenswrapper[5123]: I1212 15:21:27.725407 5123 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-operator-lifecycle-manager\"/\"openshift-service-ca.crt\"" Dec 12 15:21:27 crc kubenswrapper[5123]: I1212 15:21:27.726605 5123 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-65b6cccf98-t4m4d"] Dec 12 15:21:27 crc kubenswrapper[5123]: I1212 15:21:27.728615 5123 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"metrics-daemon-sa-dockercfg-t8n29\"" Dec 12 15:21:27 crc kubenswrapper[5123]: I1212 15:21:27.728638 5123 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"collect-profiles-dockercfg-vfqp6\"" Dec 12 15:21:27 crc kubenswrapper[5123]: I1212 15:21:27.728689 5123 
reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"pprof-cert\"" Dec 12 15:21:27 crc kubenswrapper[5123]: I1212 15:21:27.730114 5123 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-console/console-64d44f6ddf-96rdx"] Dec 12 15:21:27 crc kubenswrapper[5123]: I1212 15:21:27.730278 5123 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-65b6cccf98-t4m4d" Dec 12 15:21:27 crc kubenswrapper[5123]: I1212 15:21:27.735453 5123 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-config-operator/openshift-config-operator-5777786469-9j9pt"] Dec 12 15:21:27 crc kubenswrapper[5123]: I1212 15:21:27.735692 5123 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager\"/\"client-ca\"" Dec 12 15:21:27 crc kubenswrapper[5123]: I1212 15:21:27.739980 5123 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-console/downloads-747b44746d-xhd9t"] Dec 12 15:21:27 crc kubenswrapper[5123]: I1212 15:21:27.740692 5123 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-5777786469-9j9pt" Dec 12 15:21:27 crc kubenswrapper[5123]: I1212 15:21:27.744877 5123 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-64d44f6ddf-96rdx" Dec 12 15:21:27 crc kubenswrapper[5123]: I1212 15:21:27.745110 5123 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/downloads-747b44746d-xhd9t" Dec 12 15:21:27 crc kubenswrapper[5123]: I1212 15:21:27.744665 5123 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-etcd-operator/etcd-operator-69b85846b6-7pgks"] Dec 12 15:21:27 crc kubenswrapper[5123]: I1212 15:21:27.746778 5123 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-config-operator\"/\"openshift-service-ca.crt\"" Dec 12 15:21:27 crc kubenswrapper[5123]: I1212 15:21:27.746781 5123 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager\"/\"config\"" Dec 12 15:21:27 crc kubenswrapper[5123]: I1212 15:21:27.753487 5123 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-controller-manager\"/\"serving-cert\"" Dec 12 15:21:27 crc kubenswrapper[5123]: I1212 15:21:27.753954 5123 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager\"/\"openshift-service-ca.crt\"" Dec 12 15:21:27 crc kubenswrapper[5123]: I1212 15:21:27.754200 5123 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-controller-manager\"/\"openshift-controller-manager-sa-dockercfg-djmfg\"" Dec 12 15:21:27 crc kubenswrapper[5123]: I1212 15:21:27.754498 5123 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager\"/\"kube-root-ca.crt\"" Dec 12 15:21:27 crc kubenswrapper[5123]: I1212 15:21:27.754904 5123 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-config-operator\"/\"config-operator-serving-cert\"" Dec 12 15:21:27 crc kubenswrapper[5123]: I1212 15:21:27.755841 5123 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-config-operator\"/\"openshift-config-operator-dockercfg-sjn6s\"" Dec 12 15:21:27 crc kubenswrapper[5123]: I1212 15:21:27.756025 5123 reflector.go:430] "Caches populated" 
type="*v1.ConfigMap" reflector="object-\"openshift-config-operator\"/\"kube-root-ca.crt\"" Dec 12 15:21:27 crc kubenswrapper[5123]: I1212 15:21:27.762671 5123 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console\"/\"console-config\"" Dec 12 15:21:27 crc kubenswrapper[5123]: I1212 15:21:27.764610 5123 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-machine-api/machine-api-operator-755bb95488-68259"] Dec 12 15:21:27 crc kubenswrapper[5123]: I1212 15:21:27.767858 5123 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-console\"/\"default-dockercfg-mdwwj\"" Dec 12 15:21:27 crc kubenswrapper[5123]: I1212 15:21:27.768118 5123 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console\"/\"service-ca\"" Dec 12 15:21:27 crc kubenswrapper[5123]: I1212 15:21:27.768732 5123 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console\"/\"oauth-serving-cert\"" Dec 12 15:21:27 crc kubenswrapper[5123]: I1212 15:21:27.768792 5123 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-console\"/\"console-oauth-config\"" Dec 12 15:21:27 crc kubenswrapper[5123]: I1212 15:21:27.769060 5123 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console\"/\"openshift-service-ca.crt\"" Dec 12 15:21:27 crc kubenswrapper[5123]: I1212 15:21:27.769403 5123 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-console\"/\"console-serving-cert\"" Dec 12 15:21:27 crc kubenswrapper[5123]: I1212 15:21:27.769862 5123 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-console\"/\"console-dockercfg-8dkm8\"" Dec 12 15:21:27 crc kubenswrapper[5123]: I1212 15:21:27.770267 5123 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console\"/\"kube-root-ca.crt\"" Dec 12 15:21:27 crc 
kubenswrapper[5123]: I1212 15:21:27.770788 5123 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-69b85846b6-7pgks" Dec 12 15:21:27 crc kubenswrapper[5123]: I1212 15:21:27.771454 5123 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager\"/\"openshift-global-ca\"" Dec 12 15:21:27 crc kubenswrapper[5123]: I1212 15:21:27.772106 5123 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-66458b6674-cqp44"] Dec 12 15:21:27 crc kubenswrapper[5123]: I1212 15:21:27.778547 5123 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-dns-operator/dns-operator-799b87ffcd-kvxss"] Dec 12 15:21:27 crc kubenswrapper[5123]: I1212 15:21:27.778689 5123 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-etcd-operator\"/\"etcd-operator-serving-cert\"" Dec 12 15:21:27 crc kubenswrapper[5123]: I1212 15:21:27.779099 5123 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-etcd-operator\"/\"openshift-service-ca.crt\"" Dec 12 15:21:27 crc kubenswrapper[5123]: I1212 15:21:27.779386 5123 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-etcd-operator\"/\"kube-root-ca.crt\"" Dec 12 15:21:27 crc kubenswrapper[5123]: I1212 15:21:27.779601 5123 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-etcd-operator\"/\"etcd-operator-config\"" Dec 12 15:21:27 crc kubenswrapper[5123]: I1212 15:21:27.779628 5123 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-api/machine-api-operator-755bb95488-68259" Dec 12 15:21:27 crc kubenswrapper[5123]: I1212 15:21:27.779797 5123 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-etcd-operator\"/\"etcd-service-ca-bundle\"" Dec 12 15:21:27 crc kubenswrapper[5123]: I1212 15:21:27.780003 5123 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-etcd-operator\"/\"etcd-operator-dockercfg-4vdnc\"" Dec 12 15:21:27 crc kubenswrapper[5123]: I1212 15:21:27.780246 5123 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-etcd-operator\"/\"etcd-ca-bundle\"" Dec 12 15:21:27 crc kubenswrapper[5123]: I1212 15:21:27.780504 5123 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-etcd-operator\"/\"etcd-client\"" Dec 12 15:21:27 crc kubenswrapper[5123]: I1212 15:21:27.781731 5123 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-dvgzb"] Dec 12 15:21:27 crc kubenswrapper[5123]: I1212 15:21:27.782196 5123 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console\"/\"trusted-ca-bundle\"" Dec 12 15:21:27 crc kubenswrapper[5123]: I1212 15:21:27.782466 5123 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-66458b6674-cqp44" Dec 12 15:21:27 crc kubenswrapper[5123]: I1212 15:21:27.786297 5123 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication\"/\"openshift-service-ca.crt\"" Dec 12 15:21:27 crc kubenswrapper[5123]: I1212 15:21:27.786575 5123 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication\"/\"v4-0-config-system-service-ca\"" Dec 12 15:21:27 crc kubenswrapper[5123]: I1212 15:21:27.786847 5123 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-user-template-provider-selection\"" Dec 12 15:21:27 crc kubenswrapper[5123]: I1212 15:21:27.788511 5123 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-console-operator/console-operator-67c89758df-vqqzf"] Dec 12 15:21:27 crc kubenswrapper[5123]: I1212 15:21:27.788671 5123 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-api\"/\"machine-api-operator-images\"" Dec 12 15:21:27 crc kubenswrapper[5123]: I1212 15:21:27.788939 5123 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns-operator/dns-operator-799b87ffcd-kvxss" Dec 12 15:21:27 crc kubenswrapper[5123]: I1212 15:21:27.789406 5123 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-user-template-error\"" Dec 12 15:21:27 crc kubenswrapper[5123]: I1212 15:21:27.789921 5123 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-api\"/\"machine-api-operator-dockercfg-6n5ln\"" Dec 12 15:21:27 crc kubenswrapper[5123]: I1212 15:21:27.790125 5123 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-api\"/\"kube-rbac-proxy\"" Dec 12 15:21:27 crc kubenswrapper[5123]: I1212 15:21:27.790342 5123 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-system-router-certs\"" Dec 12 15:21:27 crc kubenswrapper[5123]: I1212 15:21:27.790542 5123 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-dvgzb" Dec 12 15:21:27 crc kubenswrapper[5123]: I1212 15:21:27.790575 5123 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-system-serving-cert\"" Dec 12 15:21:27 crc kubenswrapper[5123]: I1212 15:21:27.790707 5123 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-api\"/\"openshift-service-ca.crt\"" Dec 12 15:21:27 crc kubenswrapper[5123]: I1212 15:21:27.793483 5123 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication\"/\"audit\"" Dec 12 15:21:27 crc kubenswrapper[5123]: I1212 15:21:27.793513 5123 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-user-template-login\"" Dec 12 15:21:27 crc kubenswrapper[5123]: I1212 15:21:27.795165 5123 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-dns-operator\"/\"openshift-service-ca.crt\"" Dec 12 15:21:27 crc kubenswrapper[5123]: I1212 15:21:27.796077 5123 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-dns-operator\"/\"kube-root-ca.crt\"" Dec 12 15:21:27 crc kubenswrapper[5123]: I1212 15:21:27.796181 5123 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"oauth-openshift-dockercfg-d2bf2\"" Dec 12 15:21:27 crc kubenswrapper[5123]: I1212 15:21:27.796330 5123 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication\"/\"v4-0-config-system-cliconfig\"" Dec 12 15:21:27 crc kubenswrapper[5123]: I1212 15:21:27.796463 5123 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-dns-operator\"/\"metrics-tls\"" Dec 12 15:21:27 crc kubenswrapper[5123]: I1212 15:21:27.796349 5123 reflector.go:430] "Caches 
populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-samples-operator\"/\"kube-root-ca.crt\"" Dec 12 15:21:27 crc kubenswrapper[5123]: I1212 15:21:27.796236 5123 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-system-session\"" Dec 12 15:21:27 crc kubenswrapper[5123]: I1212 15:21:27.796856 5123 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-user-idp-0-file-data\"" Dec 12 15:21:27 crc kubenswrapper[5123]: I1212 15:21:27.796940 5123 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication\"/\"kube-root-ca.crt\"" Dec 12 15:21:27 crc kubenswrapper[5123]: I1212 15:21:27.796946 5123 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-kr28r"] Dec 12 15:21:27 crc kubenswrapper[5123]: I1212 15:21:27.797153 5123 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console-operator/console-operator-67c89758df-vqqzf" Dec 12 15:21:27 crc kubenswrapper[5123]: I1212 15:21:27.797280 5123 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-api\"/\"kube-root-ca.crt\"" Dec 12 15:21:27 crc kubenswrapper[5123]: I1212 15:21:27.797533 5123 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-api\"/\"machine-api-operator-tls\"" Dec 12 15:21:27 crc kubenswrapper[5123]: I1212 15:21:27.797897 5123 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-samples-operator\"/\"openshift-service-ca.crt\"" Dec 12 15:21:27 crc kubenswrapper[5123]: I1212 15:21:27.798090 5123 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-cluster-samples-operator\"/\"cluster-samples-operator-dockercfg-jmhxf\"" Dec 12 15:21:27 crc kubenswrapper[5123]: I1212 15:21:27.800791 5123 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-cluster-samples-operator\"/\"samples-operator-tls\"" Dec 12 15:21:27 crc kubenswrapper[5123]: I1212 15:21:27.801177 5123 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-dns-operator\"/\"dns-operator-dockercfg-wbbsn\"" Dec 12 15:21:27 crc kubenswrapper[5123]: I1212 15:21:27.807424 5123 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-cluster-machine-approver/machine-approver-54c688565-4b7jt"] Dec 12 15:21:27 crc kubenswrapper[5123]: I1212 15:21:27.808265 5123 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console-operator\"/\"kube-root-ca.crt\"" Dec 12 15:21:27 crc kubenswrapper[5123]: I1212 15:21:27.808635 5123 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console-operator\"/\"openshift-service-ca.crt\"" Dec 12 15:21:27 crc kubenswrapper[5123]: I1212 15:21:27.808750 5123 
reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-console-operator\"/\"serving-cert\"" Dec 12 15:21:27 crc kubenswrapper[5123]: I1212 15:21:27.809323 5123 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console-operator\"/\"console-operator-config\"" Dec 12 15:21:27 crc kubenswrapper[5123]: I1212 15:21:27.809714 5123 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-kr28r" Dec 12 15:21:27 crc kubenswrapper[5123]: I1212 15:21:27.814066 5123 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver-operator\"/\"openshift-service-ca.crt\"" Dec 12 15:21:27 crc kubenswrapper[5123]: I1212 15:21:27.814530 5123 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-apiserver-operator\"/\"openshift-apiserver-operator-dockercfg-6c46w\"" Dec 12 15:21:27 crc kubenswrapper[5123]: I1212 15:21:27.815911 5123 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver-operator\"/\"openshift-apiserver-operator-config\"" Dec 12 15:21:27 crc kubenswrapper[5123]: I1212 15:21:27.816149 5123 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-console-operator\"/\"console-operator-dockercfg-kl6m8\"" Dec 12 15:21:27 crc kubenswrapper[5123]: I1212 15:21:27.816435 5123 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-apiserver-operator\"/\"openshift-apiserver-operator-serving-cert\"" Dec 12 15:21:27 crc kubenswrapper[5123]: I1212 15:21:27.816638 5123 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver-operator\"/\"kube-root-ca.crt\"" Dec 12 15:21:27 crc kubenswrapper[5123]: I1212 15:21:27.818523 5123 kubelet.go:2537] "SyncLoop ADD" source="api" 
pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-qpz79"] Dec 12 15:21:27 crc kubenswrapper[5123]: I1212 15:21:27.821428 5123 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-54c688565-4b7jt" Dec 12 15:21:27 crc kubenswrapper[5123]: I1212 15:21:27.827356 5123 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9368bb85-0c25-4d7d-884c-7ebea4cf3336-config\") pod \"machine-api-operator-755bb95488-68259\" (UID: \"9368bb85-0c25-4d7d-884c-7ebea4cf3336\") " pod="openshift-machine-api/machine-api-operator-755bb95488-68259" Dec 12 15:21:27 crc kubenswrapper[5123]: I1212 15:21:27.827430 5123 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6htnd\" (UniqueName: \"kubernetes.io/projected/c4465de2-5e85-451d-a998-dcff71c6d37c-kube-api-access-6htnd\") pod \"oauth-openshift-66458b6674-cqp44\" (UID: \"c4465de2-5e85-451d-a998-dcff71c6d37c\") " pod="openshift-authentication/oauth-openshift-66458b6674-cqp44" Dec 12 15:21:27 crc kubenswrapper[5123]: I1212 15:21:27.827495 5123 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/c4465de2-5e85-451d-a998-dcff71c6d37c-v4-0-config-system-cliconfig\") pod \"oauth-openshift-66458b6674-cqp44\" (UID: \"c4465de2-5e85-451d-a998-dcff71c6d37c\") " pod="openshift-authentication/oauth-openshift-66458b6674-cqp44" Dec 12 15:21:27 crc kubenswrapper[5123]: I1212 15:21:27.827545 5123 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bf62556f-373c-41a0-96d4-8f431d629029-config\") pod \"controller-manager-65b6cccf98-t4m4d\" (UID: \"bf62556f-373c-41a0-96d4-8f431d629029\") " 
pod="openshift-controller-manager/controller-manager-65b6cccf98-t4m4d" Dec 12 15:21:27 crc kubenswrapper[5123]: I1212 15:21:27.827597 5123 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/c4465de2-5e85-451d-a998-dcff71c6d37c-v4-0-config-system-service-ca\") pod \"oauth-openshift-66458b6674-cqp44\" (UID: \"c4465de2-5e85-451d-a998-dcff71c6d37c\") " pod="openshift-authentication/oauth-openshift-66458b6674-cqp44" Dec 12 15:21:27 crc kubenswrapper[5123]: I1212 15:21:27.827645 5123 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/c4465de2-5e85-451d-a998-dcff71c6d37c-v4-0-config-user-template-login\") pod \"oauth-openshift-66458b6674-cqp44\" (UID: \"c4465de2-5e85-451d-a998-dcff71c6d37c\") " pod="openshift-authentication/oauth-openshift-66458b6674-cqp44" Dec 12 15:21:27 crc kubenswrapper[5123]: I1212 15:21:27.827701 5123 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-79nnl\" (UniqueName: \"kubernetes.io/projected/9368bb85-0c25-4d7d-884c-7ebea4cf3336-kube-api-access-79nnl\") pod \"machine-api-operator-755bb95488-68259\" (UID: \"9368bb85-0c25-4d7d-884c-7ebea4cf3336\") " pod="openshift-machine-api/machine-api-operator-755bb95488-68259" Dec 12 15:21:27 crc kubenswrapper[5123]: I1212 15:21:27.829324 5123 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/9368bb85-0c25-4d7d-884c-7ebea4cf3336-machine-api-operator-tls\") pod \"machine-api-operator-755bb95488-68259\" (UID: \"9368bb85-0c25-4d7d-884c-7ebea4cf3336\") " pod="openshift-machine-api/machine-api-operator-755bb95488-68259" Dec 12 15:21:27 crc kubenswrapper[5123]: I1212 15:21:27.829370 5123 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w4w7x\" (UniqueName: \"kubernetes.io/projected/1c109e0c-2708-45cf-8c8e-0489b41c9830-kube-api-access-w4w7x\") pod \"cluster-samples-operator-6b564684c8-dvgzb\" (UID: \"1c109e0c-2708-45cf-8c8e-0489b41c9830\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-dvgzb" Dec 12 15:21:27 crc kubenswrapper[5123]: I1212 15:21:27.829401 5123 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2dm2w\" (UniqueName: \"kubernetes.io/projected/09107a60-87da-4e17-9cc0-6dce06396ab6-kube-api-access-2dm2w\") pod \"downloads-747b44746d-xhd9t\" (UID: \"09107a60-87da-4e17-9cc0-6dce06396ab6\") " pod="openshift-console/downloads-747b44746d-xhd9t" Dec 12 15:21:27 crc kubenswrapper[5123]: I1212 15:21:27.829637 5123 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/c4465de2-5e85-451d-a998-dcff71c6d37c-audit-policies\") pod \"oauth-openshift-66458b6674-cqp44\" (UID: \"c4465de2-5e85-451d-a998-dcff71c6d37c\") " pod="openshift-authentication/oauth-openshift-66458b6674-cqp44" Dec 12 15:21:27 crc kubenswrapper[5123]: I1212 15:21:27.829679 5123 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c4465de2-5e85-451d-a998-dcff71c6d37c-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-66458b6674-cqp44\" (UID: \"c4465de2-5e85-451d-a998-dcff71c6d37c\") " pod="openshift-authentication/oauth-openshift-66458b6674-cqp44" Dec 12 15:21:27 crc kubenswrapper[5123]: I1212 15:21:27.829847 5123 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c1681a2f-153f-44c0-901e-e85b401d30ee-config-volume\") pod 
\"collect-profiles-29425875-lxsft\" (UID: \"c1681a2f-153f-44c0-901e-e85b401d30ee\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29425875-lxsft" Dec 12 15:21:27 crc kubenswrapper[5123]: I1212 15:21:27.829911 5123 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/c1681a2f-153f-44c0-901e-e85b401d30ee-secret-volume\") pod \"collect-profiles-29425875-lxsft\" (UID: \"c1681a2f-153f-44c0-901e-e85b401d30ee\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29425875-lxsft" Dec 12 15:21:27 crc kubenswrapper[5123]: I1212 15:21:27.829936 5123 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/1c109e0c-2708-45cf-8c8e-0489b41c9830-samples-operator-tls\") pod \"cluster-samples-operator-6b564684c8-dvgzb\" (UID: \"1c109e0c-2708-45cf-8c8e-0489b41c9830\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-dvgzb" Dec 12 15:21:27 crc kubenswrapper[5123]: I1212 15:21:27.830371 5123 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rrnwb\" (UniqueName: \"kubernetes.io/projected/c1681a2f-153f-44c0-901e-e85b401d30ee-kube-api-access-rrnwb\") pod \"collect-profiles-29425875-lxsft\" (UID: \"c1681a2f-153f-44c0-901e-e85b401d30ee\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29425875-lxsft" Dec 12 15:21:27 crc kubenswrapper[5123]: I1212 15:21:27.830425 5123 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/c4465de2-5e85-451d-a998-dcff71c6d37c-audit-dir\") pod \"oauth-openshift-66458b6674-cqp44\" (UID: \"c4465de2-5e85-451d-a998-dcff71c6d37c\") " pod="openshift-authentication/oauth-openshift-66458b6674-cqp44" Dec 12 15:21:27 crc kubenswrapper[5123]: I1212 
15:21:27.830455 5123 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/c4465de2-5e85-451d-a998-dcff71c6d37c-v4-0-config-system-session\") pod \"oauth-openshift-66458b6674-cqp44\" (UID: \"c4465de2-5e85-451d-a998-dcff71c6d37c\") " pod="openshift-authentication/oauth-openshift-66458b6674-cqp44" Dec 12 15:21:27 crc kubenswrapper[5123]: I1212 15:21:27.830501 5123 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/c4465de2-5e85-451d-a998-dcff71c6d37c-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-66458b6674-cqp44\" (UID: \"c4465de2-5e85-451d-a998-dcff71c6d37c\") " pod="openshift-authentication/oauth-openshift-66458b6674-cqp44" Dec 12 15:21:27 crc kubenswrapper[5123]: I1212 15:21:27.830547 5123 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/c4465de2-5e85-451d-a998-dcff71c6d37c-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-66458b6674-cqp44\" (UID: \"c4465de2-5e85-451d-a998-dcff71c6d37c\") " pod="openshift-authentication/oauth-openshift-66458b6674-cqp44" Dec 12 15:21:27 crc kubenswrapper[5123]: I1212 15:21:27.830590 5123 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/9368bb85-0c25-4d7d-884c-7ebea4cf3336-images\") pod \"machine-api-operator-755bb95488-68259\" (UID: \"9368bb85-0c25-4d7d-884c-7ebea4cf3336\") " pod="openshift-machine-api/machine-api-operator-755bb95488-68259" Dec 12 15:21:27 crc kubenswrapper[5123]: I1212 15:21:27.830638 5123 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" 
(UniqueName: \"kubernetes.io/secret/c4465de2-5e85-451d-a998-dcff71c6d37c-v4-0-config-system-serving-cert\") pod \"oauth-openshift-66458b6674-cqp44\" (UID: \"c4465de2-5e85-451d-a998-dcff71c6d37c\") " pod="openshift-authentication/oauth-openshift-66458b6674-cqp44" Dec 12 15:21:27 crc kubenswrapper[5123]: I1212 15:21:27.830680 5123 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/c4465de2-5e85-451d-a998-dcff71c6d37c-v4-0-config-user-template-error\") pod \"oauth-openshift-66458b6674-cqp44\" (UID: \"c4465de2-5e85-451d-a998-dcff71c6d37c\") " pod="openshift-authentication/oauth-openshift-66458b6674-cqp44" Dec 12 15:21:27 crc kubenswrapper[5123]: I1212 15:21:27.830739 5123 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/c4465de2-5e85-451d-a998-dcff71c6d37c-v4-0-config-system-router-certs\") pod \"oauth-openshift-66458b6674-cqp44\" (UID: \"c4465de2-5e85-451d-a998-dcff71c6d37c\") " pod="openshift-authentication/oauth-openshift-66458b6674-cqp44" Dec 12 15:21:27 crc kubenswrapper[5123]: I1212 15:21:27.830771 5123 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/c4465de2-5e85-451d-a998-dcff71c6d37c-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-66458b6674-cqp44\" (UID: \"c4465de2-5e85-451d-a998-dcff71c6d37c\") " pod="openshift-authentication/oauth-openshift-66458b6674-cqp44" Dec 12 15:21:27 crc kubenswrapper[5123]: I1212 15:21:27.833109 5123 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication\"/\"v4-0-config-system-trusted-ca-bundle\"" Dec 12 15:21:27 crc kubenswrapper[5123]: I1212 15:21:27.833395 5123 reflector.go:430] "Caches populated" type="*v1.ConfigMap" 
reflector="object-\"openshift-cluster-machine-approver\"/\"machine-approver-config\"" Dec 12 15:21:27 crc kubenswrapper[5123]: I1212 15:21:27.833700 5123 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-machine-approver\"/\"openshift-service-ca.crt\"" Dec 12 15:21:27 crc kubenswrapper[5123]: I1212 15:21:27.834033 5123 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-machine-approver\"/\"kube-rbac-proxy\"" Dec 12 15:21:27 crc kubenswrapper[5123]: I1212 15:21:27.834263 5123 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-cluster-machine-approver\"/\"machine-approver-sa-dockercfg-wzhvk\"" Dec 12 15:21:27 crc kubenswrapper[5123]: I1212 15:21:27.868246 5123 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-machine-approver\"/\"kube-root-ca.crt\"" Dec 12 15:21:27 crc kubenswrapper[5123]: I1212 15:21:27.868961 5123 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console-operator\"/\"trusted-ca\"" Dec 12 15:21:27 crc kubenswrapper[5123]: I1212 15:21:27.868967 5123 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-cluster-machine-approver\"/\"machine-approver-tls\"" Dec 12 15:21:27 crc kubenswrapper[5123]: I1212 15:21:27.871670 5123 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-authentication-operator/authentication-operator-7f5c659b84-nd2rm"] Dec 12 15:21:27 crc kubenswrapper[5123]: I1212 15:21:27.871873 5123 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-qpz79" Dec 12 15:21:27 crc kubenswrapper[5123]: I1212 15:21:27.874745 5123 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-system-ocp-branding-template\"" Dec 12 15:21:27 crc kubenswrapper[5123]: I1212 15:21:27.875382 5123 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager-operator\"/\"openshift-service-ca.crt\"" Dec 12 15:21:27 crc kubenswrapper[5123]: I1212 15:21:27.875621 5123 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-66587d64c8-ts2mt"] Dec 12 15:21:27 crc kubenswrapper[5123]: I1212 15:21:27.875943 5123 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7f5c659b84-nd2rm" Dec 12 15:21:27 crc kubenswrapper[5123]: I1212 15:21:27.879834 5123 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-dtw8v"] Dec 12 15:21:27 crc kubenswrapper[5123]: I1212 15:21:27.880401 5123 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-66587d64c8-ts2mt" Dec 12 15:21:27 crc kubenswrapper[5123]: I1212 15:21:27.880741 5123 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager-operator\"/\"openshift-controller-manager-operator-config\"" Dec 12 15:21:27 crc kubenswrapper[5123]: I1212 15:21:27.882594 5123 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication-operator\"/\"authentication-operator-config\"" Dec 12 15:21:27 crc kubenswrapper[5123]: I1212 15:21:27.882845 5123 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication-operator\"/\"serving-cert\"" Dec 12 15:21:27 crc kubenswrapper[5123]: I1212 15:21:27.882966 5123 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication-operator\"/\"service-ca-bundle\"" Dec 12 15:21:27 crc kubenswrapper[5123]: I1212 15:21:27.883024 5123 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-controller-manager-operator\"/\"openshift-controller-manager-operator-dockercfg-jcmfj\"" Dec 12 15:21:27 crc kubenswrapper[5123]: I1212 15:21:27.883045 5123 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-controller-manager-operator\"/\"openshift-controller-manager-operator-serving-cert\"" Dec 12 15:21:27 crc kubenswrapper[5123]: I1212 15:21:27.883538 5123 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication-operator\"/\"openshift-service-ca.crt\"" Dec 12 15:21:27 crc kubenswrapper[5123]: I1212 15:21:27.883935 5123 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-ingress/router-default-68cf44c8b8-cnq9c"] Dec 12 15:21:27 crc kubenswrapper[5123]: I1212 15:21:27.884148 5123 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-dtw8v" Dec 12 15:21:27 crc kubenswrapper[5123]: I1212 15:21:27.886528 5123 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager-operator\"/\"kube-root-ca.crt\"" Dec 12 15:21:27 crc kubenswrapper[5123]: I1212 15:21:27.886928 5123 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication-operator\"/\"kube-root-ca.crt\"" Dec 12 15:21:27 crc kubenswrapper[5123]: I1212 15:21:27.886993 5123 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication-operator\"/\"authentication-operator-dockercfg-6tbpn\"" Dec 12 15:21:27 crc kubenswrapper[5123]: I1212 15:21:27.887079 5123 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"installation-pull-secrets\"" Dec 12 15:21:27 crc kubenswrapper[5123]: I1212 15:21:27.887479 5123 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"image-registry-tls\"" Dec 12 15:21:27 crc kubenswrapper[5123]: I1212 15:21:27.887593 5123 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-apiserver/apiserver-9ddfb9f55-bmckw"] Dec 12 15:21:27 crc kubenswrapper[5123]: I1212 15:21:27.887662 5123 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"registry-dockercfg-6w67b\"" Dec 12 15:21:27 crc kubenswrapper[5123]: I1212 15:21:27.887938 5123 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress/router-default-68cf44c8b8-cnq9c" Dec 12 15:21:27 crc kubenswrapper[5123]: I1212 15:21:27.893160 5123 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-776cdc94d6-dc699"] Dec 12 15:21:27 crc kubenswrapper[5123]: I1212 15:21:27.893434 5123 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-9ddfb9f55-bmckw" Dec 12 15:21:27 crc kubenswrapper[5123]: I1212 15:21:27.894640 5123 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-image-registry\"/\"trusted-ca\"" Dec 12 15:21:27 crc kubenswrapper[5123]: I1212 15:21:27.896449 5123 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-86c45576b9-76wqd"] Dec 12 15:21:27 crc kubenswrapper[5123]: I1212 15:21:27.896557 5123 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication-operator\"/\"trusted-ca-bundle\"" Dec 12 15:21:27 crc kubenswrapper[5123]: I1212 15:21:27.896615 5123 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-dc699" Dec 12 15:21:27 crc kubenswrapper[5123]: I1212 15:21:27.898744 5123 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator/migrator-866fcbc849-t68lp"] Dec 12 15:21:27 crc kubenswrapper[5123]: I1212 15:21:27.898817 5123 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-76wqd" Dec 12 15:21:27 crc kubenswrapper[5123]: I1212 15:21:27.902843 5123 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-controller-manager-operator\"/\"kube-root-ca.crt\"" Dec 12 15:21:27 crc kubenswrapper[5123]: I1212 15:21:27.902857 5123 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-oauth-apiserver/apiserver-8596bd845d-qvmj6"] Dec 12 15:21:27 crc kubenswrapper[5123]: I1212 15:21:27.903172 5123 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-866fcbc849-t68lp" Dec 12 15:21:27 crc kubenswrapper[5123]: I1212 15:21:27.907898 5123 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-lx5l5"] Dec 12 15:21:27 crc kubenswrapper[5123]: I1212 15:21:27.982753 5123 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-4hvhp"] Dec 12 15:21:27 crc kubenswrapper[5123]: I1212 15:21:27.983563 5123 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/7e490e0b-11da-4093-bd3c-a328ebd6e304-tmp-dir\") pod \"dns-operator-799b87ffcd-kvxss\" (UID: \"7e490e0b-11da-4093-bd3c-a328ebd6e304\") " pod="openshift-dns-operator/dns-operator-799b87ffcd-kvxss" Dec 12 15:21:27 crc kubenswrapper[5123]: I1212 15:21:27.983656 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/c4465de2-5e85-451d-a998-dcff71c6d37c-v4-0-config-system-cliconfig\") pod \"oauth-openshift-66458b6674-cqp44\" (UID: \"c4465de2-5e85-451d-a998-dcff71c6d37c\") " pod="openshift-authentication/oauth-openshift-66458b6674-cqp44" Dec 12 15:21:27 crc 
kubenswrapper[5123]: I1212 15:21:27.983686 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bf62556f-373c-41a0-96d4-8f431d629029-config\") pod \"controller-manager-65b6cccf98-t4m4d\" (UID: \"bf62556f-373c-41a0-96d4-8f431d629029\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-t4m4d" Dec 12 15:21:27 crc kubenswrapper[5123]: I1212 15:21:27.983706 5123 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/19fca7bf-f8d6-4e7c-b54d-e98292eb7efd-config\") pod \"etcd-operator-69b85846b6-7pgks\" (UID: \"19fca7bf-f8d6-4e7c-b54d-e98292eb7efd\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-7pgks" Dec 12 15:21:27 crc kubenswrapper[5123]: I1212 15:21:27.983738 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/c4465de2-5e85-451d-a998-dcff71c6d37c-v4-0-config-system-service-ca\") pod \"oauth-openshift-66458b6674-cqp44\" (UID: \"c4465de2-5e85-451d-a998-dcff71c6d37c\") " pod="openshift-authentication/oauth-openshift-66458b6674-cqp44" Dec 12 15:21:27 crc kubenswrapper[5123]: I1212 15:21:27.983794 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/c4465de2-5e85-451d-a998-dcff71c6d37c-v4-0-config-user-template-login\") pod \"oauth-openshift-66458b6674-cqp44\" (UID: \"c4465de2-5e85-451d-a998-dcff71c6d37c\") " pod="openshift-authentication/oauth-openshift-66458b6674-cqp44" Dec 12 15:21:27 crc kubenswrapper[5123]: I1212 15:21:27.984301 5123 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-oauth-apiserver/apiserver-8596bd845d-qvmj6"
Dec 12 15:21:27 crc kubenswrapper[5123]: I1212 15:21:27.983843 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-79nnl\" (UniqueName: \"kubernetes.io/projected/9368bb85-0c25-4d7d-884c-7ebea4cf3336-kube-api-access-79nnl\") pod \"machine-api-operator-755bb95488-68259\" (UID: \"9368bb85-0c25-4d7d-884c-7ebea4cf3336\") " pod="openshift-machine-api/machine-api-operator-755bb95488-68259"
Dec 12 15:21:27 crc kubenswrapper[5123]: I1212 15:21:27.984471 5123 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-48s86\" (UniqueName: \"kubernetes.io/projected/2c1e4fb9-bde9-46df-8ac0-c0b457ca767f-kube-api-access-48s86\") pod \"openshift-config-operator-5777786469-9j9pt\" (UID: \"2c1e4fb9-bde9-46df-8ac0-c0b457ca767f\") " pod="openshift-config-operator/openshift-config-operator-5777786469-9j9pt"
Dec 12 15:21:27 crc kubenswrapper[5123]: I1212 15:21:27.984493 5123 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/19fca7bf-f8d6-4e7c-b54d-e98292eb7efd-serving-cert\") pod \"etcd-operator-69b85846b6-7pgks\" (UID: \"19fca7bf-f8d6-4e7c-b54d-e98292eb7efd\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-7pgks"
Dec 12 15:21:27 crc kubenswrapper[5123]: I1212 15:21:27.984510 5123 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7ff811e4-3864-456b-8e00-b9e2d1c49ed8-trusted-ca-bundle\") pod \"console-64d44f6ddf-96rdx\" (UID: \"7ff811e4-3864-456b-8e00-b9e2d1c49ed8\") " pod="openshift-console/console-64d44f6ddf-96rdx"
Dec 12 15:21:27 crc kubenswrapper[5123]: I1212 15:21:27.984539 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/9368bb85-0c25-4d7d-884c-7ebea4cf3336-machine-api-operator-tls\") pod \"machine-api-operator-755bb95488-68259\" (UID: \"9368bb85-0c25-4d7d-884c-7ebea4cf3336\") " pod="openshift-machine-api/machine-api-operator-755bb95488-68259"
Dec 12 15:21:27 crc kubenswrapper[5123]: I1212 15:21:27.984556 5123 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/7ff811e4-3864-456b-8e00-b9e2d1c49ed8-service-ca\") pod \"console-64d44f6ddf-96rdx\" (UID: \"7ff811e4-3864-456b-8e00-b9e2d1c49ed8\") " pod="openshift-console/console-64d44f6ddf-96rdx"
Dec 12 15:21:27 crc kubenswrapper[5123]: I1212 15:21:27.986819 5123 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-ingress-operator/ingress-operator-6b9cb4dbcf-bbdv4"]
Dec 12 15:21:27 crc kubenswrapper[5123]: I1212 15:21:27.987547 5123 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/c4465de2-5e85-451d-a998-dcff71c6d37c-v4-0-config-system-cliconfig\") pod \"oauth-openshift-66458b6674-cqp44\" (UID: \"c4465de2-5e85-451d-a998-dcff71c6d37c\") " pod="openshift-authentication/oauth-openshift-66458b6674-cqp44"
Dec 12 15:21:27 crc kubenswrapper[5123]: I1212 15:21:27.987692 5123 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-lx5l5"
Dec 12 15:21:27 crc kubenswrapper[5123]: I1212 15:21:27.987917 5123 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-4hvhp"
Dec 12 15:21:27 crc kubenswrapper[5123]: I1212 15:21:27.988253 5123 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-controller-manager-operator\"/\"kube-controller-manager-operator-config\""
Dec 12 15:21:27 crc kubenswrapper[5123]: I1212 15:21:27.988705 5123 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-controller-manager-operator\"/\"kube-controller-manager-operator-serving-cert\""
Dec 12 15:21:27 crc kubenswrapper[5123]: I1212 15:21:27.984596 5123 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/19fca7bf-f8d6-4e7c-b54d-e98292eb7efd-tmp-dir\") pod \"etcd-operator-69b85846b6-7pgks\" (UID: \"19fca7bf-f8d6-4e7c-b54d-e98292eb7efd\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-7pgks"
Dec 12 15:21:27 crc kubenswrapper[5123]: I1212 15:21:27.997902 5123 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/7ff811e4-3864-456b-8e00-b9e2d1c49ed8-console-config\") pod \"console-64d44f6ddf-96rdx\" (UID: \"7ff811e4-3864-456b-8e00-b9e2d1c49ed8\") " pod="openshift-console/console-64d44f6ddf-96rdx"
Dec 12 15:21:27 crc kubenswrapper[5123]: I1212 15:21:27.997938 5123 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/19fca7bf-f8d6-4e7c-b54d-e98292eb7efd-etcd-client\") pod \"etcd-operator-69b85846b6-7pgks\" (UID: \"19fca7bf-f8d6-4e7c-b54d-e98292eb7efd\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-7pgks"
Dec 12 15:21:27 crc kubenswrapper[5123]: I1212 15:21:27.997974 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-w4w7x\" (UniqueName: \"kubernetes.io/projected/1c109e0c-2708-45cf-8c8e-0489b41c9830-kube-api-access-w4w7x\") pod \"cluster-samples-operator-6b564684c8-dvgzb\" (UID: \"1c109e0c-2708-45cf-8c8e-0489b41c9830\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-dvgzb"
Dec 12 15:21:27 crc kubenswrapper[5123]: I1212 15:21:27.998000 5123 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/19fca7bf-f8d6-4e7c-b54d-e98292eb7efd-etcd-ca\") pod \"etcd-operator-69b85846b6-7pgks\" (UID: \"19fca7bf-f8d6-4e7c-b54d-e98292eb7efd\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-7pgks"
Dec 12 15:21:27 crc kubenswrapper[5123]: I1212 15:21:27.998028 5123 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9da0a55f-2526-45cc-b820-1b31ce63745c-config\") pod \"console-operator-67c89758df-vqqzf\" (UID: \"9da0a55f-2526-45cc-b820-1b31ce63745c\") " pod="openshift-console-operator/console-operator-67c89758df-vqqzf"
Dec 12 15:21:27 crc kubenswrapper[5123]: I1212 15:21:27.998058 5123 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5357a0b5-86ce-437b-b973-0bc2be3f85fd-config\") pod \"openshift-apiserver-operator-846cbfc458-kr28r\" (UID: \"5357a0b5-86ce-437b-b973-0bc2be3f85fd\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-kr28r"
Dec 12 15:21:27 crc kubenswrapper[5123]: I1212 15:21:27.998115 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-2dm2w\" (UniqueName: \"kubernetes.io/projected/09107a60-87da-4e17-9cc0-6dce06396ab6-kube-api-access-2dm2w\") pod \"downloads-747b44746d-xhd9t\" (UID: \"09107a60-87da-4e17-9cc0-6dce06396ab6\") " pod="openshift-console/downloads-747b44746d-xhd9t"
Dec 12 15:21:27 crc kubenswrapper[5123]: I1212 15:21:27.998170 5123 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/bf62556f-373c-41a0-96d4-8f431d629029-tmp\") pod \"controller-manager-65b6cccf98-t4m4d\" (UID: \"bf62556f-373c-41a0-96d4-8f431d629029\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-t4m4d"
Dec 12 15:21:27 crc kubenswrapper[5123]: I1212 15:21:27.998320 5123 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hgj9w\" (UniqueName: \"kubernetes.io/projected/9da0a55f-2526-45cc-b820-1b31ce63745c-kube-api-access-hgj9w\") pod \"console-operator-67c89758df-vqqzf\" (UID: \"9da0a55f-2526-45cc-b820-1b31ce63745c\") " pod="openshift-console-operator/console-operator-67c89758df-vqqzf"
Dec 12 15:21:27 crc kubenswrapper[5123]: I1212 15:21:27.998378 5123 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2c1e4fb9-bde9-46df-8ac0-c0b457ca767f-serving-cert\") pod \"openshift-config-operator-5777786469-9j9pt\" (UID: \"2c1e4fb9-bde9-46df-8ac0-c0b457ca767f\") " pod="openshift-config-operator/openshift-config-operator-5777786469-9j9pt"
Dec 12 15:21:27 crc kubenswrapper[5123]: I1212 15:21:27.998446 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/c4465de2-5e85-451d-a998-dcff71c6d37c-audit-policies\") pod \"oauth-openshift-66458b6674-cqp44\" (UID: \"c4465de2-5e85-451d-a998-dcff71c6d37c\") " pod="openshift-authentication/oauth-openshift-66458b6674-cqp44"
Dec 12 15:21:27 crc kubenswrapper[5123]: I1212 15:21:27.998470 5123 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-controller-manager-operator\"/\"kube-controller-manager-operator-dockercfg-tnfx9\""
Dec 12 15:21:27 crc kubenswrapper[5123]: I1212 15:21:27.998484 5123 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9rql7\" (UniqueName: \"kubernetes.io/projected/19fca7bf-f8d6-4e7c-b54d-e98292eb7efd-kube-api-access-9rql7\") pod \"etcd-operator-69b85846b6-7pgks\" (UID: \"19fca7bf-f8d6-4e7c-b54d-e98292eb7efd\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-7pgks"
Dec 12 15:21:27 crc kubenswrapper[5123]: I1212 15:21:27.998953 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c4465de2-5e85-451d-a998-dcff71c6d37c-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-66458b6674-cqp44\" (UID: \"c4465de2-5e85-451d-a998-dcff71c6d37c\") " pod="openshift-authentication/oauth-openshift-66458b6674-cqp44"
Dec 12 15:21:27 crc kubenswrapper[5123]: I1212 15:21:27.999002 5123 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/2c1e4fb9-bde9-46df-8ac0-c0b457ca767f-available-featuregates\") pod \"openshift-config-operator-5777786469-9j9pt\" (UID: \"2c1e4fb9-bde9-46df-8ac0-c0b457ca767f\") " pod="openshift-config-operator/openshift-config-operator-5777786469-9j9pt"
Dec 12 15:21:27 crc kubenswrapper[5123]: I1212 15:21:27.999057 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c1681a2f-153f-44c0-901e-e85b401d30ee-config-volume\") pod \"collect-profiles-29425875-lxsft\" (UID: \"c1681a2f-153f-44c0-901e-e85b401d30ee\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29425875-lxsft"
Dec 12 15:21:27 crc kubenswrapper[5123]: I1212 15:21:27.999260 5123 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/7ff811e4-3864-456b-8e00-b9e2d1c49ed8-console-serving-cert\") pod \"console-64d44f6ddf-96rdx\" (UID: \"7ff811e4-3864-456b-8e00-b9e2d1c49ed8\") " pod="openshift-console/console-64d44f6ddf-96rdx"
Dec 12 15:21:27 crc kubenswrapper[5123]: I1212 15:21:27.999325 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/c1681a2f-153f-44c0-901e-e85b401d30ee-secret-volume\") pod \"collect-profiles-29425875-lxsft\" (UID: \"c1681a2f-153f-44c0-901e-e85b401d30ee\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29425875-lxsft"
Dec 12 15:21:28 crc kubenswrapper[5123]: I1212 15:21:27.999605 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/1c109e0c-2708-45cf-8c8e-0489b41c9830-samples-operator-tls\") pod \"cluster-samples-operator-6b564684c8-dvgzb\" (UID: \"1c109e0c-2708-45cf-8c8e-0489b41c9830\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-dvgzb"
Dec 12 15:21:28 crc kubenswrapper[5123]: I1212 15:21:27.999645 5123 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/7e490e0b-11da-4093-bd3c-a328ebd6e304-metrics-tls\") pod \"dns-operator-799b87ffcd-kvxss\" (UID: \"7e490e0b-11da-4093-bd3c-a328ebd6e304\") " pod="openshift-dns-operator/dns-operator-799b87ffcd-kvxss"
Dec 12 15:21:28 crc kubenswrapper[5123]: I1212 15:21:27.999677 5123 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4pxkl\" (UniqueName: \"kubernetes.io/projected/7e490e0b-11da-4093-bd3c-a328ebd6e304-kube-api-access-4pxkl\") pod \"dns-operator-799b87ffcd-kvxss\" (UID: \"7e490e0b-11da-4093-bd3c-a328ebd6e304\") " pod="openshift-dns-operator/dns-operator-799b87ffcd-kvxss"
Dec 12 15:21:28 crc kubenswrapper[5123]: I1212 15:21:27.999838 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-rrnwb\" (UniqueName: \"kubernetes.io/projected/c1681a2f-153f-44c0-901e-e85b401d30ee-kube-api-access-rrnwb\") pod \"collect-profiles-29425875-lxsft\" (UID: \"c1681a2f-153f-44c0-901e-e85b401d30ee\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29425875-lxsft"
Dec 12 15:21:28 crc kubenswrapper[5123]: I1212 15:21:27.999879 5123 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/bf62556f-373c-41a0-96d4-8f431d629029-client-ca\") pod \"controller-manager-65b6cccf98-t4m4d\" (UID: \"bf62556f-373c-41a0-96d4-8f431d629029\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-t4m4d"
Dec 12 15:21:28 crc kubenswrapper[5123]: I1212 15:21:27.999910 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/c4465de2-5e85-451d-a998-dcff71c6d37c-audit-dir\") pod \"oauth-openshift-66458b6674-cqp44\" (UID: \"c4465de2-5e85-451d-a998-dcff71c6d37c\") " pod="openshift-authentication/oauth-openshift-66458b6674-cqp44"
Dec 12 15:21:28 crc kubenswrapper[5123]: I1212 15:21:27.999936 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/c4465de2-5e85-451d-a998-dcff71c6d37c-v4-0-config-system-session\") pod \"oauth-openshift-66458b6674-cqp44\" (UID: \"c4465de2-5e85-451d-a998-dcff71c6d37c\") " pod="openshift-authentication/oauth-openshift-66458b6674-cqp44"
Dec 12 15:21:28 crc kubenswrapper[5123]: I1212 15:21:27.999962 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/c4465de2-5e85-451d-a998-dcff71c6d37c-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-66458b6674-cqp44\" (UID: \"c4465de2-5e85-451d-a998-dcff71c6d37c\") " pod="openshift-authentication/oauth-openshift-66458b6674-cqp44"
Dec 12 15:21:28 crc kubenswrapper[5123]: I1212 15:21:27.999989 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/c4465de2-5e85-451d-a998-dcff71c6d37c-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-66458b6674-cqp44\" (UID: \"c4465de2-5e85-451d-a998-dcff71c6d37c\") " pod="openshift-authentication/oauth-openshift-66458b6674-cqp44"
Dec 12 15:21:28 crc kubenswrapper[5123]: I1212 15:21:28.000147 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/9368bb85-0c25-4d7d-884c-7ebea4cf3336-images\") pod \"machine-api-operator-755bb95488-68259\" (UID: \"9368bb85-0c25-4d7d-884c-7ebea4cf3336\") " pod="openshift-machine-api/machine-api-operator-755bb95488-68259"
Dec 12 15:21:28 crc kubenswrapper[5123]: I1212 15:21:28.000182 5123 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/7ff811e4-3864-456b-8e00-b9e2d1c49ed8-oauth-serving-cert\") pod \"console-64d44f6ddf-96rdx\" (UID: \"7ff811e4-3864-456b-8e00-b9e2d1c49ed8\") " pod="openshift-console/console-64d44f6ddf-96rdx"
Dec 12 15:21:28 crc kubenswrapper[5123]: I1212 15:21:28.000232 5123 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lj7bj\" (UniqueName: \"kubernetes.io/projected/7ff811e4-3864-456b-8e00-b9e2d1c49ed8-kube-api-access-lj7bj\") pod \"console-64d44f6ddf-96rdx\" (UID: \"7ff811e4-3864-456b-8e00-b9e2d1c49ed8\") " pod="openshift-console/console-64d44f6ddf-96rdx"
Dec 12 15:21:28 crc kubenswrapper[5123]: I1212 15:21:28.000295 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/c4465de2-5e85-451d-a998-dcff71c6d37c-v4-0-config-system-serving-cert\") pod \"oauth-openshift-66458b6674-cqp44\" (UID: \"c4465de2-5e85-451d-a998-dcff71c6d37c\") " pod="openshift-authentication/oauth-openshift-66458b6674-cqp44"
Dec 12 15:21:28 crc kubenswrapper[5123]: I1212 15:21:28.000323 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/c4465de2-5e85-451d-a998-dcff71c6d37c-v4-0-config-user-template-error\") pod \"oauth-openshift-66458b6674-cqp44\" (UID: \"c4465de2-5e85-451d-a998-dcff71c6d37c\") " pod="openshift-authentication/oauth-openshift-66458b6674-cqp44"
Dec 12 15:21:28 crc kubenswrapper[5123]: I1212 15:21:28.000344 5123 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8nx5j\" (UniqueName: \"kubernetes.io/projected/bf62556f-373c-41a0-96d4-8f431d629029-kube-api-access-8nx5j\") pod \"controller-manager-65b6cccf98-t4m4d\" (UID: \"bf62556f-373c-41a0-96d4-8f431d629029\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-t4m4d"
Dec 12 15:21:28 crc kubenswrapper[5123]: I1212 15:21:28.000392 5123 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5357a0b5-86ce-437b-b973-0bc2be3f85fd-serving-cert\") pod \"openshift-apiserver-operator-846cbfc458-kr28r\" (UID: \"5357a0b5-86ce-437b-b973-0bc2be3f85fd\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-kr28r"
Dec 12 15:21:28 crc kubenswrapper[5123]: I1212 15:21:28.000429 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/c4465de2-5e85-451d-a998-dcff71c6d37c-v4-0-config-system-router-certs\") pod \"oauth-openshift-66458b6674-cqp44\" (UID: \"c4465de2-5e85-451d-a998-dcff71c6d37c\") " pod="openshift-authentication/oauth-openshift-66458b6674-cqp44"
Dec 12 15:21:28 crc kubenswrapper[5123]: I1212 15:21:28.000448 5123 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bf62556f-373c-41a0-96d4-8f431d629029-serving-cert\") pod \"controller-manager-65b6cccf98-t4m4d\" (UID: \"bf62556f-373c-41a0-96d4-8f431d629029\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-t4m4d"
Dec 12 15:21:28 crc kubenswrapper[5123]: I1212 15:21:28.000637 5123 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gndtv\" (UniqueName: \"kubernetes.io/projected/5357a0b5-86ce-437b-b973-0bc2be3f85fd-kube-api-access-gndtv\") pod \"openshift-apiserver-operator-846cbfc458-kr28r\" (UID: \"5357a0b5-86ce-437b-b973-0bc2be3f85fd\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-kr28r"
Dec 12 15:21:28 crc kubenswrapper[5123]: I1212 15:21:28.000682 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/c4465de2-5e85-451d-a998-dcff71c6d37c-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-66458b6674-cqp44\" (UID: \"c4465de2-5e85-451d-a998-dcff71c6d37c\") " pod="openshift-authentication/oauth-openshift-66458b6674-cqp44"
Dec 12 15:21:28 crc kubenswrapper[5123]: I1212 15:21:28.000708 5123 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/19fca7bf-f8d6-4e7c-b54d-e98292eb7efd-etcd-service-ca\") pod \"etcd-operator-69b85846b6-7pgks\" (UID: \"19fca7bf-f8d6-4e7c-b54d-e98292eb7efd\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-7pgks"
Dec 12 15:21:28 crc kubenswrapper[5123]: I1212 15:21:28.000819 5123 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/bf62556f-373c-41a0-96d4-8f431d629029-proxy-ca-bundles\") pod \"controller-manager-65b6cccf98-t4m4d\" (UID: \"bf62556f-373c-41a0-96d4-8f431d629029\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-t4m4d"
Dec 12 15:21:28 crc kubenswrapper[5123]: I1212 15:21:28.000836 5123 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9da0a55f-2526-45cc-b820-1b31ce63745c-trusted-ca\") pod \"console-operator-67c89758df-vqqzf\" (UID: \"9da0a55f-2526-45cc-b820-1b31ce63745c\") " pod="openshift-console-operator/console-operator-67c89758df-vqqzf"
Dec 12 15:21:28 crc kubenswrapper[5123]: I1212 15:21:28.000864 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9368bb85-0c25-4d7d-884c-7ebea4cf3336-config\") pod \"machine-api-operator-755bb95488-68259\" (UID: \"9368bb85-0c25-4d7d-884c-7ebea4cf3336\") " pod="openshift-machine-api/machine-api-operator-755bb95488-68259"
Dec 12 15:21:28 crc kubenswrapper[5123]: I1212 15:21:28.000983 5123 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9da0a55f-2526-45cc-b820-1b31ce63745c-serving-cert\") pod \"console-operator-67c89758df-vqqzf\" (UID: \"9da0a55f-2526-45cc-b820-1b31ce63745c\") " pod="openshift-console-operator/console-operator-67c89758df-vqqzf"
Dec 12 15:21:28 crc kubenswrapper[5123]: I1212 15:21:28.001014 5123 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/7ff811e4-3864-456b-8e00-b9e2d1c49ed8-console-oauth-config\") pod \"console-64d44f6ddf-96rdx\" (UID: \"7ff811e4-3864-456b-8e00-b9e2d1c49ed8\") " pod="openshift-console/console-64d44f6ddf-96rdx"
Dec 12 15:21:28 crc kubenswrapper[5123]: I1212 15:21:28.001057 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-6htnd\" (UniqueName: \"kubernetes.io/projected/c4465de2-5e85-451d-a998-dcff71c6d37c-kube-api-access-6htnd\") pod \"oauth-openshift-66458b6674-cqp44\" (UID: \"c4465de2-5e85-451d-a998-dcff71c6d37c\") " pod="openshift-authentication/oauth-openshift-66458b6674-cqp44"
Dec 12 15:21:28 crc kubenswrapper[5123]: I1212 15:21:28.003773 5123 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-dtknn"]
Dec 12 15:21:28 crc kubenswrapper[5123]: I1212 15:21:28.004544 5123 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bf62556f-373c-41a0-96d4-8f431d629029-config\") pod \"controller-manager-65b6cccf98-t4m4d\" (UID: \"bf62556f-373c-41a0-96d4-8f431d629029\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-t4m4d"
Dec 12 15:21:28 crc kubenswrapper[5123]: I1212 15:21:28.005042 5123 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/c4465de2-5e85-451d-a998-dcff71c6d37c-v4-0-config-system-service-ca\") pod \"oauth-openshift-66458b6674-cqp44\" (UID: \"c4465de2-5e85-451d-a998-dcff71c6d37c\") " pod="openshift-authentication/oauth-openshift-66458b6674-cqp44"
Dec 12 15:21:28 crc kubenswrapper[5123]: I1212 15:21:28.005753 5123 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/c4465de2-5e85-451d-a998-dcff71c6d37c-v4-0-config-user-template-login\") pod \"oauth-openshift-66458b6674-cqp44\" (UID: \"c4465de2-5e85-451d-a998-dcff71c6d37c\") " pod="openshift-authentication/oauth-openshift-66458b6674-cqp44"
Dec 12 15:21:28 crc kubenswrapper[5123]: I1212 15:21:28.005864 5123 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c1681a2f-153f-44c0-901e-e85b401d30ee-config-volume\") pod \"collect-profiles-29425875-lxsft\" (UID: \"c1681a2f-153f-44c0-901e-e85b401d30ee\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29425875-lxsft"
Dec 12 15:21:28 crc kubenswrapper[5123]: I1212 15:21:28.006283 5123 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress\"/\"openshift-service-ca.crt\""
Dec 12 15:21:28 crc kubenswrapper[5123]: I1212 15:21:28.006801 5123 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/9368bb85-0c25-4d7d-884c-7ebea4cf3336-machine-api-operator-tls\") pod \"machine-api-operator-755bb95488-68259\" (UID: \"9368bb85-0c25-4d7d-884c-7ebea4cf3336\") " pod="openshift-machine-api/machine-api-operator-755bb95488-68259"
Dec 12 15:21:28 crc kubenswrapper[5123]: I1212 15:21:28.009449 5123 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9368bb85-0c25-4d7d-884c-7ebea4cf3336-config\") pod \"machine-api-operator-755bb95488-68259\" (UID: \"9368bb85-0c25-4d7d-884c-7ebea4cf3336\") " pod="openshift-machine-api/machine-api-operator-755bb95488-68259"
Dec 12 15:21:28 crc kubenswrapper[5123]: I1212 15:21:28.010396 5123 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/c4465de2-5e85-451d-a998-dcff71c6d37c-audit-dir\") pod \"oauth-openshift-66458b6674-cqp44\" (UID: \"c4465de2-5e85-451d-a998-dcff71c6d37c\") " pod="openshift-authentication/oauth-openshift-66458b6674-cqp44"
Dec 12 15:21:28 crc kubenswrapper[5123]: I1212 15:21:28.011323 5123 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/c4465de2-5e85-451d-a998-dcff71c6d37c-audit-policies\") pod \"oauth-openshift-66458b6674-cqp44\" (UID: \"c4465de2-5e85-451d-a998-dcff71c6d37c\") " pod="openshift-authentication/oauth-openshift-66458b6674-cqp44"
Dec 12 15:21:28 crc kubenswrapper[5123]: I1212 15:21:28.011680 5123 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c4465de2-5e85-451d-a998-dcff71c6d37c-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-66458b6674-cqp44\" (UID: \"c4465de2-5e85-451d-a998-dcff71c6d37c\") " pod="openshift-authentication/oauth-openshift-66458b6674-cqp44"
Dec 12 15:21:28 crc kubenswrapper[5123]: I1212 15:21:28.011989 5123 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/9368bb85-0c25-4d7d-884c-7ebea4cf3336-images\") pod \"machine-api-operator-755bb95488-68259\" (UID: \"9368bb85-0c25-4d7d-884c-7ebea4cf3336\") " pod="openshift-machine-api/machine-api-operator-755bb95488-68259"
Dec 12 15:21:28 crc kubenswrapper[5123]: I1212 15:21:28.013521 5123 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/c4465de2-5e85-451d-a998-dcff71c6d37c-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-66458b6674-cqp44\" (UID: \"c4465de2-5e85-451d-a998-dcff71c6d37c\") " pod="openshift-authentication/oauth-openshift-66458b6674-cqp44"
Dec 12 15:21:28 crc kubenswrapper[5123]: I1212 15:21:28.015088 5123 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/c1681a2f-153f-44c0-901e-e85b401d30ee-secret-volume\") pod \"collect-profiles-29425875-lxsft\" (UID: \"c1681a2f-153f-44c0-901e-e85b401d30ee\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29425875-lxsft"
Dec 12 15:21:28 crc kubenswrapper[5123]: I1212 15:21:28.015852 5123 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/c4465de2-5e85-451d-a998-dcff71c6d37c-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-66458b6674-cqp44\" (UID: \"c4465de2-5e85-451d-a998-dcff71c6d37c\") " pod="openshift-authentication/oauth-openshift-66458b6674-cqp44"
Dec 12 15:21:28 crc kubenswrapper[5123]: I1212 15:21:28.016193 5123 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/c4465de2-5e85-451d-a998-dcff71c6d37c-v4-0-config-system-router-certs\") pod \"oauth-openshift-66458b6674-cqp44\" (UID: \"c4465de2-5e85-451d-a998-dcff71c6d37c\") " pod="openshift-authentication/oauth-openshift-66458b6674-cqp44"
Dec 12 15:21:28 crc kubenswrapper[5123]: I1212 15:21:28.016924 5123 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/c4465de2-5e85-451d-a998-dcff71c6d37c-v4-0-config-system-session\") pod \"oauth-openshift-66458b6674-cqp44\" (UID: \"c4465de2-5e85-451d-a998-dcff71c6d37c\") " pod="openshift-authentication/oauth-openshift-66458b6674-cqp44"
Dec 12 15:21:28 crc kubenswrapper[5123]: I1212 15:21:28.017567 5123 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/1c109e0c-2708-45cf-8c8e-0489b41c9830-samples-operator-tls\") pod \"cluster-samples-operator-6b564684c8-dvgzb\" (UID: \"1c109e0c-2708-45cf-8c8e-0489b41c9830\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-dvgzb"
Dec 12 15:21:28 crc kubenswrapper[5123]: I1212 15:21:28.025544 5123 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress\"/\"router-dockercfg-kw8fx\""
Dec 12 15:21:28 crc kubenswrapper[5123]: I1212 15:21:28.042753 5123 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress\"/\"router-certs-default\""
Dec 12 15:21:28 crc kubenswrapper[5123]: I1212 15:21:28.048890 5123 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-9qnbt"]
Dec 12 15:21:28 crc kubenswrapper[5123]: I1212 15:21:28.048930 5123 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/c4465de2-5e85-451d-a998-dcff71c6d37c-v4-0-config-system-serving-cert\") pod \"oauth-openshift-66458b6674-cqp44\" (UID: \"c4465de2-5e85-451d-a998-dcff71c6d37c\") " pod="openshift-authentication/oauth-openshift-66458b6674-cqp44"
Dec 12 15:21:28 crc kubenswrapper[5123]: I1212 15:21:28.051281 5123 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-dtknn"
Dec 12 15:21:28 crc kubenswrapper[5123]: I1212 15:21:28.053422 5123 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/c4465de2-5e85-451d-a998-dcff71c6d37c-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-66458b6674-cqp44\" (UID: \"c4465de2-5e85-451d-a998-dcff71c6d37c\") " pod="openshift-authentication/oauth-openshift-66458b6674-cqp44"
Dec 12 15:21:28 crc kubenswrapper[5123]: I1212 15:21:28.055297 5123 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/c4465de2-5e85-451d-a998-dcff71c6d37c-v4-0-config-user-template-error\") pod \"oauth-openshift-66458b6674-cqp44\" (UID: \"c4465de2-5e85-451d-a998-dcff71c6d37c\") " pod="openshift-authentication/oauth-openshift-66458b6674-cqp44"
Dec 12 15:21:28 crc kubenswrapper[5123]: I1212 15:21:28.060236 5123 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-controller-f9cdd68f7-t8xgq"]
Dec 12 15:21:28 crc kubenswrapper[5123]: I1212 15:21:28.060610 5123 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-bbdv4"
Dec 12 15:21:28 crc kubenswrapper[5123]: I1212 15:21:28.061446 5123 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-9qnbt"
Dec 12 15:21:28 crc kubenswrapper[5123]: I1212 15:21:28.063913 5123 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-hznms"]
Dec 12 15:21:28 crc kubenswrapper[5123]: I1212 15:21:28.065612 5123 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-t8xgq"
Dec 12 15:21:28 crc kubenswrapper[5123]: I1212 15:21:28.071172 5123 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-75ffdb6fcd-tmds4"]
Dec 12 15:21:28 crc kubenswrapper[5123]: I1212 15:21:28.073686 5123 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress\"/\"router-stats-default\""
Dec 12 15:21:28 crc kubenswrapper[5123]: I1212 15:21:28.076022 5123 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-c6l4m"]
Dec 12 15:21:28 crc kubenswrapper[5123]: I1212 15:21:28.077572 5123 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-hznms"
Dec 12 15:21:28 crc kubenswrapper[5123]: I1212 15:21:28.079978 5123 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-547dbd544d-rkcvb"]
Dec 12 15:21:28 crc kubenswrapper[5123]: I1212 15:21:28.083202 5123 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-75ffdb6fcd-tmds4"
Dec 12 15:21:28 crc kubenswrapper[5123]: I1212 15:21:28.088854 5123 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress\"/\"router-metrics-certs-default\""
Dec 12 15:21:28 crc kubenswrapper[5123]: I1212 15:21:28.281548 5123 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-apiserver\"/\"encryption-config-1\""
Dec 12 15:21:28 crc kubenswrapper[5123]: I1212 15:21:28.281997 5123 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress\"/\"service-ca-bundle\""
Dec 12 15:21:28 crc kubenswrapper[5123]: I1212 15:21:28.282693 5123 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-admission-controller-69db94689b-gg4kh"]
Dec 12 15:21:28 crc kubenswrapper[5123]: I1212 15:21:28.282789 5123 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress\"/\"kube-root-ca.crt\""
Dec 12 15:21:28 crc kubenswrapper[5123]: I1212 15:21:28.282903 5123 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-c6l4m"
Dec 12 15:21:28 crc kubenswrapper[5123]: I1212 15:21:28.283140 5123 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-547dbd544d-rkcvb"
Dec 12 15:21:28 crc kubenswrapper[5123]: I1212 15:21:28.283761 5123 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver\"/\"openshift-service-ca.crt\""
Dec 12 15:21:28 crc kubenswrapper[5123]: I1212 15:21:28.284044 5123 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver\"/\"audit-1\""
Dec 12 15:21:28 crc kubenswrapper[5123]: I1212 15:21:28.284091 5123 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver\"/\"config\""
Dec 12 15:21:28 crc kubenswrapper[5123]: I1212 15:21:28.284337 5123 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-apiserver\"/\"serving-cert\""
Dec 12 15:21:28 crc kubenswrapper[5123]: I1212 15:21:28.284437 5123 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-apiserver\"/\"etcd-client\""
Dec 12 15:21:28 crc kubenswrapper[5123]: I1212 15:21:28.285078 5123 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-apiserver\"/\"openshift-apiserver-sa-dockercfg-4zqgh\""
Dec 12 15:21:28 crc kubenswrapper[5123]: I1212 15:21:28.285729 5123 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/36a3f9f0-c1ab-4411-a9e8-7795ab55a6e9-auth-proxy-config\") pod \"machine-approver-54c688565-4b7jt\" (UID: \"36a3f9f0-c1ab-4411-a9e8-7795ab55a6e9\") " pod="openshift-cluster-machine-approver/machine-approver-54c688565-4b7jt"
Dec 12 15:21:28 crc kubenswrapper[5123]: I1212 15:21:28.285770 5123 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/e077c741-1ed0-4ffa-80a7-6ce54aab5fe0-node-pullsecrets\") pod \"apiserver-9ddfb9f55-bmckw\" (UID: \"e077c741-1ed0-4ffa-80a7-6ce54aab5fe0\") " pod="openshift-apiserver/apiserver-9ddfb9f55-bmckw"
Dec 12 15:21:28 crc kubenswrapper[5123]: I1212 15:21:28.285805 5123 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/45c4bae4-fd5a-46dc-b8ea-0915b2c5789e-tmp\") pod \"openshift-controller-manager-operator-686468bdd5-qpz79\" (UID: \"45c4bae4-fd5a-46dc-b8ea-0915b2c5789e\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-qpz79"
Dec 12 15:21:28 crc kubenswrapper[5123]: I1212 15:21:28.285838 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/7ff811e4-3864-456b-8e00-b9e2d1c49ed8-oauth-serving-cert\") pod \"console-64d44f6ddf-96rdx\" (UID: \"7ff811e4-3864-456b-8e00-b9e2d1c49ed8\") " pod="openshift-console/console-64d44f6ddf-96rdx"
Dec 12 15:21:28 crc kubenswrapper[5123]: I1212 15:21:28.285863 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-lj7bj\" (UniqueName: \"kubernetes.io/projected/7ff811e4-3864-456b-8e00-b9e2d1c49ed8-kube-api-access-lj7bj\") pod \"console-64d44f6ddf-96rdx\" (UID: \"7ff811e4-3864-456b-8e00-b9e2d1c49ed8\") " pod="openshift-console/console-64d44f6ddf-96rdx"
Dec 12 15:21:28 crc kubenswrapper[5123]: I1212 15:21:28.285884 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-8nx5j\" (UniqueName: \"kubernetes.io/projected/bf62556f-373c-41a0-96d4-8f431d629029-kube-api-access-8nx5j\") pod \"controller-manager-65b6cccf98-t4m4d\" (UID: \"bf62556f-373c-41a0-96d4-8f431d629029\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-t4m4d"
Dec 12 15:21:28 crc kubenswrapper[5123]: I1212 15:21:28.285901 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName:
\"kubernetes.io/secret/5357a0b5-86ce-437b-b973-0bc2be3f85fd-serving-cert\") pod \"openshift-apiserver-operator-846cbfc458-kr28r\" (UID: \"5357a0b5-86ce-437b-b973-0bc2be3f85fd\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-kr28r" Dec 12 15:21:28 crc kubenswrapper[5123]: I1212 15:21:28.285928 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bf62556f-373c-41a0-96d4-8f431d629029-serving-cert\") pod \"controller-manager-65b6cccf98-t4m4d\" (UID: \"bf62556f-373c-41a0-96d4-8f431d629029\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-t4m4d" Dec 12 15:21:28 crc kubenswrapper[5123]: I1212 15:21:28.285943 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-gndtv\" (UniqueName: \"kubernetes.io/projected/5357a0b5-86ce-437b-b973-0bc2be3f85fd-kube-api-access-gndtv\") pod \"openshift-apiserver-operator-846cbfc458-kr28r\" (UID: \"5357a0b5-86ce-437b-b973-0bc2be3f85fd\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-kr28r" Dec 12 15:21:28 crc kubenswrapper[5123]: I1212 15:21:28.285964 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/19fca7bf-f8d6-4e7c-b54d-e98292eb7efd-etcd-service-ca\") pod \"etcd-operator-69b85846b6-7pgks\" (UID: \"19fca7bf-f8d6-4e7c-b54d-e98292eb7efd\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-7pgks" Dec 12 15:21:28 crc kubenswrapper[5123]: I1212 15:21:28.285984 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/bf62556f-373c-41a0-96d4-8f431d629029-proxy-ca-bundles\") pod \"controller-manager-65b6cccf98-t4m4d\" (UID: \"bf62556f-373c-41a0-96d4-8f431d629029\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-t4m4d" Dec 12 15:21:28 crc 
kubenswrapper[5123]: I1212 15:21:28.286007 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9da0a55f-2526-45cc-b820-1b31ce63745c-trusted-ca\") pod \"console-operator-67c89758df-vqqzf\" (UID: \"9da0a55f-2526-45cc-b820-1b31ce63745c\") " pod="openshift-console-operator/console-operator-67c89758df-vqqzf" Dec 12 15:21:28 crc kubenswrapper[5123]: I1212 15:21:28.286033 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9da0a55f-2526-45cc-b820-1b31ce63745c-serving-cert\") pod \"console-operator-67c89758df-vqqzf\" (UID: \"9da0a55f-2526-45cc-b820-1b31ce63745c\") " pod="openshift-console-operator/console-operator-67c89758df-vqqzf" Dec 12 15:21:28 crc kubenswrapper[5123]: I1212 15:21:28.286055 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/7ff811e4-3864-456b-8e00-b9e2d1c49ed8-console-oauth-config\") pod \"console-64d44f6ddf-96rdx\" (UID: \"7ff811e4-3864-456b-8e00-b9e2d1c49ed8\") " pod="openshift-console/console-64d44f6ddf-96rdx" Dec 12 15:21:28 crc kubenswrapper[5123]: I1212 15:21:28.286074 5123 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e077c741-1ed0-4ffa-80a7-6ce54aab5fe0-trusted-ca-bundle\") pod \"apiserver-9ddfb9f55-bmckw\" (UID: \"e077c741-1ed0-4ffa-80a7-6ce54aab5fe0\") " pod="openshift-apiserver/apiserver-9ddfb9f55-bmckw" Dec 12 15:21:28 crc kubenswrapper[5123]: I1212 15:21:28.286101 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/7e490e0b-11da-4093-bd3c-a328ebd6e304-tmp-dir\") pod \"dns-operator-799b87ffcd-kvxss\" (UID: \"7e490e0b-11da-4093-bd3c-a328ebd6e304\") " 
pod="openshift-dns-operator/dns-operator-799b87ffcd-kvxss" Dec 12 15:21:28 crc kubenswrapper[5123]: I1212 15:21:28.286117 5123 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/45c4bae4-fd5a-46dc-b8ea-0915b2c5789e-config\") pod \"openshift-controller-manager-operator-686468bdd5-qpz79\" (UID: \"45c4bae4-fd5a-46dc-b8ea-0915b2c5789e\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-qpz79" Dec 12 15:21:28 crc kubenswrapper[5123]: I1212 15:21:28.286141 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/19fca7bf-f8d6-4e7c-b54d-e98292eb7efd-config\") pod \"etcd-operator-69b85846b6-7pgks\" (UID: \"19fca7bf-f8d6-4e7c-b54d-e98292eb7efd\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-7pgks" Dec 12 15:21:28 crc kubenswrapper[5123]: I1212 15:21:28.286168 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-48s86\" (UniqueName: \"kubernetes.io/projected/2c1e4fb9-bde9-46df-8ac0-c0b457ca767f-kube-api-access-48s86\") pod \"openshift-config-operator-5777786469-9j9pt\" (UID: \"2c1e4fb9-bde9-46df-8ac0-c0b457ca767f\") " pod="openshift-config-operator/openshift-config-operator-5777786469-9j9pt" Dec 12 15:21:28 crc kubenswrapper[5123]: I1212 15:21:28.286184 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/19fca7bf-f8d6-4e7c-b54d-e98292eb7efd-serving-cert\") pod \"etcd-operator-69b85846b6-7pgks\" (UID: \"19fca7bf-f8d6-4e7c-b54d-e98292eb7efd\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-7pgks" Dec 12 15:21:28 crc kubenswrapper[5123]: I1212 15:21:28.286205 5123 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dr8zb\" (UniqueName: 
\"kubernetes.io/projected/45c4bae4-fd5a-46dc-b8ea-0915b2c5789e-kube-api-access-dr8zb\") pod \"openshift-controller-manager-operator-686468bdd5-qpz79\" (UID: \"45c4bae4-fd5a-46dc-b8ea-0915b2c5789e\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-qpz79" Dec 12 15:21:28 crc kubenswrapper[5123]: I1212 15:21:28.286242 5123 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/e077c741-1ed0-4ffa-80a7-6ce54aab5fe0-etcd-client\") pod \"apiserver-9ddfb9f55-bmckw\" (UID: \"e077c741-1ed0-4ffa-80a7-6ce54aab5fe0\") " pod="openshift-apiserver/apiserver-9ddfb9f55-bmckw" Dec 12 15:21:28 crc kubenswrapper[5123]: I1212 15:21:28.286262 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7ff811e4-3864-456b-8e00-b9e2d1c49ed8-trusted-ca-bundle\") pod \"console-64d44f6ddf-96rdx\" (UID: \"7ff811e4-3864-456b-8e00-b9e2d1c49ed8\") " pod="openshift-console/console-64d44f6ddf-96rdx" Dec 12 15:21:28 crc kubenswrapper[5123]: I1212 15:21:28.286278 5123 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/36a3f9f0-c1ab-4411-a9e8-7795ab55a6e9-config\") pod \"machine-approver-54c688565-4b7jt\" (UID: \"36a3f9f0-c1ab-4411-a9e8-7795ab55a6e9\") " pod="openshift-cluster-machine-approver/machine-approver-54c688565-4b7jt" Dec 12 15:21:28 crc kubenswrapper[5123]: I1212 15:21:28.286297 5123 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e077c741-1ed0-4ffa-80a7-6ce54aab5fe0-config\") pod \"apiserver-9ddfb9f55-bmckw\" (UID: \"e077c741-1ed0-4ffa-80a7-6ce54aab5fe0\") " pod="openshift-apiserver/apiserver-9ddfb9f55-bmckw" Dec 12 15:21:28 crc kubenswrapper[5123]: I1212 15:21:28.286315 5123 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9f45ad41-b75a-4549-a242-88e737cb7698-config\") pod \"kube-storage-version-migrator-operator-565b79b866-4hvhp\" (UID: \"9f45ad41-b75a-4549-a242-88e737cb7698\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-4hvhp" Dec 12 15:21:28 crc kubenswrapper[5123]: I1212 15:21:28.286335 5123 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/36a3f9f0-c1ab-4411-a9e8-7795ab55a6e9-machine-approver-tls\") pod \"machine-approver-54c688565-4b7jt\" (UID: \"36a3f9f0-c1ab-4411-a9e8-7795ab55a6e9\") " pod="openshift-cluster-machine-approver/machine-approver-54c688565-4b7jt" Dec 12 15:21:28 crc kubenswrapper[5123]: I1212 15:21:28.286361 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/7ff811e4-3864-456b-8e00-b9e2d1c49ed8-service-ca\") pod \"console-64d44f6ddf-96rdx\" (UID: \"7ff811e4-3864-456b-8e00-b9e2d1c49ed8\") " pod="openshift-console/console-64d44f6ddf-96rdx" Dec 12 15:21:28 crc kubenswrapper[5123]: I1212 15:21:28.286377 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/19fca7bf-f8d6-4e7c-b54d-e98292eb7efd-tmp-dir\") pod \"etcd-operator-69b85846b6-7pgks\" (UID: \"19fca7bf-f8d6-4e7c-b54d-e98292eb7efd\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-7pgks" Dec 12 15:21:28 crc kubenswrapper[5123]: I1212 15:21:28.286394 5123 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4r5sc\" (UniqueName: \"kubernetes.io/projected/36a3f9f0-c1ab-4411-a9e8-7795ab55a6e9-kube-api-access-4r5sc\") pod \"machine-approver-54c688565-4b7jt\" (UID: 
\"36a3f9f0-c1ab-4411-a9e8-7795ab55a6e9\") " pod="openshift-cluster-machine-approver/machine-approver-54c688565-4b7jt" Dec 12 15:21:28 crc kubenswrapper[5123]: I1212 15:21:28.286419 5123 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/e077c741-1ed0-4ffa-80a7-6ce54aab5fe0-audit-dir\") pod \"apiserver-9ddfb9f55-bmckw\" (UID: \"e077c741-1ed0-4ffa-80a7-6ce54aab5fe0\") " pod="openshift-apiserver/apiserver-9ddfb9f55-bmckw" Dec 12 15:21:28 crc kubenswrapper[5123]: I1212 15:21:28.286438 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/7ff811e4-3864-456b-8e00-b9e2d1c49ed8-console-config\") pod \"console-64d44f6ddf-96rdx\" (UID: \"7ff811e4-3864-456b-8e00-b9e2d1c49ed8\") " pod="openshift-console/console-64d44f6ddf-96rdx" Dec 12 15:21:28 crc kubenswrapper[5123]: I1212 15:21:28.286456 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/19fca7bf-f8d6-4e7c-b54d-e98292eb7efd-etcd-client\") pod \"etcd-operator-69b85846b6-7pgks\" (UID: \"19fca7bf-f8d6-4e7c-b54d-e98292eb7efd\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-7pgks" Dec 12 15:21:28 crc kubenswrapper[5123]: I1212 15:21:28.286475 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/19fca7bf-f8d6-4e7c-b54d-e98292eb7efd-etcd-ca\") pod \"etcd-operator-69b85846b6-7pgks\" (UID: \"19fca7bf-f8d6-4e7c-b54d-e98292eb7efd\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-7pgks" Dec 12 15:21:28 crc kubenswrapper[5123]: I1212 15:21:28.286492 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9da0a55f-2526-45cc-b820-1b31ce63745c-config\") pod \"console-operator-67c89758df-vqqzf\" (UID: 
\"9da0a55f-2526-45cc-b820-1b31ce63745c\") " pod="openshift-console-operator/console-operator-67c89758df-vqqzf" Dec 12 15:21:28 crc kubenswrapper[5123]: I1212 15:21:28.286511 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5357a0b5-86ce-437b-b973-0bc2be3f85fd-config\") pod \"openshift-apiserver-operator-846cbfc458-kr28r\" (UID: \"5357a0b5-86ce-437b-b973-0bc2be3f85fd\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-kr28r" Dec 12 15:21:28 crc kubenswrapper[5123]: I1212 15:21:28.286527 5123 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/e077c741-1ed0-4ffa-80a7-6ce54aab5fe0-audit\") pod \"apiserver-9ddfb9f55-bmckw\" (UID: \"e077c741-1ed0-4ffa-80a7-6ce54aab5fe0\") " pod="openshift-apiserver/apiserver-9ddfb9f55-bmckw" Dec 12 15:21:28 crc kubenswrapper[5123]: I1212 15:21:28.286550 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/bf62556f-373c-41a0-96d4-8f431d629029-tmp\") pod \"controller-manager-65b6cccf98-t4m4d\" (UID: \"bf62556f-373c-41a0-96d4-8f431d629029\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-t4m4d" Dec 12 15:21:28 crc kubenswrapper[5123]: I1212 15:21:28.286567 5123 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e077c741-1ed0-4ffa-80a7-6ce54aab5fe0-serving-cert\") pod \"apiserver-9ddfb9f55-bmckw\" (UID: \"e077c741-1ed0-4ffa-80a7-6ce54aab5fe0\") " pod="openshift-apiserver/apiserver-9ddfb9f55-bmckw" Dec 12 15:21:28 crc kubenswrapper[5123]: I1212 15:21:28.286596 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-hgj9w\" (UniqueName: 
\"kubernetes.io/projected/9da0a55f-2526-45cc-b820-1b31ce63745c-kube-api-access-hgj9w\") pod \"console-operator-67c89758df-vqqzf\" (UID: \"9da0a55f-2526-45cc-b820-1b31ce63745c\") " pod="openshift-console-operator/console-operator-67c89758df-vqqzf" Dec 12 15:21:28 crc kubenswrapper[5123]: I1212 15:21:28.286613 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2c1e4fb9-bde9-46df-8ac0-c0b457ca767f-serving-cert\") pod \"openshift-config-operator-5777786469-9j9pt\" (UID: \"2c1e4fb9-bde9-46df-8ac0-c0b457ca767f\") " pod="openshift-config-operator/openshift-config-operator-5777786469-9j9pt" Dec 12 15:21:28 crc kubenswrapper[5123]: I1212 15:21:28.286653 5123 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/45c4bae4-fd5a-46dc-b8ea-0915b2c5789e-serving-cert\") pod \"openshift-controller-manager-operator-686468bdd5-qpz79\" (UID: \"45c4bae4-fd5a-46dc-b8ea-0915b2c5789e\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-qpz79" Dec 12 15:21:28 crc kubenswrapper[5123]: I1212 15:21:28.286672 5123 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6bpvp\" (UniqueName: \"kubernetes.io/projected/e077c741-1ed0-4ffa-80a7-6ce54aab5fe0-kube-api-access-6bpvp\") pod \"apiserver-9ddfb9f55-bmckw\" (UID: \"e077c741-1ed0-4ffa-80a7-6ce54aab5fe0\") " pod="openshift-apiserver/apiserver-9ddfb9f55-bmckw" Dec 12 15:21:28 crc kubenswrapper[5123]: I1212 15:21:28.286702 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-9rql7\" (UniqueName: \"kubernetes.io/projected/19fca7bf-f8d6-4e7c-b54d-e98292eb7efd-kube-api-access-9rql7\") pod \"etcd-operator-69b85846b6-7pgks\" (UID: \"19fca7bf-f8d6-4e7c-b54d-e98292eb7efd\") " 
pod="openshift-etcd-operator/etcd-operator-69b85846b6-7pgks" Dec 12 15:21:28 crc kubenswrapper[5123]: I1212 15:21:28.286718 5123 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/e077c741-1ed0-4ffa-80a7-6ce54aab5fe0-etcd-serving-ca\") pod \"apiserver-9ddfb9f55-bmckw\" (UID: \"e077c741-1ed0-4ffa-80a7-6ce54aab5fe0\") " pod="openshift-apiserver/apiserver-9ddfb9f55-bmckw" Dec 12 15:21:28 crc kubenswrapper[5123]: I1212 15:21:28.286736 5123 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/e077c741-1ed0-4ffa-80a7-6ce54aab5fe0-encryption-config\") pod \"apiserver-9ddfb9f55-bmckw\" (UID: \"e077c741-1ed0-4ffa-80a7-6ce54aab5fe0\") " pod="openshift-apiserver/apiserver-9ddfb9f55-bmckw" Dec 12 15:21:28 crc kubenswrapper[5123]: I1212 15:21:28.286758 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/2c1e4fb9-bde9-46df-8ac0-c0b457ca767f-available-featuregates\") pod \"openshift-config-operator-5777786469-9j9pt\" (UID: \"2c1e4fb9-bde9-46df-8ac0-c0b457ca767f\") " pod="openshift-config-operator/openshift-config-operator-5777786469-9j9pt" Dec 12 15:21:28 crc kubenswrapper[5123]: I1212 15:21:28.286776 5123 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jmqjb\" (UniqueName: \"kubernetes.io/projected/9f45ad41-b75a-4549-a242-88e737cb7698-kube-api-access-jmqjb\") pod \"kube-storage-version-migrator-operator-565b79b866-4hvhp\" (UID: \"9f45ad41-b75a-4549-a242-88e737cb7698\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-4hvhp" Dec 12 15:21:28 crc kubenswrapper[5123]: I1212 15:21:28.286815 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume 
\"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/7ff811e4-3864-456b-8e00-b9e2d1c49ed8-console-serving-cert\") pod \"console-64d44f6ddf-96rdx\" (UID: \"7ff811e4-3864-456b-8e00-b9e2d1c49ed8\") " pod="openshift-console/console-64d44f6ddf-96rdx" Dec 12 15:21:28 crc kubenswrapper[5123]: I1212 15:21:28.286853 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/7e490e0b-11da-4093-bd3c-a328ebd6e304-metrics-tls\") pod \"dns-operator-799b87ffcd-kvxss\" (UID: \"7e490e0b-11da-4093-bd3c-a328ebd6e304\") " pod="openshift-dns-operator/dns-operator-799b87ffcd-kvxss" Dec 12 15:21:28 crc kubenswrapper[5123]: I1212 15:21:28.286933 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-4pxkl\" (UniqueName: \"kubernetes.io/projected/7e490e0b-11da-4093-bd3c-a328ebd6e304-kube-api-access-4pxkl\") pod \"dns-operator-799b87ffcd-kvxss\" (UID: \"7e490e0b-11da-4093-bd3c-a328ebd6e304\") " pod="openshift-dns-operator/dns-operator-799b87ffcd-kvxss" Dec 12 15:21:28 crc kubenswrapper[5123]: I1212 15:21:28.286951 5123 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/e077c741-1ed0-4ffa-80a7-6ce54aab5fe0-image-import-ca\") pod \"apiserver-9ddfb9f55-bmckw\" (UID: \"e077c741-1ed0-4ffa-80a7-6ce54aab5fe0\") " pod="openshift-apiserver/apiserver-9ddfb9f55-bmckw" Dec 12 15:21:28 crc kubenswrapper[5123]: I1212 15:21:28.286967 5123 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9f45ad41-b75a-4549-a242-88e737cb7698-serving-cert\") pod \"kube-storage-version-migrator-operator-565b79b866-4hvhp\" (UID: \"9f45ad41-b75a-4549-a242-88e737cb7698\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-4hvhp" Dec 12 15:21:28 crc 
kubenswrapper[5123]: I1212 15:21:28.286997 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/bf62556f-373c-41a0-96d4-8f431d629029-client-ca\") pod \"controller-manager-65b6cccf98-t4m4d\" (UID: \"bf62556f-373c-41a0-96d4-8f431d629029\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-t4m4d" Dec 12 15:21:28 crc kubenswrapper[5123]: I1212 15:21:28.287106 5123 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/7ff811e4-3864-456b-8e00-b9e2d1c49ed8-oauth-serving-cert\") pod \"console-64d44f6ddf-96rdx\" (UID: \"7ff811e4-3864-456b-8e00-b9e2d1c49ed8\") " pod="openshift-console/console-64d44f6ddf-96rdx" Dec 12 15:21:28 crc kubenswrapper[5123]: I1212 15:21:28.287978 5123 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/bf62556f-373c-41a0-96d4-8f431d629029-client-ca\") pod \"controller-manager-65b6cccf98-t4m4d\" (UID: \"bf62556f-373c-41a0-96d4-8f431d629029\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-t4m4d" Dec 12 15:21:28 crc kubenswrapper[5123]: I1212 15:21:28.288451 5123 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9da0a55f-2526-45cc-b820-1b31ce63745c-trusted-ca\") pod \"console-operator-67c89758df-vqqzf\" (UID: \"9da0a55f-2526-45cc-b820-1b31ce63745c\") " pod="openshift-console-operator/console-operator-67c89758df-vqqzf" Dec 12 15:21:28 crc kubenswrapper[5123]: I1212 15:21:28.291130 5123 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/19fca7bf-f8d6-4e7c-b54d-e98292eb7efd-etcd-service-ca\") pod \"etcd-operator-69b85846b6-7pgks\" (UID: \"19fca7bf-f8d6-4e7c-b54d-e98292eb7efd\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-7pgks" Dec 12 
15:21:28 crc kubenswrapper[5123]: I1212 15:21:28.292146 5123 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9da0a55f-2526-45cc-b820-1b31ce63745c-serving-cert\") pod \"console-operator-67c89758df-vqqzf\" (UID: \"9da0a55f-2526-45cc-b820-1b31ce63745c\") " pod="openshift-console-operator/console-operator-67c89758df-vqqzf" Dec 12 15:21:28 crc kubenswrapper[5123]: I1212 15:21:28.292273 5123 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/bf62556f-373c-41a0-96d4-8f431d629029-proxy-ca-bundles\") pod \"controller-manager-65b6cccf98-t4m4d\" (UID: \"bf62556f-373c-41a0-96d4-8f431d629029\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-t4m4d" Dec 12 15:21:28 crc kubenswrapper[5123]: I1212 15:21:28.292561 5123 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/2c1e4fb9-bde9-46df-8ac0-c0b457ca767f-available-featuregates\") pod \"openshift-config-operator-5777786469-9j9pt\" (UID: \"2c1e4fb9-bde9-46df-8ac0-c0b457ca767f\") " pod="openshift-config-operator/openshift-config-operator-5777786469-9j9pt" Dec 12 15:21:28 crc kubenswrapper[5123]: I1212 15:21:28.292632 5123 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/19fca7bf-f8d6-4e7c-b54d-e98292eb7efd-etcd-ca\") pod \"etcd-operator-69b85846b6-7pgks\" (UID: \"19fca7bf-f8d6-4e7c-b54d-e98292eb7efd\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-7pgks" Dec 12 15:21:28 crc kubenswrapper[5123]: I1212 15:21:28.293365 5123 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver\"/\"trusted-ca-bundle\"" Dec 12 15:21:28 crc kubenswrapper[5123]: I1212 15:21:28.293554 5123 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/19fca7bf-f8d6-4e7c-b54d-e98292eb7efd-config\") pod \"etcd-operator-69b85846b6-7pgks\" (UID: \"19fca7bf-f8d6-4e7c-b54d-e98292eb7efd\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-7pgks" Dec 12 15:21:28 crc kubenswrapper[5123]: I1212 15:21:28.294266 5123 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/bf62556f-373c-41a0-96d4-8f431d629029-tmp\") pod \"controller-manager-65b6cccf98-t4m4d\" (UID: \"bf62556f-373c-41a0-96d4-8f431d629029\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-t4m4d" Dec 12 15:21:28 crc kubenswrapper[5123]: I1212 15:21:28.294292 5123 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5357a0b5-86ce-437b-b973-0bc2be3f85fd-config\") pod \"openshift-apiserver-operator-846cbfc458-kr28r\" (UID: \"5357a0b5-86ce-437b-b973-0bc2be3f85fd\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-kr28r" Dec 12 15:21:28 crc kubenswrapper[5123]: I1212 15:21:28.294631 5123 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7ff811e4-3864-456b-8e00-b9e2d1c49ed8-trusted-ca-bundle\") pod \"console-64d44f6ddf-96rdx\" (UID: \"7ff811e4-3864-456b-8e00-b9e2d1c49ed8\") " pod="openshift-console/console-64d44f6ddf-96rdx" Dec 12 15:21:28 crc kubenswrapper[5123]: I1212 15:21:28.294671 5123 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/7ff811e4-3864-456b-8e00-b9e2d1c49ed8-service-ca\") pod \"console-64d44f6ddf-96rdx\" (UID: \"7ff811e4-3864-456b-8e00-b9e2d1c49ed8\") " pod="openshift-console/console-64d44f6ddf-96rdx" Dec 12 15:21:28 crc kubenswrapper[5123]: I1212 15:21:28.294738 5123 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp-dir\" (UniqueName: 
\"kubernetes.io/empty-dir/19fca7bf-f8d6-4e7c-b54d-e98292eb7efd-tmp-dir\") pod \"etcd-operator-69b85846b6-7pgks\" (UID: \"19fca7bf-f8d6-4e7c-b54d-e98292eb7efd\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-7pgks" Dec 12 15:21:28 crc kubenswrapper[5123]: I1212 15:21:28.295561 5123 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/7ff811e4-3864-456b-8e00-b9e2d1c49ed8-console-config\") pod \"console-64d44f6ddf-96rdx\" (UID: \"7ff811e4-3864-456b-8e00-b9e2d1c49ed8\") " pod="openshift-console/console-64d44f6ddf-96rdx" Dec 12 15:21:28 crc kubenswrapper[5123]: I1212 15:21:28.296013 5123 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/7e490e0b-11da-4093-bd3c-a328ebd6e304-tmp-dir\") pod \"dns-operator-799b87ffcd-kvxss\" (UID: \"7e490e0b-11da-4093-bd3c-a328ebd6e304\") " pod="openshift-dns-operator/dns-operator-799b87ffcd-kvxss" Dec 12 15:21:28 crc kubenswrapper[5123]: I1212 15:21:28.296602 5123 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9da0a55f-2526-45cc-b820-1b31ce63745c-config\") pod \"console-operator-67c89758df-vqqzf\" (UID: \"9da0a55f-2526-45cc-b820-1b31ce63745c\") " pod="openshift-console-operator/console-operator-67c89758df-vqqzf" Dec 12 15:21:28 crc kubenswrapper[5123]: I1212 15:21:28.296897 5123 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/19fca7bf-f8d6-4e7c-b54d-e98292eb7efd-serving-cert\") pod \"etcd-operator-69b85846b6-7pgks\" (UID: \"19fca7bf-f8d6-4e7c-b54d-e98292eb7efd\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-7pgks" Dec 12 15:21:28 crc kubenswrapper[5123]: I1212 15:21:28.297057 5123 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: 
\"kubernetes.io/secret/7e490e0b-11da-4093-bd3c-a328ebd6e304-metrics-tls\") pod \"dns-operator-799b87ffcd-kvxss\" (UID: \"7e490e0b-11da-4093-bd3c-a328ebd6e304\") " pod="openshift-dns-operator/dns-operator-799b87ffcd-kvxss" Dec 12 15:21:28 crc kubenswrapper[5123]: I1212 15:21:28.297120 5123 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5357a0b5-86ce-437b-b973-0bc2be3f85fd-serving-cert\") pod \"openshift-apiserver-operator-846cbfc458-kr28r\" (UID: \"5357a0b5-86ce-437b-b973-0bc2be3f85fd\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-kr28r" Dec 12 15:21:28 crc kubenswrapper[5123]: I1212 15:21:28.297262 5123 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bf62556f-373c-41a0-96d4-8f431d629029-serving-cert\") pod \"controller-manager-65b6cccf98-t4m4d\" (UID: \"bf62556f-373c-41a0-96d4-8f431d629029\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-t4m4d" Dec 12 15:21:28 crc kubenswrapper[5123]: I1212 15:21:28.297296 5123 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/7ff811e4-3864-456b-8e00-b9e2d1c49ed8-console-serving-cert\") pod \"console-64d44f6ddf-96rdx\" (UID: \"7ff811e4-3864-456b-8e00-b9e2d1c49ed8\") " pod="openshift-console/console-64d44f6ddf-96rdx" Dec 12 15:21:28 crc kubenswrapper[5123]: I1212 15:21:28.298208 5123 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/19fca7bf-f8d6-4e7c-b54d-e98292eb7efd-etcd-client\") pod \"etcd-operator-69b85846b6-7pgks\" (UID: \"19fca7bf-f8d6-4e7c-b54d-e98292eb7efd\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-7pgks" Dec 12 15:21:28 crc kubenswrapper[5123]: I1212 15:21:28.298928 5123 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" 
(UniqueName: \"kubernetes.io/secret/7ff811e4-3864-456b-8e00-b9e2d1c49ed8-console-oauth-config\") pod \"console-64d44f6ddf-96rdx\" (UID: \"7ff811e4-3864-456b-8e00-b9e2d1c49ed8\") " pod="openshift-console/console-64d44f6ddf-96rdx" Dec 12 15:21:28 crc kubenswrapper[5123]: I1212 15:21:28.299974 5123 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2c1e4fb9-bde9-46df-8ac0-c0b457ca767f-serving-cert\") pod \"openshift-config-operator-5777786469-9j9pt\" (UID: \"2c1e4fb9-bde9-46df-8ac0-c0b457ca767f\") " pod="openshift-config-operator/openshift-config-operator-5777786469-9j9pt" Dec 12 15:21:28 crc kubenswrapper[5123]: I1212 15:21:28.302019 5123 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver\"/\"kube-root-ca.crt\"" Dec 12 15:21:28 crc kubenswrapper[5123]: I1212 15:21:28.304437 5123 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-service-ca-operator/service-ca-operator-5b9c976747-9t6q7"] Dec 12 15:21:28 crc kubenswrapper[5123]: I1212 15:21:28.304630 5123 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-69db94689b-gg4kh" Dec 12 15:21:28 crc kubenswrapper[5123]: I1212 15:21:28.308812 5123 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-pj4ts"] Dec 12 15:21:28 crc kubenswrapper[5123]: I1212 15:21:28.310598 5123 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-5b9c976747-9t6q7" Dec 12 15:21:28 crc kubenswrapper[5123]: I1212 15:21:28.312113 5123 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-service-ca/service-ca-74545575db-qpxdh"] Dec 12 15:21:28 crc kubenswrapper[5123]: I1212 15:21:28.312427 5123 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-pj4ts" Dec 12 15:21:28 crc kubenswrapper[5123]: I1212 15:21:28.315208 5123 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-operator-67c9d58cbb-mvm2v"] Dec 12 15:21:28 crc kubenswrapper[5123]: I1212 15:21:28.315439 5123 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-74545575db-qpxdh" Dec 12 15:21:28 crc kubenswrapper[5123]: I1212 15:21:28.320430 5123 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-multus/cni-sysctl-allowlist-ds-mdpg8"] Dec 12 15:21:28 crc kubenswrapper[5123]: I1212 15:21:28.320658 5123 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-mvm2v" Dec 12 15:21:28 crc kubenswrapper[5123]: I1212 15:21:28.321729 5123 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver\"/\"etcd-serving-ca\"" Dec 12 15:21:28 crc kubenswrapper[5123]: I1212 15:21:28.325282 5123 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-69b85846b6-7pgks"] Dec 12 15:21:28 crc kubenswrapper[5123]: I1212 15:21:28.325320 5123 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-ingress-canary/ingress-canary-hkjk6"] Dec 12 15:21:28 crc kubenswrapper[5123]: I1212 15:21:28.325513 5123 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/cni-sysctl-allowlist-ds-mdpg8" Dec 12 15:21:28 crc kubenswrapper[5123]: I1212 15:21:28.328846 5123 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-755bb95488-68259"] Dec 12 15:21:28 crc kubenswrapper[5123]: I1212 15:21:28.328903 5123 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-866fcbc849-t68lp"] Dec 12 15:21:28 crc kubenswrapper[5123]: I1212 15:21:28.328922 5123 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29425875-lxsft"] Dec 12 15:21:28 crc kubenswrapper[5123]: I1212 15:21:28.328946 5123 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-server-pxgwd"] Dec 12 15:21:28 crc kubenswrapper[5123]: I1212 15:21:28.328972 5123 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-hkjk6" Dec 12 15:21:28 crc kubenswrapper[5123]: I1212 15:21:28.332591 5123 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-g9nc4"] Dec 12 15:21:28 crc kubenswrapper[5123]: I1212 15:21:28.332767 5123 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-pxgwd" Dec 12 15:21:28 crc kubenswrapper[5123]: I1212 15:21:28.344157 5123 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver\"/\"image-import-ca\"" Dec 12 15:21:28 crc kubenswrapper[5123]: I1212 15:21:28.347444 5123 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-kr28r"] Dec 12 15:21:28 crc kubenswrapper[5123]: I1212 15:21:28.347488 5123 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-dvgzb"] Dec 12 15:21:28 crc kubenswrapper[5123]: I1212 15:21:28.347503 5123 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-65b6cccf98-t4m4d"] Dec 12 15:21:28 crc kubenswrapper[5123]: I1212 15:21:28.347518 5123 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-64d44f6ddf-96rdx"] Dec 12 15:21:28 crc kubenswrapper[5123]: I1212 15:21:28.347530 5123 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-67c89758df-vqqzf"] Dec 12 15:21:28 crc kubenswrapper[5123]: I1212 15:21:28.347547 5123 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-747b44746d-xhd9t"] Dec 12 15:21:28 crc kubenswrapper[5123]: I1212 15:21:28.347569 5123 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-6b9cb4dbcf-bbdv4"] Dec 12 15:21:28 crc kubenswrapper[5123]: I1212 15:21:28.347591 5123 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-9qnbt"] Dec 12 15:21:28 crc kubenswrapper[5123]: I1212 15:21:28.347608 5123 kubelet.go:2544] "SyncLoop UPDATE" source="api" 
pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-dtw8v"] Dec 12 15:21:28 crc kubenswrapper[5123]: I1212 15:21:28.347625 5123 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-g9nc4"] Dec 12 15:21:28 crc kubenswrapper[5123]: I1212 15:21:28.347639 5123 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-4hvhp"] Dec 12 15:21:28 crc kubenswrapper[5123]: I1212 15:21:28.347655 5123 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-8596bd845d-qvmj6"] Dec 12 15:21:28 crc kubenswrapper[5123]: I1212 15:21:28.347668 5123 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-86c45576b9-76wqd"] Dec 12 15:21:28 crc kubenswrapper[5123]: I1212 15:21:28.347683 5123 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-66458b6674-cqp44"] Dec 12 15:21:28 crc kubenswrapper[5123]: I1212 15:21:28.347697 5123 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-qpz79"] Dec 12 15:21:28 crc kubenswrapper[5123]: I1212 15:21:28.347713 5123 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-lx5l5"] Dec 12 15:21:28 crc kubenswrapper[5123]: I1212 15:21:28.347730 5123 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-dns/dns-default-jd7j9"] Dec 12 15:21:28 crc kubenswrapper[5123]: I1212 15:21:28.347778 5123 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-g9nc4" Dec 12 15:21:28 crc kubenswrapper[5123]: I1212 15:21:28.354812 5123 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-776cdc94d6-dc699"] Dec 12 15:21:28 crc kubenswrapper[5123]: I1212 15:21:28.354878 5123 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-dtknn"] Dec 12 15:21:28 crc kubenswrapper[5123]: I1212 15:21:28.354997 5123 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-799b87ffcd-kvxss"] Dec 12 15:21:28 crc kubenswrapper[5123]: I1212 15:21:28.355021 5123 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-pj4ts"] Dec 12 15:21:28 crc kubenswrapper[5123]: I1212 15:21:28.355034 5123 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-74545575db-qpxdh"] Dec 12 15:21:28 crc kubenswrapper[5123]: I1212 15:21:28.355045 5123 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-f9cdd68f7-t8xgq"] Dec 12 15:21:28 crc kubenswrapper[5123]: I1212 15:21:28.355056 5123 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-5777786469-9j9pt"] Dec 12 15:21:28 crc kubenswrapper[5123]: I1212 15:21:28.355067 5123 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-75ffdb6fcd-tmds4"] Dec 12 15:21:28 crc kubenswrapper[5123]: I1212 15:21:28.355077 5123 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-hznms"] Dec 12 15:21:28 crc kubenswrapper[5123]: I1212 15:21:28.355088 5123 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-jd7j9"] Dec 12 15:21:28 crc 
kubenswrapper[5123]: I1212 15:21:28.355098 5123 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-9ddfb9f55-bmckw"] Dec 12 15:21:28 crc kubenswrapper[5123]: I1212 15:21:28.355113 5123 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-547dbd544d-rkcvb"] Dec 12 15:21:28 crc kubenswrapper[5123]: I1212 15:21:28.355130 5123 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66587d64c8-ts2mt"] Dec 12 15:21:28 crc kubenswrapper[5123]: I1212 15:21:28.355142 5123 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-7f5c659b84-nd2rm"] Dec 12 15:21:28 crc kubenswrapper[5123]: I1212 15:21:28.355152 5123 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-hkjk6"] Dec 12 15:21:28 crc kubenswrapper[5123]: I1212 15:21:28.355148 5123 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/dns-default-jd7j9" Dec 12 15:21:28 crc kubenswrapper[5123]: I1212 15:21:28.355164 5123 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-5b9c976747-9t6q7"] Dec 12 15:21:28 crc kubenswrapper[5123]: I1212 15:21:28.355338 5123 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-c6l4m"] Dec 12 15:21:28 crc kubenswrapper[5123]: I1212 15:21:28.355350 5123 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-69db94689b-gg4kh"] Dec 12 15:21:28 crc kubenswrapper[5123]: I1212 15:21:28.355363 5123 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-67c9d58cbb-mvm2v"] Dec 12 15:21:28 crc kubenswrapper[5123]: I1212 15:21:28.362007 5123 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-route-controller-manager\"/\"serving-cert\"" Dec 12 15:21:28 crc kubenswrapper[5123]: I1212 15:21:28.381383 5123 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-route-controller-manager\"/\"route-controller-manager-sa-dockercfg-mmcpt\"" Dec 12 15:21:28 crc kubenswrapper[5123]: I1212 15:21:28.387713 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-dr8zb\" (UniqueName: \"kubernetes.io/projected/45c4bae4-fd5a-46dc-b8ea-0915b2c5789e-kube-api-access-dr8zb\") pod \"openshift-controller-manager-operator-686468bdd5-qpz79\" (UID: \"45c4bae4-fd5a-46dc-b8ea-0915b2c5789e\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-qpz79" Dec 12 15:21:28 crc kubenswrapper[5123]: I1212 15:21:28.387750 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/e077c741-1ed0-4ffa-80a7-6ce54aab5fe0-etcd-client\") pod 
\"apiserver-9ddfb9f55-bmckw\" (UID: \"e077c741-1ed0-4ffa-80a7-6ce54aab5fe0\") " pod="openshift-apiserver/apiserver-9ddfb9f55-bmckw" Dec 12 15:21:28 crc kubenswrapper[5123]: I1212 15:21:28.387774 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/36a3f9f0-c1ab-4411-a9e8-7795ab55a6e9-config\") pod \"machine-approver-54c688565-4b7jt\" (UID: \"36a3f9f0-c1ab-4411-a9e8-7795ab55a6e9\") " pod="openshift-cluster-machine-approver/machine-approver-54c688565-4b7jt" Dec 12 15:21:28 crc kubenswrapper[5123]: I1212 15:21:28.387798 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e077c741-1ed0-4ffa-80a7-6ce54aab5fe0-config\") pod \"apiserver-9ddfb9f55-bmckw\" (UID: \"e077c741-1ed0-4ffa-80a7-6ce54aab5fe0\") " pod="openshift-apiserver/apiserver-9ddfb9f55-bmckw" Dec 12 15:21:28 crc kubenswrapper[5123]: I1212 15:21:28.387825 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9f45ad41-b75a-4549-a242-88e737cb7698-config\") pod \"kube-storage-version-migrator-operator-565b79b866-4hvhp\" (UID: \"9f45ad41-b75a-4549-a242-88e737cb7698\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-4hvhp" Dec 12 15:21:28 crc kubenswrapper[5123]: I1212 15:21:28.387848 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/36a3f9f0-c1ab-4411-a9e8-7795ab55a6e9-machine-approver-tls\") pod \"machine-approver-54c688565-4b7jt\" (UID: \"36a3f9f0-c1ab-4411-a9e8-7795ab55a6e9\") " pod="openshift-cluster-machine-approver/machine-approver-54c688565-4b7jt" Dec 12 15:21:28 crc kubenswrapper[5123]: I1212 15:21:28.387904 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-4r5sc\" (UniqueName: 
\"kubernetes.io/projected/36a3f9f0-c1ab-4411-a9e8-7795ab55a6e9-kube-api-access-4r5sc\") pod \"machine-approver-54c688565-4b7jt\" (UID: \"36a3f9f0-c1ab-4411-a9e8-7795ab55a6e9\") " pod="openshift-cluster-machine-approver/machine-approver-54c688565-4b7jt" Dec 12 15:21:28 crc kubenswrapper[5123]: I1212 15:21:28.387927 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/e077c741-1ed0-4ffa-80a7-6ce54aab5fe0-audit-dir\") pod \"apiserver-9ddfb9f55-bmckw\" (UID: \"e077c741-1ed0-4ffa-80a7-6ce54aab5fe0\") " pod="openshift-apiserver/apiserver-9ddfb9f55-bmckw" Dec 12 15:21:28 crc kubenswrapper[5123]: I1212 15:21:28.387950 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/e077c741-1ed0-4ffa-80a7-6ce54aab5fe0-audit\") pod \"apiserver-9ddfb9f55-bmckw\" (UID: \"e077c741-1ed0-4ffa-80a7-6ce54aab5fe0\") " pod="openshift-apiserver/apiserver-9ddfb9f55-bmckw" Dec 12 15:21:28 crc kubenswrapper[5123]: I1212 15:21:28.387973 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e077c741-1ed0-4ffa-80a7-6ce54aab5fe0-serving-cert\") pod \"apiserver-9ddfb9f55-bmckw\" (UID: \"e077c741-1ed0-4ffa-80a7-6ce54aab5fe0\") " pod="openshift-apiserver/apiserver-9ddfb9f55-bmckw" Dec 12 15:21:28 crc kubenswrapper[5123]: I1212 15:21:28.388020 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/45c4bae4-fd5a-46dc-b8ea-0915b2c5789e-serving-cert\") pod \"openshift-controller-manager-operator-686468bdd5-qpz79\" (UID: \"45c4bae4-fd5a-46dc-b8ea-0915b2c5789e\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-qpz79" Dec 12 15:21:28 crc kubenswrapper[5123]: I1212 15:21:28.388041 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for 
volume \"kube-api-access-6bpvp\" (UniqueName: \"kubernetes.io/projected/e077c741-1ed0-4ffa-80a7-6ce54aab5fe0-kube-api-access-6bpvp\") pod \"apiserver-9ddfb9f55-bmckw\" (UID: \"e077c741-1ed0-4ffa-80a7-6ce54aab5fe0\") " pod="openshift-apiserver/apiserver-9ddfb9f55-bmckw" Dec 12 15:21:28 crc kubenswrapper[5123]: I1212 15:21:28.388067 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/e077c741-1ed0-4ffa-80a7-6ce54aab5fe0-etcd-serving-ca\") pod \"apiserver-9ddfb9f55-bmckw\" (UID: \"e077c741-1ed0-4ffa-80a7-6ce54aab5fe0\") " pod="openshift-apiserver/apiserver-9ddfb9f55-bmckw" Dec 12 15:21:28 crc kubenswrapper[5123]: I1212 15:21:28.388087 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/e077c741-1ed0-4ffa-80a7-6ce54aab5fe0-encryption-config\") pod \"apiserver-9ddfb9f55-bmckw\" (UID: \"e077c741-1ed0-4ffa-80a7-6ce54aab5fe0\") " pod="openshift-apiserver/apiserver-9ddfb9f55-bmckw" Dec 12 15:21:28 crc kubenswrapper[5123]: I1212 15:21:28.388459 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-jmqjb\" (UniqueName: \"kubernetes.io/projected/9f45ad41-b75a-4549-a242-88e737cb7698-kube-api-access-jmqjb\") pod \"kube-storage-version-migrator-operator-565b79b866-4hvhp\" (UID: \"9f45ad41-b75a-4549-a242-88e737cb7698\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-4hvhp" Dec 12 15:21:28 crc kubenswrapper[5123]: I1212 15:21:28.388471 5123 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/36a3f9f0-c1ab-4411-a9e8-7795ab55a6e9-config\") pod \"machine-approver-54c688565-4b7jt\" (UID: \"36a3f9f0-c1ab-4411-a9e8-7795ab55a6e9\") " pod="openshift-cluster-machine-approver/machine-approver-54c688565-4b7jt" Dec 12 15:21:28 crc kubenswrapper[5123]: 
I1212 15:21:28.388521 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/e077c741-1ed0-4ffa-80a7-6ce54aab5fe0-image-import-ca\") pod \"apiserver-9ddfb9f55-bmckw\" (UID: \"e077c741-1ed0-4ffa-80a7-6ce54aab5fe0\") " pod="openshift-apiserver/apiserver-9ddfb9f55-bmckw" Dec 12 15:21:28 crc kubenswrapper[5123]: I1212 15:21:28.388556 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9f45ad41-b75a-4549-a242-88e737cb7698-serving-cert\") pod \"kube-storage-version-migrator-operator-565b79b866-4hvhp\" (UID: \"9f45ad41-b75a-4549-a242-88e737cb7698\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-4hvhp" Dec 12 15:21:28 crc kubenswrapper[5123]: I1212 15:21:28.388633 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/36a3f9f0-c1ab-4411-a9e8-7795ab55a6e9-auth-proxy-config\") pod \"machine-approver-54c688565-4b7jt\" (UID: \"36a3f9f0-c1ab-4411-a9e8-7795ab55a6e9\") " pod="openshift-cluster-machine-approver/machine-approver-54c688565-4b7jt" Dec 12 15:21:28 crc kubenswrapper[5123]: I1212 15:21:28.388653 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/e077c741-1ed0-4ffa-80a7-6ce54aab5fe0-node-pullsecrets\") pod \"apiserver-9ddfb9f55-bmckw\" (UID: \"e077c741-1ed0-4ffa-80a7-6ce54aab5fe0\") " pod="openshift-apiserver/apiserver-9ddfb9f55-bmckw" Dec 12 15:21:28 crc kubenswrapper[5123]: I1212 15:21:28.388674 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/45c4bae4-fd5a-46dc-b8ea-0915b2c5789e-tmp\") pod \"openshift-controller-manager-operator-686468bdd5-qpz79\" (UID: \"45c4bae4-fd5a-46dc-b8ea-0915b2c5789e\") 
" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-qpz79" Dec 12 15:21:28 crc kubenswrapper[5123]: I1212 15:21:28.388721 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e077c741-1ed0-4ffa-80a7-6ce54aab5fe0-trusted-ca-bundle\") pod \"apiserver-9ddfb9f55-bmckw\" (UID: \"e077c741-1ed0-4ffa-80a7-6ce54aab5fe0\") " pod="openshift-apiserver/apiserver-9ddfb9f55-bmckw" Dec 12 15:21:28 crc kubenswrapper[5123]: I1212 15:21:28.388759 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/45c4bae4-fd5a-46dc-b8ea-0915b2c5789e-config\") pod \"openshift-controller-manager-operator-686468bdd5-qpz79\" (UID: \"45c4bae4-fd5a-46dc-b8ea-0915b2c5789e\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-qpz79" Dec 12 15:21:28 crc kubenswrapper[5123]: I1212 15:21:28.389032 5123 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e077c741-1ed0-4ffa-80a7-6ce54aab5fe0-config\") pod \"apiserver-9ddfb9f55-bmckw\" (UID: \"e077c741-1ed0-4ffa-80a7-6ce54aab5fe0\") " pod="openshift-apiserver/apiserver-9ddfb9f55-bmckw" Dec 12 15:21:28 crc kubenswrapper[5123]: I1212 15:21:28.389455 5123 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/e077c741-1ed0-4ffa-80a7-6ce54aab5fe0-node-pullsecrets\") pod \"apiserver-9ddfb9f55-bmckw\" (UID: \"e077c741-1ed0-4ffa-80a7-6ce54aab5fe0\") " pod="openshift-apiserver/apiserver-9ddfb9f55-bmckw" Dec 12 15:21:28 crc kubenswrapper[5123]: I1212 15:21:28.389690 5123 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/e077c741-1ed0-4ffa-80a7-6ce54aab5fe0-audit-dir\") pod \"apiserver-9ddfb9f55-bmckw\" (UID: 
\"e077c741-1ed0-4ffa-80a7-6ce54aab5fe0\") " pod="openshift-apiserver/apiserver-9ddfb9f55-bmckw" Dec 12 15:21:28 crc kubenswrapper[5123]: I1212 15:21:28.390278 5123 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/45c4bae4-fd5a-46dc-b8ea-0915b2c5789e-tmp\") pod \"openshift-controller-manager-operator-686468bdd5-qpz79\" (UID: \"45c4bae4-fd5a-46dc-b8ea-0915b2c5789e\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-qpz79" Dec 12 15:21:28 crc kubenswrapper[5123]: I1212 15:21:28.390580 5123 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/e077c741-1ed0-4ffa-80a7-6ce54aab5fe0-etcd-serving-ca\") pod \"apiserver-9ddfb9f55-bmckw\" (UID: \"e077c741-1ed0-4ffa-80a7-6ce54aab5fe0\") " pod="openshift-apiserver/apiserver-9ddfb9f55-bmckw" Dec 12 15:21:28 crc kubenswrapper[5123]: I1212 15:21:28.390984 5123 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/e077c741-1ed0-4ffa-80a7-6ce54aab5fe0-audit\") pod \"apiserver-9ddfb9f55-bmckw\" (UID: \"e077c741-1ed0-4ffa-80a7-6ce54aab5fe0\") " pod="openshift-apiserver/apiserver-9ddfb9f55-bmckw" Dec 12 15:21:28 crc kubenswrapper[5123]: I1212 15:21:28.391046 5123 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/e077c741-1ed0-4ffa-80a7-6ce54aab5fe0-image-import-ca\") pod \"apiserver-9ddfb9f55-bmckw\" (UID: \"e077c741-1ed0-4ffa-80a7-6ce54aab5fe0\") " pod="openshift-apiserver/apiserver-9ddfb9f55-bmckw" Dec 12 15:21:28 crc kubenswrapper[5123]: I1212 15:21:28.391383 5123 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/45c4bae4-fd5a-46dc-b8ea-0915b2c5789e-config\") pod \"openshift-controller-manager-operator-686468bdd5-qpz79\" (UID: 
\"45c4bae4-fd5a-46dc-b8ea-0915b2c5789e\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-qpz79" Dec 12 15:21:28 crc kubenswrapper[5123]: I1212 15:21:28.391738 5123 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/36a3f9f0-c1ab-4411-a9e8-7795ab55a6e9-auth-proxy-config\") pod \"machine-approver-54c688565-4b7jt\" (UID: \"36a3f9f0-c1ab-4411-a9e8-7795ab55a6e9\") " pod="openshift-cluster-machine-approver/machine-approver-54c688565-4b7jt" Dec 12 15:21:28 crc kubenswrapper[5123]: I1212 15:21:28.391855 5123 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e077c741-1ed0-4ffa-80a7-6ce54aab5fe0-trusted-ca-bundle\") pod \"apiserver-9ddfb9f55-bmckw\" (UID: \"e077c741-1ed0-4ffa-80a7-6ce54aab5fe0\") " pod="openshift-apiserver/apiserver-9ddfb9f55-bmckw" Dec 12 15:21:28 crc kubenswrapper[5123]: I1212 15:21:28.393515 5123 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/e077c741-1ed0-4ffa-80a7-6ce54aab5fe0-encryption-config\") pod \"apiserver-9ddfb9f55-bmckw\" (UID: \"e077c741-1ed0-4ffa-80a7-6ce54aab5fe0\") " pod="openshift-apiserver/apiserver-9ddfb9f55-bmckw" Dec 12 15:21:28 crc kubenswrapper[5123]: I1212 15:21:28.393736 5123 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e077c741-1ed0-4ffa-80a7-6ce54aab5fe0-serving-cert\") pod \"apiserver-9ddfb9f55-bmckw\" (UID: \"e077c741-1ed0-4ffa-80a7-6ce54aab5fe0\") " pod="openshift-apiserver/apiserver-9ddfb9f55-bmckw" Dec 12 15:21:28 crc kubenswrapper[5123]: I1212 15:21:28.394401 5123 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/36a3f9f0-c1ab-4411-a9e8-7795ab55a6e9-machine-approver-tls\") pod 
\"machine-approver-54c688565-4b7jt\" (UID: \"36a3f9f0-c1ab-4411-a9e8-7795ab55a6e9\") " pod="openshift-cluster-machine-approver/machine-approver-54c688565-4b7jt" Dec 12 15:21:28 crc kubenswrapper[5123]: I1212 15:21:28.395289 5123 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/45c4bae4-fd5a-46dc-b8ea-0915b2c5789e-serving-cert\") pod \"openshift-controller-manager-operator-686468bdd5-qpz79\" (UID: \"45c4bae4-fd5a-46dc-b8ea-0915b2c5789e\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-qpz79" Dec 12 15:21:28 crc kubenswrapper[5123]: I1212 15:21:28.396144 5123 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/e077c741-1ed0-4ffa-80a7-6ce54aab5fe0-etcd-client\") pod \"apiserver-9ddfb9f55-bmckw\" (UID: \"e077c741-1ed0-4ffa-80a7-6ce54aab5fe0\") " pod="openshift-apiserver/apiserver-9ddfb9f55-bmckw" Dec 12 15:21:28 crc kubenswrapper[5123]: I1212 15:21:28.402661 5123 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-route-controller-manager\"/\"client-ca\"" Dec 12 15:21:28 crc kubenswrapper[5123]: I1212 15:21:28.424462 5123 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-route-controller-manager\"/\"config\"" Dec 12 15:21:28 crc kubenswrapper[5123]: I1212 15:21:28.442576 5123 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-route-controller-manager\"/\"openshift-service-ca.crt\"" Dec 12 15:21:28 crc kubenswrapper[5123]: I1212 15:21:28.461851 5123 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-route-controller-manager\"/\"kube-root-ca.crt\"" Dec 12 15:21:28 crc kubenswrapper[5123]: I1212 15:21:28.482366 5123 reflector.go:430] "Caches populated" type="*v1.Secret" 
reflector="object-\"openshift-image-registry\"/\"cluster-image-registry-operator-dockercfg-ntnd7\"" Dec 12 15:21:28 crc kubenswrapper[5123]: I1212 15:21:28.502384 5123 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"image-registry-operator-tls\"" Dec 12 15:21:28 crc kubenswrapper[5123]: I1212 15:21:28.522332 5123 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-storage-version-migrator\"/\"openshift-service-ca.crt\"" Dec 12 15:21:28 crc kubenswrapper[5123]: I1212 15:21:28.542079 5123 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-storage-version-migrator\"/\"kube-root-ca.crt\"" Dec 12 15:21:28 crc kubenswrapper[5123]: I1212 15:21:28.563251 5123 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-storage-version-migrator\"/\"kube-storage-version-migrator-sa-dockercfg-kknhg\"" Dec 12 15:21:28 crc kubenswrapper[5123]: I1212 15:21:28.582714 5123 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-oauth-apiserver\"/\"openshift-service-ca.crt\"" Dec 12 15:21:28 crc kubenswrapper[5123]: I1212 15:21:28.602041 5123 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-oauth-apiserver\"/\"etcd-client\"" Dec 12 15:21:28 crc kubenswrapper[5123]: I1212 15:21:28.623154 5123 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-oauth-apiserver\"/\"serving-cert\"" Dec 12 15:21:28 crc kubenswrapper[5123]: I1212 15:21:28.638891 5123 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Dec 12 15:21:28 crc kubenswrapper[5123]: I1212 15:21:28.639282 5123 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Dec 12 15:21:28 crc kubenswrapper[5123]: I1212 15:21:28.642195 5123 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-oauth-apiserver\"/\"encryption-config-1\"" Dec 12 15:21:28 crc kubenswrapper[5123]: I1212 15:21:28.661860 5123 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-oauth-apiserver\"/\"oauth-apiserver-sa-dockercfg-qqw4z\"" Dec 12 15:21:28 crc kubenswrapper[5123]: I1212 15:21:28.682740 5123 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-oauth-apiserver\"/\"audit-1\"" Dec 12 15:21:28 crc kubenswrapper[5123]: I1212 15:21:28.702693 5123 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-oauth-apiserver\"/\"etcd-serving-ca\"" Dec 12 15:21:28 crc kubenswrapper[5123]: I1212 15:21:28.722496 5123 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-oauth-apiserver\"/\"trusted-ca-bundle\"" Dec 12 15:21:28 crc kubenswrapper[5123]: I1212 15:21:28.742176 5123 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-oauth-apiserver\"/\"kube-root-ca.crt\"" Dec 12 15:21:28 crc kubenswrapper[5123]: I1212 15:21:28.801751 5123 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-storage-version-migrator-operator\"/\"serving-cert\"" Dec 12 15:21:28 crc kubenswrapper[5123]: I1212 15:21:28.803699 5123 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-storage-version-migrator-operator\"/\"kube-storage-version-migrator-operator-dockercfg-2h6bs\"" Dec 12 15:21:28 crc kubenswrapper[5123]: I1212 15:21:28.808360 5123 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-79nnl\" (UniqueName: 
\"kubernetes.io/projected/9368bb85-0c25-4d7d-884c-7ebea4cf3336-kube-api-access-79nnl\") pod \"machine-api-operator-755bb95488-68259\" (UID: \"9368bb85-0c25-4d7d-884c-7ebea4cf3336\") " pod="openshift-machine-api/machine-api-operator-755bb95488-68259" Dec 12 15:21:28 crc kubenswrapper[5123]: I1212 15:21:28.817976 5123 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9f45ad41-b75a-4549-a242-88e737cb7698-serving-cert\") pod \"kube-storage-version-migrator-operator-565b79b866-4hvhp\" (UID: \"9f45ad41-b75a-4549-a242-88e737cb7698\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-4hvhp" Dec 12 15:21:28 crc kubenswrapper[5123]: I1212 15:21:28.825268 5123 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-storage-version-migrator-operator\"/\"config\"" Dec 12 15:21:28 crc kubenswrapper[5123]: I1212 15:21:28.832571 5123 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9f45ad41-b75a-4549-a242-88e737cb7698-config\") pod \"kube-storage-version-migrator-operator-565b79b866-4hvhp\" (UID: \"9f45ad41-b75a-4549-a242-88e737cb7698\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-4hvhp" Dec 12 15:21:28 crc kubenswrapper[5123]: I1212 15:21:28.883873 5123 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-api/machine-api-operator-755bb95488-68259" Dec 12 15:21:28 crc kubenswrapper[5123]: I1212 15:21:28.980389 5123 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-storage-version-migrator-operator\"/\"openshift-service-ca.crt\"" Dec 12 15:21:28 crc kubenswrapper[5123]: I1212 15:21:28.981028 5123 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-storage-version-migrator-operator\"/\"kube-root-ca.crt\"" Dec 12 15:21:28 crc kubenswrapper[5123]: I1212 15:21:28.981215 5123 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-apiserver-operator\"/\"kube-apiserver-operator-serving-cert\"" Dec 12 15:21:28 crc kubenswrapper[5123]: I1212 15:21:28.981377 5123 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-apiserver-operator\"/\"kube-apiserver-operator-dockercfg-bf7fj\"" Dec 12 15:21:29 crc kubenswrapper[5123]: I1212 15:21:29.001140 5123 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-w4w7x\" (UniqueName: \"kubernetes.io/projected/1c109e0c-2708-45cf-8c8e-0489b41c9830-kube-api-access-w4w7x\") pod \"cluster-samples-operator-6b564684c8-dvgzb\" (UID: \"1c109e0c-2708-45cf-8c8e-0489b41c9830\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-dvgzb" Dec 12 15:21:29 crc kubenswrapper[5123]: I1212 15:21:29.027899 5123 request.go:752] "Waited before sending request" delay="1.02163382s" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver-operator/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&limit=500&resourceVersion=0" Dec 12 15:21:29 crc kubenswrapper[5123]: I1212 15:21:29.031901 5123 reflector.go:430] "Caches populated" type="*v1.ConfigMap" 
reflector="object-\"openshift-kube-apiserver-operator\"/\"kube-root-ca.crt\"" Dec 12 15:21:29 crc kubenswrapper[5123]: I1212 15:21:29.034550 5123 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-6htnd\" (UniqueName: \"kubernetes.io/projected/c4465de2-5e85-451d-a998-dcff71c6d37c-kube-api-access-6htnd\") pod \"oauth-openshift-66458b6674-cqp44\" (UID: \"c4465de2-5e85-451d-a998-dcff71c6d37c\") " pod="openshift-authentication/oauth-openshift-66458b6674-cqp44" Dec 12 15:21:29 crc kubenswrapper[5123]: I1212 15:21:29.036141 5123 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-2dm2w\" (UniqueName: \"kubernetes.io/projected/09107a60-87da-4e17-9cc0-6dce06396ab6-kube-api-access-2dm2w\") pod \"downloads-747b44746d-xhd9t\" (UID: \"09107a60-87da-4e17-9cc0-6dce06396ab6\") " pod="openshift-console/downloads-747b44746d-xhd9t" Dec 12 15:21:29 crc kubenswrapper[5123]: I1212 15:21:29.043149 5123 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-apiserver-operator\"/\"kube-apiserver-operator-config\"" Dec 12 15:21:29 crc kubenswrapper[5123]: I1212 15:21:29.080403 5123 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-rrnwb\" (UniqueName: \"kubernetes.io/projected/c1681a2f-153f-44c0-901e-e85b401d30ee-kube-api-access-rrnwb\") pod \"collect-profiles-29425875-lxsft\" (UID: \"c1681a2f-153f-44c0-901e-e85b401d30ee\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29425875-lxsft" Dec 12 15:21:29 crc kubenswrapper[5123]: I1212 15:21:29.082539 5123 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-scheduler-operator\"/\"openshift-kube-scheduler-operator-dockercfg-2wbn2\"" Dec 12 15:21:29 crc kubenswrapper[5123]: I1212 15:21:29.102524 5123 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-scheduler-operator\"/\"kube-root-ca.crt\"" Dec 12 15:21:29 crc 
kubenswrapper[5123]: I1212 15:21:29.126705 5123 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-scheduler-operator\"/\"kube-scheduler-operator-serving-cert\"" Dec 12 15:21:29 crc kubenswrapper[5123]: I1212 15:21:29.141025 5123 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-scheduler-operator\"/\"openshift-kube-scheduler-operator-config\"" Dec 12 15:21:29 crc kubenswrapper[5123]: I1212 15:21:29.163442 5123 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress-operator\"/\"openshift-service-ca.crt\"" Dec 12 15:21:29 crc kubenswrapper[5123]: I1212 15:21:29.181330 5123 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress-operator\"/\"ingress-operator-dockercfg-74nwh\"" Dec 12 15:21:29 crc kubenswrapper[5123]: I1212 15:21:29.202449 5123 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress-operator\"/\"metrics-tls\"" Dec 12 15:21:29 crc kubenswrapper[5123]: I1212 15:21:29.217310 5123 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-66458b6674-cqp44" Dec 12 15:21:29 crc kubenswrapper[5123]: I1212 15:21:29.229947 5123 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-dvgzb" Dec 12 15:21:29 crc kubenswrapper[5123]: I1212 15:21:29.231125 5123 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress-operator\"/\"trusted-ca\"" Dec 12 15:21:29 crc kubenswrapper[5123]: I1212 15:21:29.241902 5123 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/downloads-747b44746d-xhd9t" Dec 12 15:21:29 crc kubenswrapper[5123]: I1212 15:21:29.243345 5123 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress-operator\"/\"kube-root-ca.crt\"" Dec 12 15:21:29 crc kubenswrapper[5123]: I1212 15:21:29.266766 5123 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"package-server-manager-serving-cert\"" Dec 12 15:21:29 crc kubenswrapper[5123]: I1212 15:21:29.280538 5123 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29425875-lxsft" Dec 12 15:21:29 crc kubenswrapper[5123]: I1212 15:21:29.283631 5123 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"olm-operator-serviceaccount-dockercfg-4gqzj\"" Dec 12 15:21:29 crc kubenswrapper[5123]: I1212 15:21:29.304991 5123 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"machine-config-controller-dockercfg-xnj77\"" Dec 12 15:21:29 crc kubenswrapper[5123]: I1212 15:21:29.324146 5123 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"mcc-proxy-tls\"" Dec 12 15:21:29 crc kubenswrapper[5123]: I1212 15:21:29.347234 5123 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"packageserver-service-cert\"" Dec 12 15:21:29 crc kubenswrapper[5123]: I1212 15:21:29.363495 5123 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-api\"/\"control-plane-machine-set-operator-dockercfg-gnx66\"" Dec 12 15:21:29 crc kubenswrapper[5123]: I1212 15:21:29.485882 5123 reflector.go:430] "Caches populated" type="*v1.Secret" 
reflector="object-\"openshift-machine-api\"/\"control-plane-machine-set-operator-tls\"" Dec 12 15:21:29 crc kubenswrapper[5123]: I1212 15:21:29.487885 5123 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-marketplace\"/\"kube-root-ca.crt\"" Dec 12 15:21:29 crc kubenswrapper[5123]: I1212 15:21:29.492300 5123 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-marketplace\"/\"openshift-service-ca.crt\"" Dec 12 15:21:29 crc kubenswrapper[5123]: I1212 15:21:29.492829 5123 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"marketplace-operator-metrics\"" Dec 12 15:21:29 crc kubenswrapper[5123]: I1212 15:21:29.493207 5123 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"marketplace-operator-dockercfg-2cfkp\"" Dec 12 15:21:29 crc kubenswrapper[5123]: I1212 15:21:29.494982 5123 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"catalog-operator-serving-cert\"" Dec 12 15:21:29 crc kubenswrapper[5123]: I1212 15:21:29.834717 5123 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-marketplace\"/\"marketplace-trusted-ca\"" Dec 12 15:21:29 crc kubenswrapper[5123]: I1212 15:21:29.838991 5123 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"multus-ac-dockercfg-gj7jx\"" Dec 12 15:21:29 crc kubenswrapper[5123]: I1212 15:21:29.839123 5123 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"olm-operator-serving-cert\"" Dec 12 15:21:29 crc kubenswrapper[5123]: I1212 15:21:29.843520 5123 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-service-ca\"/\"openshift-service-ca.crt\"" Dec 12 15:21:29 crc kubenswrapper[5123]: I1212 15:21:29.843573 5123 reflector.go:430] 
"Caches populated" type="*v1.Secret" reflector="object-\"openshift-service-ca-operator\"/\"serving-cert\"" Dec 12 15:21:29 crc kubenswrapper[5123]: I1212 15:21:29.843519 5123 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"multus-admission-controller-secret\"" Dec 12 15:21:29 crc kubenswrapper[5123]: I1212 15:21:29.843832 5123 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-service-ca\"/\"signing-key\"" Dec 12 15:21:29 crc kubenswrapper[5123]: I1212 15:21:29.844014 5123 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-service-ca-operator\"/\"kube-root-ca.crt\"" Dec 12 15:21:29 crc kubenswrapper[5123]: I1212 15:21:29.844076 5123 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-service-ca-operator\"/\"service-ca-operator-config\"" Dec 12 15:21:29 crc kubenswrapper[5123]: I1212 15:21:29.844417 5123 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-service-ca-operator\"/\"openshift-service-ca.crt\"" Dec 12 15:21:29 crc kubenswrapper[5123]: I1212 15:21:29.844779 5123 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-service-ca-operator\"/\"service-ca-operator-dockercfg-bjqfd\"" Dec 12 15:21:29 crc kubenswrapper[5123]: I1212 15:21:29.866946 5123 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-8nx5j\" (UniqueName: \"kubernetes.io/projected/bf62556f-373c-41a0-96d4-8f431d629029-kube-api-access-8nx5j\") pod \"controller-manager-65b6cccf98-t4m4d\" (UID: \"bf62556f-373c-41a0-96d4-8f431d629029\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-t4m4d" Dec 12 15:21:29 crc kubenswrapper[5123]: I1212 15:21:29.914435 5123 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-4pxkl\" (UniqueName: 
\"kubernetes.io/projected/7e490e0b-11da-4093-bd3c-a328ebd6e304-kube-api-access-4pxkl\") pod \"dns-operator-799b87ffcd-kvxss\" (UID: \"7e490e0b-11da-4093-bd3c-a328ebd6e304\") " pod="openshift-dns-operator/dns-operator-799b87ffcd-kvxss" Dec 12 15:21:29 crc kubenswrapper[5123]: I1212 15:21:29.914952 5123 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-service-ca\"/\"kube-root-ca.crt\"" Dec 12 15:21:29 crc kubenswrapper[5123]: I1212 15:21:29.915664 5123 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-service-ca\"/\"service-ca-dockercfg-bgxvm\"" Dec 12 15:21:29 crc kubenswrapper[5123]: I1212 15:21:29.917816 5123 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-gndtv\" (UniqueName: \"kubernetes.io/projected/5357a0b5-86ce-437b-b973-0bc2be3f85fd-kube-api-access-gndtv\") pod \"openshift-apiserver-operator-846cbfc458-kr28r\" (UID: \"5357a0b5-86ce-437b-b973-0bc2be3f85fd\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-kr28r" Dec 12 15:21:29 crc kubenswrapper[5123]: I1212 15:21:29.924905 5123 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-hgj9w\" (UniqueName: \"kubernetes.io/projected/9da0a55f-2526-45cc-b820-1b31ce63745c-kube-api-access-hgj9w\") pod \"console-operator-67c89758df-vqqzf\" (UID: \"9da0a55f-2526-45cc-b820-1b31ce63745c\") " pod="openshift-console-operator/console-operator-67c89758df-vqqzf" Dec 12 15:21:29 crc kubenswrapper[5123]: I1212 15:21:29.925263 5123 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-config-operator\"/\"machine-config-operator-images\"" Dec 12 15:21:29 crc kubenswrapper[5123]: I1212 15:21:29.929066 5123 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-48s86\" (UniqueName: \"kubernetes.io/projected/2c1e4fb9-bde9-46df-8ac0-c0b457ca767f-kube-api-access-48s86\") pod 
\"openshift-config-operator-5777786469-9j9pt\" (UID: \"2c1e4fb9-bde9-46df-8ac0-c0b457ca767f\") " pod="openshift-config-operator/openshift-config-operator-5777786469-9j9pt" Dec 12 15:21:29 crc kubenswrapper[5123]: I1212 15:21:29.929517 5123 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-9rql7\" (UniqueName: \"kubernetes.io/projected/19fca7bf-f8d6-4e7c-b54d-e98292eb7efd-kube-api-access-9rql7\") pod \"etcd-operator-69b85846b6-7pgks\" (UID: \"19fca7bf-f8d6-4e7c-b54d-e98292eb7efd\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-7pgks" Dec 12 15:21:29 crc kubenswrapper[5123]: I1212 15:21:29.931848 5123 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-lj7bj\" (UniqueName: \"kubernetes.io/projected/7ff811e4-3864-456b-8e00-b9e2d1c49ed8-kube-api-access-lj7bj\") pod \"console-64d44f6ddf-96rdx\" (UID: \"7ff811e4-3864-456b-8e00-b9e2d1c49ed8\") " pod="openshift-console/console-64d44f6ddf-96rdx" Dec 12 15:21:29 crc kubenswrapper[5123]: I1212 15:21:29.924072 5123 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-service-ca\"/\"signing-cabundle\"" Dec 12 15:21:30 crc kubenswrapper[5123]: I1212 15:21:30.230649 5123 request.go:752] "Waited before sending request" delay="1.875038256s" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://api-int.crc.testing:6443/api/v1/namespaces/openshift-dns/configmaps?fieldSelector=metadata.name%3Ddns-default&limit=500&resourceVersion=0" Dec 12 15:21:30 crc kubenswrapper[5123]: I1212 15:21:30.231005 5123 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-69b85846b6-7pgks" Dec 12 15:21:30 crc kubenswrapper[5123]: I1212 15:21:30.232244 5123 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-65b6cccf98-t4m4d" Dec 12 15:21:30 crc kubenswrapper[5123]: I1212 15:21:30.232585 5123 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-kr28r" Dec 12 15:21:30 crc kubenswrapper[5123]: I1212 15:21:30.233473 5123 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"mco-proxy-tls\"" Dec 12 15:21:30 crc kubenswrapper[5123]: I1212 15:21:30.234047 5123 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-64d44f6ddf-96rdx" Dec 12 15:21:30 crc kubenswrapper[5123]: I1212 15:21:30.234781 5123 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-5777786469-9j9pt" Dec 12 15:21:30 crc kubenswrapper[5123]: I1212 15:21:30.235463 5123 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-799b87ffcd-kvxss" Dec 12 15:21:30 crc kubenswrapper[5123]: I1212 15:21:30.236140 5123 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"hostpath-provisioner\"/\"openshift-service-ca.crt\"" Dec 12 15:21:30 crc kubenswrapper[5123]: I1212 15:21:30.236925 5123 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console-operator/console-operator-67c89758df-vqqzf" Dec 12 15:21:30 crc kubenswrapper[5123]: I1212 15:21:30.237614 5123 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-dns\"/\"dns-default\"" Dec 12 15:21:30 crc kubenswrapper[5123]: I1212 15:21:30.238476 5123 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"cni-sysctl-allowlist\"" Dec 12 15:21:30 crc kubenswrapper[5123]: I1212 15:21:30.239432 5123 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"machine-config-server-tls\"" Dec 12 15:21:30 crc kubenswrapper[5123]: I1212 15:21:30.240046 5123 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-dns\"/\"dns-dockercfg-kpvmz\"" Dec 12 15:21:30 crc kubenswrapper[5123]: I1212 15:21:30.240805 5123 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"hostpath-provisioner\"/\"csi-hostpath-provisioner-sa-dockercfg-7dcws\"" Dec 12 15:21:30 crc kubenswrapper[5123]: I1212 15:21:30.241092 5123 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress-canary\"/\"openshift-service-ca.crt\"" Dec 12 15:21:30 crc kubenswrapper[5123]: I1212 15:21:30.242441 5123 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"machine-config-operator-dockercfg-sw6nc\"" Dec 12 15:21:30 crc kubenswrapper[5123]: I1212 15:21:30.242821 5123 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress-canary\"/\"canary-serving-cert\"" Dec 12 15:21:30 crc kubenswrapper[5123]: I1212 15:21:30.243054 5123 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress-canary\"/\"kube-root-ca.crt\"" Dec 12 15:21:30 crc kubenswrapper[5123]: I1212 15:21:30.243267 5123 reflector.go:430] "Caches populated" 
type="*v1.Secret" reflector="object-\"openshift-ingress-canary\"/\"default-dockercfg-9pgs7\"" Dec 12 15:21:30 crc kubenswrapper[5123]: I1212 15:21:30.360167 5123 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"machine-config-server-dockercfg-dzw6b\"" Dec 12 15:21:30 crc kubenswrapper[5123]: I1212 15:21:30.360601 5123 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"hostpath-provisioner\"/\"kube-root-ca.crt\"" Dec 12 15:21:30 crc kubenswrapper[5123]: I1212 15:21:30.360826 5123 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"node-bootstrapper-token\"" Dec 12 15:21:30 crc kubenswrapper[5123]: I1212 15:21:30.361237 5123 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-dns\"/\"dns-default-metrics-tls\"" Dec 12 15:21:30 crc kubenswrapper[5123]: I1212 15:21:30.392998 5123 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-console\"/\"networking-console-plugin\"" Dec 12 15:21:30 crc kubenswrapper[5123]: I1212 15:21:30.393416 5123 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-network-console\"/\"networking-console-plugin-cert\"" Dec 12 15:21:30 crc kubenswrapper[5123]: I1212 15:21:30.414590 5123 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-jmqjb\" (UniqueName: \"kubernetes.io/projected/9f45ad41-b75a-4549-a242-88e737cb7698-kube-api-access-jmqjb\") pod \"kube-storage-version-migrator-operator-565b79b866-4hvhp\" (UID: \"9f45ad41-b75a-4549-a242-88e737cb7698\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-4hvhp" Dec 12 15:21:30 crc kubenswrapper[5123]: I1212 15:21:30.415314 5123 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-4r5sc\" (UniqueName: 
\"kubernetes.io/projected/36a3f9f0-c1ab-4411-a9e8-7795ab55a6e9-kube-api-access-4r5sc\") pod \"machine-approver-54c688565-4b7jt\" (UID: \"36a3f9f0-c1ab-4411-a9e8-7795ab55a6e9\") " pod="openshift-cluster-machine-approver/machine-approver-54c688565-4b7jt" Dec 12 15:21:30 crc kubenswrapper[5123]: I1212 15:21:30.419760 5123 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-dr8zb\" (UniqueName: \"kubernetes.io/projected/45c4bae4-fd5a-46dc-b8ea-0915b2c5789e-kube-api-access-dr8zb\") pod \"openshift-controller-manager-operator-686468bdd5-qpz79\" (UID: \"45c4bae4-fd5a-46dc-b8ea-0915b2c5789e\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-qpz79" Dec 12 15:21:30 crc kubenswrapper[5123]: I1212 15:21:30.454681 5123 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-6bpvp\" (UniqueName: \"kubernetes.io/projected/e077c741-1ed0-4ffa-80a7-6ce54aab5fe0-kube-api-access-6bpvp\") pod \"apiserver-9ddfb9f55-bmckw\" (UID: \"e077c741-1ed0-4ffa-80a7-6ce54aab5fe0\") " pod="openshift-apiserver/apiserver-9ddfb9f55-bmckw" Dec 12 15:21:30 crc kubenswrapper[5123]: I1212 15:21:30.489945 5123 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-qpz79" Dec 12 15:21:30 crc kubenswrapper[5123]: I1212 15:21:30.490634 5123 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-54c688565-4b7jt" Dec 12 15:21:30 crc kubenswrapper[5123]: I1212 15:21:30.490654 5123 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/735555bc-661a-4a48-a615-c88944194992-etcd-client\") pod \"apiserver-8596bd845d-qvmj6\" (UID: \"735555bc-661a-4a48-a615-c88944194992\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-qvmj6" Dec 12 15:21:30 crc kubenswrapper[5123]: I1212 15:21:30.490740 5123 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/dd669a9c-af5d-4084-bda4-81a455d4c281-default-certificate\") pod \"router-default-68cf44c8b8-cnq9c\" (UID: \"dd669a9c-af5d-4084-bda4-81a455d4c281\") " pod="openshift-ingress/router-default-68cf44c8b8-cnq9c" Dec 12 15:21:30 crc kubenswrapper[5123]: I1212 15:21:30.490762 5123 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/286dff49-96d3-4c06-aa40-a4168098880e-tmp-dir\") pod \"kube-apiserver-operator-575994946d-lx5l5\" (UID: \"286dff49-96d3-4c06-aa40-a4168098880e\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-lx5l5" Dec 12 15:21:30 crc kubenswrapper[5123]: I1212 15:21:30.490836 5123 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/40d826be-27f2-49a2-afa5-a6a9cc8f9bf6-registry-tls\") pod \"image-registry-66587d64c8-ts2mt\" (UID: \"40d826be-27f2-49a2-afa5-a6a9cc8f9bf6\") " pod="openshift-image-registry/image-registry-66587d64c8-ts2mt" Dec 12 15:21:30 crc kubenswrapper[5123]: I1212 15:21:30.490860 5123 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5rfr9\" 
(UniqueName: \"kubernetes.io/projected/5254d27a-3c04-4921-b5e9-272cc901663d-kube-api-access-5rfr9\") pod \"authentication-operator-7f5c659b84-nd2rm\" (UID: \"5254d27a-3c04-4921-b5e9-272cc901663d\") " pod="openshift-authentication-operator/authentication-operator-7f5c659b84-nd2rm" Dec 12 15:21:30 crc kubenswrapper[5123]: I1212 15:21:30.490909 5123 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/40d826be-27f2-49a2-afa5-a6a9cc8f9bf6-trusted-ca\") pod \"image-registry-66587d64c8-ts2mt\" (UID: \"40d826be-27f2-49a2-afa5-a6a9cc8f9bf6\") " pod="openshift-image-registry/image-registry-66587d64c8-ts2mt" Dec 12 15:21:30 crc kubenswrapper[5123]: I1212 15:21:30.490930 5123 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/735555bc-661a-4a48-a615-c88944194992-audit-policies\") pod \"apiserver-8596bd845d-qvmj6\" (UID: \"735555bc-661a-4a48-a615-c88944194992\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-qvmj6" Dec 12 15:21:30 crc kubenswrapper[5123]: I1212 15:21:30.491007 5123 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a920b381-c5d3-4a28-92dc-c092a8ffeb69-config\") pod \"kube-controller-manager-operator-69d5f845f8-dtw8v\" (UID: \"a920b381-c5d3-4a28-92dc-c092a8ffeb69\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-dtw8v" Dec 12 15:21:30 crc kubenswrapper[5123]: I1212 15:21:30.491026 5123 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5c5hw\" (UniqueName: \"kubernetes.io/projected/ae911826-fe03-4967-bdf1-f1eb5fc10ea4-kube-api-access-5c5hw\") pod \"migrator-866fcbc849-t68lp\" (UID: \"ae911826-fe03-4967-bdf1-f1eb5fc10ea4\") " 
pod="openshift-kube-storage-version-migrator/migrator-866fcbc849-t68lp" Dec 12 15:21:30 crc kubenswrapper[5123]: I1212 15:21:30.491070 5123 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5ccaedd0-63de-4f5b-9106-b556e01fa2b8-config\") pod \"route-controller-manager-776cdc94d6-dc699\" (UID: \"5ccaedd0-63de-4f5b-9106-b556e01fa2b8\") " pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-dc699" Dec 12 15:21:30 crc kubenswrapper[5123]: I1212 15:21:30.491094 5123 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jn9f4\" (UniqueName: \"kubernetes.io/projected/dd669a9c-af5d-4084-bda4-81a455d4c281-kube-api-access-jn9f4\") pod \"router-default-68cf44c8b8-cnq9c\" (UID: \"dd669a9c-af5d-4084-bda4-81a455d4c281\") " pod="openshift-ingress/router-default-68cf44c8b8-cnq9c" Dec 12 15:21:30 crc kubenswrapper[5123]: I1212 15:21:30.491178 5123 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5254d27a-3c04-4921-b5e9-272cc901663d-service-ca-bundle\") pod \"authentication-operator-7f5c659b84-nd2rm\" (UID: \"5254d27a-3c04-4921-b5e9-272cc901663d\") " pod="openshift-authentication-operator/authentication-operator-7f5c659b84-nd2rm" Dec 12 15:21:30 crc kubenswrapper[5123]: I1212 15:21:30.491244 5123 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/dd669a9c-af5d-4084-bda4-81a455d4c281-service-ca-bundle\") pod \"router-default-68cf44c8b8-cnq9c\" (UID: \"dd669a9c-af5d-4084-bda4-81a455d4c281\") " pod="openshift-ingress/router-default-68cf44c8b8-cnq9c" Dec 12 15:21:30 crc kubenswrapper[5123]: I1212 15:21:30.491805 5123 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/ea13a1f7-48ed-40f9-b5d0-040f13d8f90e-trusted-ca\") pod \"cluster-image-registry-operator-86c45576b9-76wqd\" (UID: \"ea13a1f7-48ed-40f9-b5d0-040f13d8f90e\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-76wqd"
Dec 12 15:21:30 crc kubenswrapper[5123]: I1212 15:21:30.491873 5123 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/a920b381-c5d3-4a28-92dc-c092a8ffeb69-kube-api-access\") pod \"kube-controller-manager-operator-69d5f845f8-dtw8v\" (UID: \"a920b381-c5d3-4a28-92dc-c092a8ffeb69\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-dtw8v"
Dec 12 15:21:30 crc kubenswrapper[5123]: I1212 15:21:30.491904 5123 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/ea13a1f7-48ed-40f9-b5d0-040f13d8f90e-image-registry-operator-tls\") pod \"cluster-image-registry-operator-86c45576b9-76wqd\" (UID: \"ea13a1f7-48ed-40f9-b5d0-040f13d8f90e\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-76wqd"
Dec 12 15:21:30 crc kubenswrapper[5123]: I1212 15:21:30.491926 5123 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/735555bc-661a-4a48-a615-c88944194992-serving-cert\") pod \"apiserver-8596bd845d-qvmj6\" (UID: \"735555bc-661a-4a48-a615-c88944194992\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-qvmj6"
Dec 12 15:21:30 crc kubenswrapper[5123]: I1212 15:21:30.492191 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-ts2mt\" (UID: \"40d826be-27f2-49a2-afa5-a6a9cc8f9bf6\") " pod="openshift-image-registry/image-registry-66587d64c8-ts2mt"
Dec 12 15:21:30 crc kubenswrapper[5123]: E1212 15:21:30.492615 5123 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-12 15:21:30.992590107 +0000 UTC m=+119.802542618 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-ts2mt" (UID: "40d826be-27f2-49a2-afa5-a6a9cc8f9bf6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 12 15:21:30 crc kubenswrapper[5123]: I1212 15:21:30.493010 5123 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/5ccaedd0-63de-4f5b-9106-b556e01fa2b8-tmp\") pod \"route-controller-manager-776cdc94d6-dc699\" (UID: \"5ccaedd0-63de-4f5b-9106-b556e01fa2b8\") " pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-dc699"
Dec 12 15:21:30 crc kubenswrapper[5123]: I1212 15:21:30.493061 5123 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/40d826be-27f2-49a2-afa5-a6a9cc8f9bf6-installation-pull-secrets\") pod \"image-registry-66587d64c8-ts2mt\" (UID: \"40d826be-27f2-49a2-afa5-a6a9cc8f9bf6\") " pod="openshift-image-registry/image-registry-66587d64c8-ts2mt"
Dec 12 15:21:30 crc kubenswrapper[5123]: I1212 15:21:30.493092 5123 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/ea13a1f7-48ed-40f9-b5d0-040f13d8f90e-tmp\") pod \"cluster-image-registry-operator-86c45576b9-76wqd\" (UID: \"ea13a1f7-48ed-40f9-b5d0-040f13d8f90e\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-76wqd"
Dec 12 15:21:30 crc kubenswrapper[5123]: I1212 15:21:30.493124 5123 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/286dff49-96d3-4c06-aa40-a4168098880e-kube-api-access\") pod \"kube-apiserver-operator-575994946d-lx5l5\" (UID: \"286dff49-96d3-4c06-aa40-a4168098880e\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-lx5l5"
Dec 12 15:21:30 crc kubenswrapper[5123]: I1212 15:21:30.493142 5123 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5ccaedd0-63de-4f5b-9106-b556e01fa2b8-client-ca\") pod \"route-controller-manager-776cdc94d6-dc699\" (UID: \"5ccaedd0-63de-4f5b-9106-b556e01fa2b8\") " pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-dc699"
Dec 12 15:21:30 crc kubenswrapper[5123]: I1212 15:21:30.493195 5123 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/735555bc-661a-4a48-a615-c88944194992-trusted-ca-bundle\") pod \"apiserver-8596bd845d-qvmj6\" (UID: \"735555bc-661a-4a48-a615-c88944194992\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-qvmj6"
Dec 12 15:21:30 crc kubenswrapper[5123]: I1212 15:21:30.493329 5123 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted-pem\" (UniqueName: \"kubernetes.io/empty-dir/ea13a1f7-48ed-40f9-b5d0-040f13d8f90e-ca-trust-extracted-pem\") pod \"cluster-image-registry-operator-86c45576b9-76wqd\" (UID: \"ea13a1f7-48ed-40f9-b5d0-040f13d8f90e\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-76wqd"
Dec 12 15:21:30 crc kubenswrapper[5123]: I1212 15:21:30.493384 5123 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-75855\" (UniqueName: \"kubernetes.io/projected/5ccaedd0-63de-4f5b-9106-b556e01fa2b8-kube-api-access-75855\") pod \"route-controller-manager-776cdc94d6-dc699\" (UID: \"5ccaedd0-63de-4f5b-9106-b556e01fa2b8\") " pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-dc699"
Dec 12 15:21:30 crc kubenswrapper[5123]: I1212 15:21:30.493407 5123 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5254d27a-3c04-4921-b5e9-272cc901663d-serving-cert\") pod \"authentication-operator-7f5c659b84-nd2rm\" (UID: \"5254d27a-3c04-4921-b5e9-272cc901663d\") " pod="openshift-authentication-operator/authentication-operator-7f5c659b84-nd2rm"
Dec 12 15:21:30 crc kubenswrapper[5123]: I1212 15:21:30.493449 5123 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/40d826be-27f2-49a2-afa5-a6a9cc8f9bf6-registry-certificates\") pod \"image-registry-66587d64c8-ts2mt\" (UID: \"40d826be-27f2-49a2-afa5-a6a9cc8f9bf6\") " pod="openshift-image-registry/image-registry-66587d64c8-ts2mt"
Dec 12 15:21:30 crc kubenswrapper[5123]: I1212 15:21:30.493472 5123 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/735555bc-661a-4a48-a615-c88944194992-audit-dir\") pod \"apiserver-8596bd845d-qvmj6\" (UID: \"735555bc-661a-4a48-a615-c88944194992\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-qvmj6"
Dec 12 15:21:30 crc kubenswrapper[5123]: I1212 15:21:30.493515 5123 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jrbbb\" (UniqueName: \"kubernetes.io/projected/735555bc-661a-4a48-a615-c88944194992-kube-api-access-jrbbb\") pod \"apiserver-8596bd845d-qvmj6\" (UID: \"735555bc-661a-4a48-a615-c88944194992\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-qvmj6"
Dec 12 15:21:30 crc kubenswrapper[5123]: I1212 15:21:30.493563 5123 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a920b381-c5d3-4a28-92dc-c092a8ffeb69-serving-cert\") pod \"kube-controller-manager-operator-69d5f845f8-dtw8v\" (UID: \"a920b381-c5d3-4a28-92dc-c092a8ffeb69\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-dtw8v"
Dec 12 15:21:30 crc kubenswrapper[5123]: I1212 15:21:30.493601 5123 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/286dff49-96d3-4c06-aa40-a4168098880e-serving-cert\") pod \"kube-apiserver-operator-575994946d-lx5l5\" (UID: \"286dff49-96d3-4c06-aa40-a4168098880e\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-lx5l5"
Dec 12 15:21:30 crc kubenswrapper[5123]: I1212 15:21:30.493664 5123 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/a920b381-c5d3-4a28-92dc-c092a8ffeb69-tmp-dir\") pod \"kube-controller-manager-operator-69d5f845f8-dtw8v\" (UID: \"a920b381-c5d3-4a28-92dc-c092a8ffeb69\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-dtw8v"
Dec 12 15:21:30 crc kubenswrapper[5123]: I1212 15:21:30.493691 5123 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5254d27a-3c04-4921-b5e9-272cc901663d-config\") pod \"authentication-operator-7f5c659b84-nd2rm\" (UID: \"5254d27a-3c04-4921-b5e9-272cc901663d\") " pod="openshift-authentication-operator/authentication-operator-7f5c659b84-nd2rm"
Dec 12 15:21:30 crc kubenswrapper[5123]: I1212 15:21:30.493712 5123 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/286dff49-96d3-4c06-aa40-a4168098880e-config\") pod \"kube-apiserver-operator-575994946d-lx5l5\" (UID: \"286dff49-96d3-4c06-aa40-a4168098880e\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-lx5l5"
Dec 12 15:21:30 crc kubenswrapper[5123]: I1212 15:21:30.493743 5123 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/735555bc-661a-4a48-a615-c88944194992-encryption-config\") pod \"apiserver-8596bd845d-qvmj6\" (UID: \"735555bc-661a-4a48-a615-c88944194992\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-qvmj6"
Dec 12 15:21:30 crc kubenswrapper[5123]: I1212 15:21:30.493779 5123 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z4fqw\" (UniqueName: \"kubernetes.io/projected/ea13a1f7-48ed-40f9-b5d0-040f13d8f90e-kube-api-access-z4fqw\") pod \"cluster-image-registry-operator-86c45576b9-76wqd\" (UID: \"ea13a1f7-48ed-40f9-b5d0-040f13d8f90e\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-76wqd"
Dec 12 15:21:30 crc kubenswrapper[5123]: I1212 15:21:30.493800 5123 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5254d27a-3c04-4921-b5e9-272cc901663d-trusted-ca-bundle\") pod \"authentication-operator-7f5c659b84-nd2rm\" (UID: \"5254d27a-3c04-4921-b5e9-272cc901663d\") " pod="openshift-authentication-operator/authentication-operator-7f5c659b84-nd2rm"
Dec 12 15:21:30 crc kubenswrapper[5123]: I1212 15:21:30.493823 5123 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/735555bc-661a-4a48-a615-c88944194992-etcd-serving-ca\") pod \"apiserver-8596bd845d-qvmj6\" (UID: \"735555bc-661a-4a48-a615-c88944194992\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-qvmj6"
Dec 12 15:21:30 crc kubenswrapper[5123]: I1212 15:21:30.493846 5123 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/40d826be-27f2-49a2-afa5-a6a9cc8f9bf6-ca-trust-extracted\") pod \"image-registry-66587d64c8-ts2mt\" (UID: \"40d826be-27f2-49a2-afa5-a6a9cc8f9bf6\") " pod="openshift-image-registry/image-registry-66587d64c8-ts2mt"
Dec 12 15:21:30 crc kubenswrapper[5123]: I1212 15:21:30.493865 5123 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/40d826be-27f2-49a2-afa5-a6a9cc8f9bf6-bound-sa-token\") pod \"image-registry-66587d64c8-ts2mt\" (UID: \"40d826be-27f2-49a2-afa5-a6a9cc8f9bf6\") " pod="openshift-image-registry/image-registry-66587d64c8-ts2mt"
Dec 12 15:21:30 crc kubenswrapper[5123]: I1212 15:21:30.493902 5123 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/ea13a1f7-48ed-40f9-b5d0-040f13d8f90e-bound-sa-token\") pod \"cluster-image-registry-operator-86c45576b9-76wqd\" (UID: \"ea13a1f7-48ed-40f9-b5d0-040f13d8f90e\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-76wqd"
Dec 12 15:21:30 crc kubenswrapper[5123]: I1212 15:21:30.493930 5123 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hdwrs\" (UniqueName: \"kubernetes.io/projected/40d826be-27f2-49a2-afa5-a6a9cc8f9bf6-kube-api-access-hdwrs\") pod \"image-registry-66587d64c8-ts2mt\" (UID: \"40d826be-27f2-49a2-afa5-a6a9cc8f9bf6\") " pod="openshift-image-registry/image-registry-66587d64c8-ts2mt"
Dec 12 15:21:30 crc kubenswrapper[5123]: I1212 15:21:30.493955 5123 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/dd669a9c-af5d-4084-bda4-81a455d4c281-metrics-certs\") pod \"router-default-68cf44c8b8-cnq9c\" (UID: \"dd669a9c-af5d-4084-bda4-81a455d4c281\") " pod="openshift-ingress/router-default-68cf44c8b8-cnq9c"
Dec 12 15:21:30 crc kubenswrapper[5123]: I1212 15:21:30.494001 5123 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5ccaedd0-63de-4f5b-9106-b556e01fa2b8-serving-cert\") pod \"route-controller-manager-776cdc94d6-dc699\" (UID: \"5ccaedd0-63de-4f5b-9106-b556e01fa2b8\") " pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-dc699"
Dec 12 15:21:30 crc kubenswrapper[5123]: I1212 15:21:30.494028 5123 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/dd669a9c-af5d-4084-bda4-81a455d4c281-stats-auth\") pod \"router-default-68cf44c8b8-cnq9c\" (UID: \"dd669a9c-af5d-4084-bda4-81a455d4c281\") " pod="openshift-ingress/router-default-68cf44c8b8-cnq9c"
Dec 12 15:21:30 crc kubenswrapper[5123]: I1212 15:21:30.913087 5123 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-4hvhp"
Dec 12 15:21:30 crc kubenswrapper[5123]: I1212 15:21:30.913299 5123 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Dec 12 15:21:30 crc kubenswrapper[5123]: E1212 15:21:30.913406 5123 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-12 15:21:31.41337704 +0000 UTC m=+120.223329551 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 12 15:21:30 crc kubenswrapper[5123]: I1212 15:21:30.913777 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/735555bc-661a-4a48-a615-c88944194992-trusted-ca-bundle\") pod \"apiserver-8596bd845d-qvmj6\" (UID: \"735555bc-661a-4a48-a615-c88944194992\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-qvmj6"
Dec 12 15:21:30 crc kubenswrapper[5123]: I1212 15:21:30.913825 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ca-trust-extracted-pem\" (UniqueName: \"kubernetes.io/empty-dir/ea13a1f7-48ed-40f9-b5d0-040f13d8f90e-ca-trust-extracted-pem\") pod \"cluster-image-registry-operator-86c45576b9-76wqd\" (UID: \"ea13a1f7-48ed-40f9-b5d0-040f13d8f90e\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-76wqd"
Dec 12 15:21:30 crc kubenswrapper[5123]: I1212 15:21:30.913864 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-75855\" (UniqueName: \"kubernetes.io/projected/5ccaedd0-63de-4f5b-9106-b556e01fa2b8-kube-api-access-75855\") pod \"route-controller-manager-776cdc94d6-dc699\" (UID: \"5ccaedd0-63de-4f5b-9106-b556e01fa2b8\") " pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-dc699"
Dec 12 15:21:30 crc kubenswrapper[5123]: I1212 15:21:30.913884 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5254d27a-3c04-4921-b5e9-272cc901663d-serving-cert\") pod \"authentication-operator-7f5c659b84-nd2rm\" (UID: \"5254d27a-3c04-4921-b5e9-272cc901663d\") " pod="openshift-authentication-operator/authentication-operator-7f5c659b84-nd2rm"
Dec 12 15:21:30 crc kubenswrapper[5123]: I1212 15:21:30.913904 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/40d826be-27f2-49a2-afa5-a6a9cc8f9bf6-registry-certificates\") pod \"image-registry-66587d64c8-ts2mt\" (UID: \"40d826be-27f2-49a2-afa5-a6a9cc8f9bf6\") " pod="openshift-image-registry/image-registry-66587d64c8-ts2mt"
Dec 12 15:21:30 crc kubenswrapper[5123]: I1212 15:21:30.913919 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/735555bc-661a-4a48-a615-c88944194992-audit-dir\") pod \"apiserver-8596bd845d-qvmj6\" (UID: \"735555bc-661a-4a48-a615-c88944194992\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-qvmj6"
Dec 12 15:21:30 crc kubenswrapper[5123]: I1212 15:21:30.913939 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-jrbbb\" (UniqueName: \"kubernetes.io/projected/735555bc-661a-4a48-a615-c88944194992-kube-api-access-jrbbb\") pod \"apiserver-8596bd845d-qvmj6\" (UID: \"735555bc-661a-4a48-a615-c88944194992\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-qvmj6"
Dec 12 15:21:30 crc kubenswrapper[5123]: I1212 15:21:30.913960 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a920b381-c5d3-4a28-92dc-c092a8ffeb69-serving-cert\") pod \"kube-controller-manager-operator-69d5f845f8-dtw8v\" (UID: \"a920b381-c5d3-4a28-92dc-c092a8ffeb69\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-dtw8v"
Dec 12 15:21:30 crc kubenswrapper[5123]: I1212 15:21:30.913983 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/286dff49-96d3-4c06-aa40-a4168098880e-serving-cert\") pod \"kube-apiserver-operator-575994946d-lx5l5\" (UID: \"286dff49-96d3-4c06-aa40-a4168098880e\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-lx5l5"
Dec 12 15:21:30 crc kubenswrapper[5123]: I1212 15:21:30.914003 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/a920b381-c5d3-4a28-92dc-c092a8ffeb69-tmp-dir\") pod \"kube-controller-manager-operator-69d5f845f8-dtw8v\" (UID: \"a920b381-c5d3-4a28-92dc-c092a8ffeb69\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-dtw8v"
Dec 12 15:21:30 crc kubenswrapper[5123]: I1212 15:21:30.914023 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5254d27a-3c04-4921-b5e9-272cc901663d-config\") pod \"authentication-operator-7f5c659b84-nd2rm\" (UID: \"5254d27a-3c04-4921-b5e9-272cc901663d\") " pod="openshift-authentication-operator/authentication-operator-7f5c659b84-nd2rm"
Dec 12 15:21:30 crc kubenswrapper[5123]: I1212 15:21:30.914039 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/286dff49-96d3-4c06-aa40-a4168098880e-config\") pod \"kube-apiserver-operator-575994946d-lx5l5\" (UID: \"286dff49-96d3-4c06-aa40-a4168098880e\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-lx5l5"
Dec 12 15:21:30 crc kubenswrapper[5123]: I1212 15:21:30.914057 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/735555bc-661a-4a48-a615-c88944194992-encryption-config\") pod \"apiserver-8596bd845d-qvmj6\" (UID: \"735555bc-661a-4a48-a615-c88944194992\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-qvmj6"
Dec 12 15:21:30 crc kubenswrapper[5123]: I1212 15:21:30.914074 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-z4fqw\" (UniqueName: \"kubernetes.io/projected/ea13a1f7-48ed-40f9-b5d0-040f13d8f90e-kube-api-access-z4fqw\") pod \"cluster-image-registry-operator-86c45576b9-76wqd\" (UID: \"ea13a1f7-48ed-40f9-b5d0-040f13d8f90e\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-76wqd"
Dec 12 15:21:30 crc kubenswrapper[5123]: I1212 15:21:30.914090 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5254d27a-3c04-4921-b5e9-272cc901663d-trusted-ca-bundle\") pod \"authentication-operator-7f5c659b84-nd2rm\" (UID: \"5254d27a-3c04-4921-b5e9-272cc901663d\") " pod="openshift-authentication-operator/authentication-operator-7f5c659b84-nd2rm"
Dec 12 15:21:30 crc kubenswrapper[5123]: I1212 15:21:30.914122 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/735555bc-661a-4a48-a615-c88944194992-etcd-serving-ca\") pod \"apiserver-8596bd845d-qvmj6\" (UID: \"735555bc-661a-4a48-a615-c88944194992\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-qvmj6"
Dec 12 15:21:30 crc kubenswrapper[5123]: I1212 15:21:30.914138 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/40d826be-27f2-49a2-afa5-a6a9cc8f9bf6-ca-trust-extracted\") pod \"image-registry-66587d64c8-ts2mt\" (UID: \"40d826be-27f2-49a2-afa5-a6a9cc8f9bf6\") " pod="openshift-image-registry/image-registry-66587d64c8-ts2mt"
Dec 12 15:21:30 crc kubenswrapper[5123]: I1212 15:21:30.914154 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/40d826be-27f2-49a2-afa5-a6a9cc8f9bf6-bound-sa-token\") pod \"image-registry-66587d64c8-ts2mt\" (UID: \"40d826be-27f2-49a2-afa5-a6a9cc8f9bf6\") " pod="openshift-image-registry/image-registry-66587d64c8-ts2mt"
Dec 12 15:21:30 crc kubenswrapper[5123]: I1212 15:21:30.914257 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/ea13a1f7-48ed-40f9-b5d0-040f13d8f90e-bound-sa-token\") pod \"cluster-image-registry-operator-86c45576b9-76wqd\" (UID: \"ea13a1f7-48ed-40f9-b5d0-040f13d8f90e\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-76wqd"
Dec 12 15:21:30 crc kubenswrapper[5123]: I1212 15:21:30.914315 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-hdwrs\" (UniqueName: \"kubernetes.io/projected/40d826be-27f2-49a2-afa5-a6a9cc8f9bf6-kube-api-access-hdwrs\") pod \"image-registry-66587d64c8-ts2mt\" (UID: \"40d826be-27f2-49a2-afa5-a6a9cc8f9bf6\") " pod="openshift-image-registry/image-registry-66587d64c8-ts2mt"
Dec 12 15:21:30 crc kubenswrapper[5123]: I1212 15:21:30.914348 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/dd669a9c-af5d-4084-bda4-81a455d4c281-metrics-certs\") pod \"router-default-68cf44c8b8-cnq9c\" (UID: \"dd669a9c-af5d-4084-bda4-81a455d4c281\") " pod="openshift-ingress/router-default-68cf44c8b8-cnq9c"
Dec 12 15:21:30 crc kubenswrapper[5123]: I1212 15:21:30.915572 5123 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5254d27a-3c04-4921-b5e9-272cc901663d-trusted-ca-bundle\") pod \"authentication-operator-7f5c659b84-nd2rm\" (UID: \"5254d27a-3c04-4921-b5e9-272cc901663d\") " pod="openshift-authentication-operator/authentication-operator-7f5c659b84-nd2rm"
Dec 12 15:21:30 crc kubenswrapper[5123]: I1212 15:21:30.919887 5123 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/40d826be-27f2-49a2-afa5-a6a9cc8f9bf6-ca-trust-extracted\") pod \"image-registry-66587d64c8-ts2mt\" (UID: \"40d826be-27f2-49a2-afa5-a6a9cc8f9bf6\") " pod="openshift-image-registry/image-registry-66587d64c8-ts2mt"
Dec 12 15:21:30 crc kubenswrapper[5123]: I1212 15:21:30.920086 5123 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted-pem\" (UniqueName: \"kubernetes.io/empty-dir/ea13a1f7-48ed-40f9-b5d0-040f13d8f90e-ca-trust-extracted-pem\") pod \"cluster-image-registry-operator-86c45576b9-76wqd\" (UID: \"ea13a1f7-48ed-40f9-b5d0-040f13d8f90e\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-76wqd"
Dec 12 15:21:30 crc kubenswrapper[5123]: I1212 15:21:30.920180 5123 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/735555bc-661a-4a48-a615-c88944194992-audit-dir\") pod \"apiserver-8596bd845d-qvmj6\" (UID: \"735555bc-661a-4a48-a615-c88944194992\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-qvmj6"
Dec 12 15:21:30 crc kubenswrapper[5123]: I1212 15:21:30.920532 5123 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/735555bc-661a-4a48-a615-c88944194992-trusted-ca-bundle\") pod \"apiserver-8596bd845d-qvmj6\" (UID: \"735555bc-661a-4a48-a615-c88944194992\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-qvmj6"
Dec 12 15:21:30 crc kubenswrapper[5123]: I1212 15:21:30.921546 5123 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/40d826be-27f2-49a2-afa5-a6a9cc8f9bf6-registry-certificates\") pod \"image-registry-66587d64c8-ts2mt\" (UID: \"40d826be-27f2-49a2-afa5-a6a9cc8f9bf6\") " pod="openshift-image-registry/image-registry-66587d64c8-ts2mt"
Dec 12 15:21:30 crc kubenswrapper[5123]: I1212 15:21:30.922143 5123 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/a920b381-c5d3-4a28-92dc-c092a8ffeb69-tmp-dir\") pod \"kube-controller-manager-operator-69d5f845f8-dtw8v\" (UID: \"a920b381-c5d3-4a28-92dc-c092a8ffeb69\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-dtw8v"
Dec 12 15:21:30 crc kubenswrapper[5123]: I1212 15:21:30.922156 5123 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5254d27a-3c04-4921-b5e9-272cc901663d-config\") pod \"authentication-operator-7f5c659b84-nd2rm\" (UID: \"5254d27a-3c04-4921-b5e9-272cc901663d\") " pod="openshift-authentication-operator/authentication-operator-7f5c659b84-nd2rm"
Dec 12 15:21:30 crc kubenswrapper[5123]: I1212 15:21:30.923237 5123 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/286dff49-96d3-4c06-aa40-a4168098880e-config\") pod \"kube-apiserver-operator-575994946d-lx5l5\" (UID: \"286dff49-96d3-4c06-aa40-a4168098880e\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-lx5l5"
Dec 12 15:21:30 crc kubenswrapper[5123]: I1212 15:21:30.924213 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5ccaedd0-63de-4f5b-9106-b556e01fa2b8-serving-cert\") pod \"route-controller-manager-776cdc94d6-dc699\" (UID: \"5ccaedd0-63de-4f5b-9106-b556e01fa2b8\") " pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-dc699"
Dec 12 15:21:30 crc kubenswrapper[5123]: I1212 15:21:30.924594 5123 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/735555bc-661a-4a48-a615-c88944194992-etcd-serving-ca\") pod \"apiserver-8596bd845d-qvmj6\" (UID: \"735555bc-661a-4a48-a615-c88944194992\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-qvmj6"
Dec 12 15:21:30 crc kubenswrapper[5123]: I1212 15:21:30.924711 5123 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-9ddfb9f55-bmckw"
Dec 12 15:21:30 crc kubenswrapper[5123]: I1212 15:21:30.924709 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/dd669a9c-af5d-4084-bda4-81a455d4c281-stats-auth\") pod \"router-default-68cf44c8b8-cnq9c\" (UID: \"dd669a9c-af5d-4084-bda4-81a455d4c281\") " pod="openshift-ingress/router-default-68cf44c8b8-cnq9c"
Dec 12 15:21:30 crc kubenswrapper[5123]: I1212 15:21:30.925493 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/735555bc-661a-4a48-a615-c88944194992-etcd-client\") pod \"apiserver-8596bd845d-qvmj6\" (UID: \"735555bc-661a-4a48-a615-c88944194992\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-qvmj6"
Dec 12 15:21:30 crc kubenswrapper[5123]: I1212 15:21:30.927724 5123 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5ccaedd0-63de-4f5b-9106-b556e01fa2b8-serving-cert\") pod \"route-controller-manager-776cdc94d6-dc699\" (UID: \"5ccaedd0-63de-4f5b-9106-b556e01fa2b8\") " pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-dc699"
Dec 12 15:21:30 crc kubenswrapper[5123]: I1212 15:21:30.927789 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/dd669a9c-af5d-4084-bda4-81a455d4c281-default-certificate\") pod \"router-default-68cf44c8b8-cnq9c\" (UID: \"dd669a9c-af5d-4084-bda4-81a455d4c281\") " pod="openshift-ingress/router-default-68cf44c8b8-cnq9c"
Dec 12 15:21:30 crc kubenswrapper[5123]: I1212 15:21:30.927818 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/286dff49-96d3-4c06-aa40-a4168098880e-tmp-dir\") pod \"kube-apiserver-operator-575994946d-lx5l5\" (UID: \"286dff49-96d3-4c06-aa40-a4168098880e\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-lx5l5"
Dec 12 15:21:30 crc kubenswrapper[5123]: I1212 15:21:30.927878 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/40d826be-27f2-49a2-afa5-a6a9cc8f9bf6-registry-tls\") pod \"image-registry-66587d64c8-ts2mt\" (UID: \"40d826be-27f2-49a2-afa5-a6a9cc8f9bf6\") " pod="openshift-image-registry/image-registry-66587d64c8-ts2mt"
Dec 12 15:21:30 crc kubenswrapper[5123]: I1212 15:21:30.928019 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-5rfr9\" (UniqueName: \"kubernetes.io/projected/5254d27a-3c04-4921-b5e9-272cc901663d-kube-api-access-5rfr9\") pod \"authentication-operator-7f5c659b84-nd2rm\" (UID: \"5254d27a-3c04-4921-b5e9-272cc901663d\") " pod="openshift-authentication-operator/authentication-operator-7f5c659b84-nd2rm"
Dec 12 15:21:30 crc kubenswrapper[5123]: I1212 15:21:30.928073 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/40d826be-27f2-49a2-afa5-a6a9cc8f9bf6-trusted-ca\") pod \"image-registry-66587d64c8-ts2mt\" (UID: \"40d826be-27f2-49a2-afa5-a6a9cc8f9bf6\") " pod="openshift-image-registry/image-registry-66587d64c8-ts2mt"
Dec 12 15:21:30 crc kubenswrapper[5123]: I1212 15:21:30.928099 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/735555bc-661a-4a48-a615-c88944194992-audit-policies\") pod \"apiserver-8596bd845d-qvmj6\" (UID: \"735555bc-661a-4a48-a615-c88944194992\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-qvmj6"
Dec 12 15:21:30 crc kubenswrapper[5123]: I1212 15:21:30.928122 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a920b381-c5d3-4a28-92dc-c092a8ffeb69-config\") pod \"kube-controller-manager-operator-69d5f845f8-dtw8v\" (UID: \"a920b381-c5d3-4a28-92dc-c092a8ffeb69\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-dtw8v"
Dec 12 15:21:30 crc kubenswrapper[5123]: I1212 15:21:30.928145 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-5c5hw\" (UniqueName: \"kubernetes.io/projected/ae911826-fe03-4967-bdf1-f1eb5fc10ea4-kube-api-access-5c5hw\") pod \"migrator-866fcbc849-t68lp\" (UID: \"ae911826-fe03-4967-bdf1-f1eb5fc10ea4\") " pod="openshift-kube-storage-version-migrator/migrator-866fcbc849-t68lp"
Dec 12 15:21:30 crc kubenswrapper[5123]: I1212 15:21:30.928170 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5ccaedd0-63de-4f5b-9106-b556e01fa2b8-config\") pod \"route-controller-manager-776cdc94d6-dc699\" (UID: \"5ccaedd0-63de-4f5b-9106-b556e01fa2b8\") " pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-dc699"
Dec 12 15:21:30 crc kubenswrapper[5123]: I1212 15:21:30.928197 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-jn9f4\" (UniqueName: \"kubernetes.io/projected/dd669a9c-af5d-4084-bda4-81a455d4c281-kube-api-access-jn9f4\") pod \"router-default-68cf44c8b8-cnq9c\" (UID: \"dd669a9c-af5d-4084-bda4-81a455d4c281\") " pod="openshift-ingress/router-default-68cf44c8b8-cnq9c"
Dec 12 15:21:30 crc kubenswrapper[5123]: I1212 15:21:30.928290 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5254d27a-3c04-4921-b5e9-272cc901663d-service-ca-bundle\") pod \"authentication-operator-7f5c659b84-nd2rm\" (UID: \"5254d27a-3c04-4921-b5e9-272cc901663d\") " pod="openshift-authentication-operator/authentication-operator-7f5c659b84-nd2rm"
Dec 12 15:21:30 crc kubenswrapper[5123]: I1212 15:21:30.928316 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/dd669a9c-af5d-4084-bda4-81a455d4c281-service-ca-bundle\") pod \"router-default-68cf44c8b8-cnq9c\" (UID: \"dd669a9c-af5d-4084-bda4-81a455d4c281\") " pod="openshift-ingress/router-default-68cf44c8b8-cnq9c"
Dec 12 15:21:30 crc kubenswrapper[5123]: I1212 15:21:30.928353 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/ea13a1f7-48ed-40f9-b5d0-040f13d8f90e-trusted-ca\") pod \"cluster-image-registry-operator-86c45576b9-76wqd\" (UID: \"ea13a1f7-48ed-40f9-b5d0-040f13d8f90e\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-76wqd"
Dec 12 15:21:30 crc kubenswrapper[5123]: I1212 15:21:30.928381 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/a920b381-c5d3-4a28-92dc-c092a8ffeb69-kube-api-access\") pod \"kube-controller-manager-operator-69d5f845f8-dtw8v\" (UID: \"a920b381-c5d3-4a28-92dc-c092a8ffeb69\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-dtw8v"
Dec 12 15:21:30 crc kubenswrapper[5123]: I1212 15:21:30.928422 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/ea13a1f7-48ed-40f9-b5d0-040f13d8f90e-image-registry-operator-tls\") pod \"cluster-image-registry-operator-86c45576b9-76wqd\" (UID: \"ea13a1f7-48ed-40f9-b5d0-040f13d8f90e\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-76wqd"
Dec 12 15:21:30 crc kubenswrapper[5123]: I1212 15:21:30.928449 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/735555bc-661a-4a48-a615-c88944194992-serving-cert\") pod \"apiserver-8596bd845d-qvmj6\" (UID: \"735555bc-661a-4a48-a615-c88944194992\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-qvmj6"
Dec 12 15:21:30 crc kubenswrapper[5123]: I1212 15:21:30.928542 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-ts2mt\" (UID: \"40d826be-27f2-49a2-afa5-a6a9cc8f9bf6\") " pod="openshift-image-registry/image-registry-66587d64c8-ts2mt"
Dec 12 15:21:30 crc kubenswrapper[5123]: I1212 15:21:30.928605 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/5ccaedd0-63de-4f5b-9106-b556e01fa2b8-tmp\") pod \"route-controller-manager-776cdc94d6-dc699\" (UID: \"5ccaedd0-63de-4f5b-9106-b556e01fa2b8\") " pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-dc699"
Dec 12 15:21:30 crc kubenswrapper[5123]: I1212 15:21:30.928663 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/40d826be-27f2-49a2-afa5-a6a9cc8f9bf6-installation-pull-secrets\") pod \"image-registry-66587d64c8-ts2mt\" (UID: \"40d826be-27f2-49a2-afa5-a6a9cc8f9bf6\") " pod="openshift-image-registry/image-registry-66587d64c8-ts2mt"
Dec 12 15:21:30 crc kubenswrapper[5123]: I1212 15:21:30.928696 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/ea13a1f7-48ed-40f9-b5d0-040f13d8f90e-tmp\") pod \"cluster-image-registry-operator-86c45576b9-76wqd\" (UID: \"ea13a1f7-48ed-40f9-b5d0-040f13d8f90e\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-76wqd"
Dec 12 15:21:30 crc kubenswrapper[5123]: I1212
15:21:30.928727 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/286dff49-96d3-4c06-aa40-a4168098880e-kube-api-access\") pod \"kube-apiserver-operator-575994946d-lx5l5\" (UID: \"286dff49-96d3-4c06-aa40-a4168098880e\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-lx5l5" Dec 12 15:21:30 crc kubenswrapper[5123]: I1212 15:21:30.928751 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5ccaedd0-63de-4f5b-9106-b556e01fa2b8-client-ca\") pod \"route-controller-manager-776cdc94d6-dc699\" (UID: \"5ccaedd0-63de-4f5b-9106-b556e01fa2b8\") " pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-dc699" Dec 12 15:21:30 crc kubenswrapper[5123]: I1212 15:21:30.928805 5123 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/286dff49-96d3-4c06-aa40-a4168098880e-tmp-dir\") pod \"kube-apiserver-operator-575994946d-lx5l5\" (UID: \"286dff49-96d3-4c06-aa40-a4168098880e\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-lx5l5" Dec 12 15:21:30 crc kubenswrapper[5123]: I1212 15:21:30.930062 5123 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5ccaedd0-63de-4f5b-9106-b556e01fa2b8-client-ca\") pod \"route-controller-manager-776cdc94d6-dc699\" (UID: \"5ccaedd0-63de-4f5b-9106-b556e01fa2b8\") " pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-dc699" Dec 12 15:21:30 crc kubenswrapper[5123]: I1212 15:21:30.935977 5123 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/735555bc-661a-4a48-a615-c88944194992-etcd-client\") pod \"apiserver-8596bd845d-qvmj6\" (UID: \"735555bc-661a-4a48-a615-c88944194992\") " 
pod="openshift-oauth-apiserver/apiserver-8596bd845d-qvmj6" Dec 12 15:21:30 crc kubenswrapper[5123]: I1212 15:21:30.937608 5123 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/735555bc-661a-4a48-a615-c88944194992-audit-policies\") pod \"apiserver-8596bd845d-qvmj6\" (UID: \"735555bc-661a-4a48-a615-c88944194992\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-qvmj6" Dec 12 15:21:30 crc kubenswrapper[5123]: I1212 15:21:30.938561 5123 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a920b381-c5d3-4a28-92dc-c092a8ffeb69-config\") pod \"kube-controller-manager-operator-69d5f845f8-dtw8v\" (UID: \"a920b381-c5d3-4a28-92dc-c092a8ffeb69\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-dtw8v" Dec 12 15:21:30 crc kubenswrapper[5123]: I1212 15:21:30.939687 5123 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5ccaedd0-63de-4f5b-9106-b556e01fa2b8-config\") pod \"route-controller-manager-776cdc94d6-dc699\" (UID: \"5ccaedd0-63de-4f5b-9106-b556e01fa2b8\") " pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-dc699" Dec 12 15:21:30 crc kubenswrapper[5123]: I1212 15:21:30.940796 5123 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a920b381-c5d3-4a28-92dc-c092a8ffeb69-serving-cert\") pod \"kube-controller-manager-operator-69d5f845f8-dtw8v\" (UID: \"a920b381-c5d3-4a28-92dc-c092a8ffeb69\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-dtw8v" Dec 12 15:21:30 crc kubenswrapper[5123]: E1212 15:21:30.940984 5123 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" 
failed. No retries permitted until 2025-12-12 15:21:31.440964871 +0000 UTC m=+120.250917382 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-ts2mt" (UID: "40d826be-27f2-49a2-afa5-a6a9cc8f9bf6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 15:21:30 crc kubenswrapper[5123]: I1212 15:21:30.941237 5123 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5254d27a-3c04-4921-b5e9-272cc901663d-service-ca-bundle\") pod \"authentication-operator-7f5c659b84-nd2rm\" (UID: \"5254d27a-3c04-4921-b5e9-272cc901663d\") " pod="openshift-authentication-operator/authentication-operator-7f5c659b84-nd2rm" Dec 12 15:21:30 crc kubenswrapper[5123]: I1212 15:21:30.942321 5123 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/5ccaedd0-63de-4f5b-9106-b556e01fa2b8-tmp\") pod \"route-controller-manager-776cdc94d6-dc699\" (UID: \"5ccaedd0-63de-4f5b-9106-b556e01fa2b8\") " pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-dc699" Dec 12 15:21:30 crc kubenswrapper[5123]: I1212 15:21:30.943008 5123 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/dd669a9c-af5d-4084-bda4-81a455d4c281-service-ca-bundle\") pod \"router-default-68cf44c8b8-cnq9c\" (UID: \"dd669a9c-af5d-4084-bda4-81a455d4c281\") " pod="openshift-ingress/router-default-68cf44c8b8-cnq9c" Dec 12 15:21:30 crc kubenswrapper[5123]: I1212 15:21:30.943535 5123 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: 
\"kubernetes.io/configmap/ea13a1f7-48ed-40f9-b5d0-040f13d8f90e-trusted-ca\") pod \"cluster-image-registry-operator-86c45576b9-76wqd\" (UID: \"ea13a1f7-48ed-40f9-b5d0-040f13d8f90e\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-76wqd" Dec 12 15:21:30 crc kubenswrapper[5123]: I1212 15:21:30.946440 5123 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/ea13a1f7-48ed-40f9-b5d0-040f13d8f90e-tmp\") pod \"cluster-image-registry-operator-86c45576b9-76wqd\" (UID: \"ea13a1f7-48ed-40f9-b5d0-040f13d8f90e\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-76wqd" Dec 12 15:21:30 crc kubenswrapper[5123]: I1212 15:21:30.962714 5123 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/40d826be-27f2-49a2-afa5-a6a9cc8f9bf6-trusted-ca\") pod \"image-registry-66587d64c8-ts2mt\" (UID: \"40d826be-27f2-49a2-afa5-a6a9cc8f9bf6\") " pod="openshift-image-registry/image-registry-66587d64c8-ts2mt" Dec 12 15:21:30 crc kubenswrapper[5123]: I1212 15:21:30.965139 5123 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/735555bc-661a-4a48-a615-c88944194992-encryption-config\") pod \"apiserver-8596bd845d-qvmj6\" (UID: \"735555bc-661a-4a48-a615-c88944194992\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-qvmj6" Dec 12 15:21:30 crc kubenswrapper[5123]: I1212 15:21:30.976112 5123 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/dd669a9c-af5d-4084-bda4-81a455d4c281-stats-auth\") pod \"router-default-68cf44c8b8-cnq9c\" (UID: \"dd669a9c-af5d-4084-bda4-81a455d4c281\") " pod="openshift-ingress/router-default-68cf44c8b8-cnq9c" Dec 12 15:21:30 crc kubenswrapper[5123]: I1212 15:21:30.976459 5123 operation_generator.go:615] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-hdwrs\" (UniqueName: \"kubernetes.io/projected/40d826be-27f2-49a2-afa5-a6a9cc8f9bf6-kube-api-access-hdwrs\") pod \"image-registry-66587d64c8-ts2mt\" (UID: \"40d826be-27f2-49a2-afa5-a6a9cc8f9bf6\") " pod="openshift-image-registry/image-registry-66587d64c8-ts2mt" Dec 12 15:21:30 crc kubenswrapper[5123]: I1212 15:21:30.976594 5123 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-z4fqw\" (UniqueName: \"kubernetes.io/projected/ea13a1f7-48ed-40f9-b5d0-040f13d8f90e-kube-api-access-z4fqw\") pod \"cluster-image-registry-operator-86c45576b9-76wqd\" (UID: \"ea13a1f7-48ed-40f9-b5d0-040f13d8f90e\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-76wqd" Dec 12 15:21:30 crc kubenswrapper[5123]: I1212 15:21:30.976382 5123 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/dd669a9c-af5d-4084-bda4-81a455d4c281-default-certificate\") pod \"router-default-68cf44c8b8-cnq9c\" (UID: \"dd669a9c-af5d-4084-bda4-81a455d4c281\") " pod="openshift-ingress/router-default-68cf44c8b8-cnq9c" Dec 12 15:21:30 crc kubenswrapper[5123]: I1212 15:21:30.980197 5123 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/40d826be-27f2-49a2-afa5-a6a9cc8f9bf6-installation-pull-secrets\") pod \"image-registry-66587d64c8-ts2mt\" (UID: \"40d826be-27f2-49a2-afa5-a6a9cc8f9bf6\") " pod="openshift-image-registry/image-registry-66587d64c8-ts2mt" Dec 12 15:21:30 crc kubenswrapper[5123]: I1212 15:21:30.983939 5123 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-75855\" (UniqueName: \"kubernetes.io/projected/5ccaedd0-63de-4f5b-9106-b556e01fa2b8-kube-api-access-75855\") pod \"route-controller-manager-776cdc94d6-dc699\" (UID: \"5ccaedd0-63de-4f5b-9106-b556e01fa2b8\") " pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-dc699" 
Dec 12 15:21:31 crc kubenswrapper[5123]: I1212 15:21:31.001761 5123 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/40d826be-27f2-49a2-afa5-a6a9cc8f9bf6-bound-sa-token\") pod \"image-registry-66587d64c8-ts2mt\" (UID: \"40d826be-27f2-49a2-afa5-a6a9cc8f9bf6\") " pod="openshift-image-registry/image-registry-66587d64c8-ts2mt" Dec 12 15:21:31 crc kubenswrapper[5123]: I1212 15:21:31.005654 5123 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-5c5hw\" (UniqueName: \"kubernetes.io/projected/ae911826-fe03-4967-bdf1-f1eb5fc10ea4-kube-api-access-5c5hw\") pod \"migrator-866fcbc849-t68lp\" (UID: \"ae911826-fe03-4967-bdf1-f1eb5fc10ea4\") " pod="openshift-kube-storage-version-migrator/migrator-866fcbc849-t68lp" Dec 12 15:21:31 crc kubenswrapper[5123]: I1212 15:21:31.008707 5123 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5254d27a-3c04-4921-b5e9-272cc901663d-serving-cert\") pod \"authentication-operator-7f5c659b84-nd2rm\" (UID: \"5254d27a-3c04-4921-b5e9-272cc901663d\") " pod="openshift-authentication-operator/authentication-operator-7f5c659b84-nd2rm" Dec 12 15:21:31 crc kubenswrapper[5123]: I1212 15:21:31.008826 5123 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-5rfr9\" (UniqueName: \"kubernetes.io/projected/5254d27a-3c04-4921-b5e9-272cc901663d-kube-api-access-5rfr9\") pod \"authentication-operator-7f5c659b84-nd2rm\" (UID: \"5254d27a-3c04-4921-b5e9-272cc901663d\") " pod="openshift-authentication-operator/authentication-operator-7f5c659b84-nd2rm" Dec 12 15:21:31 crc kubenswrapper[5123]: I1212 15:21:31.010398 5123 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/286dff49-96d3-4c06-aa40-a4168098880e-kube-api-access\") pod \"kube-apiserver-operator-575994946d-lx5l5\" (UID: 
\"286dff49-96d3-4c06-aa40-a4168098880e\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-lx5l5" Dec 12 15:21:31 crc kubenswrapper[5123]: I1212 15:21:31.013586 5123 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-jn9f4\" (UniqueName: \"kubernetes.io/projected/dd669a9c-af5d-4084-bda4-81a455d4c281-kube-api-access-jn9f4\") pod \"router-default-68cf44c8b8-cnq9c\" (UID: \"dd669a9c-af5d-4084-bda4-81a455d4c281\") " pod="openshift-ingress/router-default-68cf44c8b8-cnq9c" Dec 12 15:21:31 crc kubenswrapper[5123]: I1212 15:21:31.038177 5123 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/a920b381-c5d3-4a28-92dc-c092a8ffeb69-kube-api-access\") pod \"kube-controller-manager-operator-69d5f845f8-dtw8v\" (UID: \"a920b381-c5d3-4a28-92dc-c092a8ffeb69\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-dtw8v" Dec 12 15:21:31 crc kubenswrapper[5123]: I1212 15:21:31.087728 5123 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7f5c659b84-nd2rm" Dec 12 15:21:31 crc kubenswrapper[5123]: I1212 15:21:31.088566 5123 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 12 15:21:31 crc kubenswrapper[5123]: E1212 15:21:31.088741 5123 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. 
No retries permitted until 2025-12-12 15:21:31.588686466 +0000 UTC m=+120.398638977 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 15:21:31 crc kubenswrapper[5123]: I1212 15:21:31.089780 5123 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/6bf5e136-4d51-49ba-bb1f-3e4fd5c82154-tmpfs\") pod \"catalog-operator-75ff9f647d-c6l4m\" (UID: \"6bf5e136-4d51-49ba-bb1f-3e4fd5c82154\") " pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-c6l4m" Dec 12 15:21:31 crc kubenswrapper[5123]: I1212 15:21:31.089993 5123 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/9d4713bf-88da-43eb-8dd8-2808e76b53c4-signing-key\") pod \"service-ca-74545575db-qpxdh\" (UID: \"9d4713bf-88da-43eb-8dd8-2808e76b53c4\") " pod="openshift-service-ca/service-ca-74545575db-qpxdh" Dec 12 15:21:31 crc kubenswrapper[5123]: I1212 15:21:31.090129 5123 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nhjbs\" (UniqueName: \"kubernetes.io/projected/7b7460e4-e37e-4643-9956-8097d8258066-kube-api-access-nhjbs\") pod \"olm-operator-5cdf44d969-pj4ts\" (UID: \"7b7460e4-e37e-4643-9956-8097d8258066\") " pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-pj4ts" Dec 12 15:21:31 crc kubenswrapper[5123]: I1212 15:21:31.090355 5123 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5e31e050-9a37-4e9b-8c0e-3fc2ed640421-serving-cert\") pod \"service-ca-operator-5b9c976747-9t6q7\" (UID: \"5e31e050-9a37-4e9b-8c0e-3fc2ed640421\") " pod="openshift-service-ca-operator/service-ca-operator-5b9c976747-9t6q7" Dec 12 15:21:31 crc kubenswrapper[5123]: I1212 15:21:31.091553 5123 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/735555bc-661a-4a48-a615-c88944194992-serving-cert\") pod \"apiserver-8596bd845d-qvmj6\" (UID: \"735555bc-661a-4a48-a615-c88944194992\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-qvmj6" Dec 12 15:21:31 crc kubenswrapper[5123]: I1212 15:21:31.097378 5123 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ready\" (UniqueName: \"kubernetes.io/empty-dir/6eb483de-06e5-4975-b29a-7fd9bc7674a9-ready\") pod \"cni-sysctl-allowlist-ds-mdpg8\" (UID: \"6eb483de-06e5-4975-b29a-7fd9bc7674a9\") " pod="openshift-multus/cni-sysctl-allowlist-ds-mdpg8" Dec 12 15:21:31 crc kubenswrapper[5123]: I1212 15:21:31.098295 5123 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/ea13a1f7-48ed-40f9-b5d0-040f13d8f90e-bound-sa-token\") pod \"cluster-image-registry-operator-86c45576b9-76wqd\" (UID: \"ea13a1f7-48ed-40f9-b5d0-040f13d8f90e\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-76wqd" Dec 12 15:21:31 crc kubenswrapper[5123]: I1212 15:21:31.099939 5123 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/626346f0-e585-4a37-8c9b-c6e36ee113bc-config-volume\") pod \"dns-default-jd7j9\" (UID: \"626346f0-e585-4a37-8c9b-c6e36ee113bc\") " pod="openshift-dns/dns-default-jd7j9" Dec 12 15:21:31 crc kubenswrapper[5123]: I1212 15:21:31.101193 5123 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/632abe1b-1a43-457c-86db-62fdb0572a0e-images\") pod \"machine-config-operator-67c9d58cbb-mvm2v\" (UID: \"632abe1b-1a43-457c-86db-62fdb0572a0e\") " pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-mvm2v" Dec 12 15:21:31 crc kubenswrapper[5123]: I1212 15:21:31.101373 5123 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/f4afdf33-53ee-4eeb-83a3-a5a0dc656922-tmpfs\") pod \"packageserver-7d4fc7d867-hznms\" (UID: \"f4afdf33-53ee-4eeb-83a3-a5a0dc656922\") " pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-hznms" Dec 12 15:21:31 crc kubenswrapper[5123]: I1212 15:21:31.101428 5123 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/632abe1b-1a43-457c-86db-62fdb0572a0e-proxy-tls\") pod \"machine-config-operator-67c9d58cbb-mvm2v\" (UID: \"632abe1b-1a43-457c-86db-62fdb0572a0e\") " pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-mvm2v" Dec 12 15:21:31 crc kubenswrapper[5123]: I1212 15:21:31.104133 5123 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-dtw8v" Dec 12 15:21:31 crc kubenswrapper[5123]: I1212 15:21:31.116699 5123 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dbftr\" (UniqueName: \"kubernetes.io/projected/788dd005-94a6-4a05-a0ce-c4dabe8dc04e-kube-api-access-dbftr\") pod \"ingress-operator-6b9cb4dbcf-bbdv4\" (UID: \"788dd005-94a6-4a05-a0ce-c4dabe8dc04e\") " pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-bbdv4" Dec 12 15:21:31 crc kubenswrapper[5123]: I1212 15:21:31.117568 5123 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/77c05f1e-26be-4120-9eb2-0637d83f86af-config\") pod \"openshift-kube-scheduler-operator-54f497555d-dtknn\" (UID: \"77c05f1e-26be-4120-9eb2-0637d83f86af\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-dtknn" Dec 12 15:21:31 crc kubenswrapper[5123]: I1212 15:21:31.117600 5123 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-44vvl\" (UniqueName: \"kubernetes.io/projected/6eb483de-06e5-4975-b29a-7fd9bc7674a9-kube-api-access-44vvl\") pod \"cni-sysctl-allowlist-ds-mdpg8\" (UID: \"6eb483de-06e5-4975-b29a-7fd9bc7674a9\") " pod="openshift-multus/cni-sysctl-allowlist-ds-mdpg8" Dec 12 15:21:31 crc kubenswrapper[5123]: I1212 15:21:31.117685 5123 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/68ef1469-eefc-4e7d-b8a5-bf0550b84694-csi-data-dir\") pod \"csi-hostpathplugin-g9nc4\" (UID: \"68ef1469-eefc-4e7d-b8a5-bf0550b84694\") " pod="hostpath-provisioner/csi-hostpathplugin-g9nc4" Dec 12 15:21:31 crc kubenswrapper[5123]: I1212 15:21:31.117903 5123 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/12e31d4b-fe5c-4f42-82f2-75389d8a34d6-package-server-manager-serving-cert\") pod \"package-server-manager-77f986bd66-9qnbt\" (UID: \"12e31d4b-fe5c-4f42-82f2-75389d8a34d6\") " pod="openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-9qnbt" Dec 12 15:21:31 crc kubenswrapper[5123]: I1212 15:21:31.117961 5123 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/77c05f1e-26be-4120-9eb2-0637d83f86af-serving-cert\") pod \"openshift-kube-scheduler-operator-54f497555d-dtknn\" (UID: \"77c05f1e-26be-4120-9eb2-0637d83f86af\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-dtknn" Dec 12 15:21:31 crc kubenswrapper[5123]: I1212 15:21:31.118232 5123 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/d9b2cf1e-7b13-44dc-8819-74f4bd24c609-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-75ffdb6fcd-tmds4\" (UID: \"d9b2cf1e-7b13-44dc-8819-74f4bd24c609\") " pod="openshift-machine-api/control-plane-machine-set-operator-75ffdb6fcd-tmds4" Dec 12 15:21:31 crc kubenswrapper[5123]: I1212 15:21:31.118291 5123 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/6eb483de-06e5-4975-b29a-7fd9bc7674a9-cni-sysctl-allowlist\") pod \"cni-sysctl-allowlist-ds-mdpg8\" (UID: \"6eb483de-06e5-4975-b29a-7fd9bc7674a9\") " pod="openshift-multus/cni-sysctl-allowlist-ds-mdpg8" Dec 12 15:21:31 crc kubenswrapper[5123]: I1212 15:21:31.118699 5123 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/6bf5e136-4d51-49ba-bb1f-3e4fd5c82154-profile-collector-cert\") pod \"catalog-operator-75ff9f647d-c6l4m\" (UID: \"6bf5e136-4d51-49ba-bb1f-3e4fd5c82154\") " pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-c6l4m" Dec 12 15:21:31 crc kubenswrapper[5123]: I1212 15:21:31.118735 5123 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/7b7460e4-e37e-4643-9956-8097d8258066-profile-collector-cert\") pod \"olm-operator-5cdf44d969-pj4ts\" (UID: \"7b7460e4-e37e-4643-9956-8097d8258066\") " pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-pj4ts" Dec 12 15:21:31 crc kubenswrapper[5123]: I1212 15:21:31.118913 5123 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/ea13a1f7-48ed-40f9-b5d0-040f13d8f90e-image-registry-operator-tls\") pod \"cluster-image-registry-operator-86c45576b9-76wqd\" (UID: \"ea13a1f7-48ed-40f9-b5d0-040f13d8f90e\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-76wqd" Dec 12 15:21:31 crc kubenswrapper[5123]: I1212 15:21:31.118962 5123 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vx5zg\" (UniqueName: \"kubernetes.io/projected/5e31e050-9a37-4e9b-8c0e-3fc2ed640421-kube-api-access-vx5zg\") pod \"service-ca-operator-5b9c976747-9t6q7\" (UID: \"5e31e050-9a37-4e9b-8c0e-3fc2ed640421\") " pod="openshift-service-ca-operator/service-ca-operator-5b9c976747-9t6q7" Dec 12 15:21:31 crc kubenswrapper[5123]: I1212 15:21:31.119453 5123 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/9d4713bf-88da-43eb-8dd8-2808e76b53c4-signing-cabundle\") pod \"service-ca-74545575db-qpxdh\" (UID: 
\"9d4713bf-88da-43eb-8dd8-2808e76b53c4\") " pod="openshift-service-ca/service-ca-74545575db-qpxdh"
Dec 12 15:21:31 crc kubenswrapper[5123]: I1212 15:21:31.119513 5123 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5e31e050-9a37-4e9b-8c0e-3fc2ed640421-config\") pod \"service-ca-operator-5b9c976747-9t6q7\" (UID: \"5e31e050-9a37-4e9b-8c0e-3fc2ed640421\") " pod="openshift-service-ca-operator/service-ca-operator-5b9c976747-9t6q7"
Dec 12 15:21:31 crc kubenswrapper[5123]: I1212 15:21:31.126476 5123 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/68ef1469-eefc-4e7d-b8a5-bf0550b84694-plugins-dir\") pod \"csi-hostpathplugin-g9nc4\" (UID: \"68ef1469-eefc-4e7d-b8a5-bf0550b84694\") " pod="hostpath-provisioner/csi-hostpathplugin-g9nc4"
Dec 12 15:21:31 crc kubenswrapper[5123]: I1212 15:21:31.127627 5123 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/6bf5e136-4d51-49ba-bb1f-3e4fd5c82154-srv-cert\") pod \"catalog-operator-75ff9f647d-c6l4m\" (UID: \"6bf5e136-4d51-49ba-bb1f-3e4fd5c82154\") " pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-c6l4m"
Dec 12 15:21:31 crc kubenswrapper[5123]: I1212 15:21:31.127669 5123 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/632abe1b-1a43-457c-86db-62fdb0572a0e-auth-proxy-config\") pod \"machine-config-operator-67c9d58cbb-mvm2v\" (UID: \"632abe1b-1a43-457c-86db-62fdb0572a0e\") " pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-mvm2v"
Dec 12 15:21:31 crc kubenswrapper[5123]: I1212 15:21:31.127706 5123 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/f4afdf33-53ee-4eeb-83a3-a5a0dc656922-webhook-cert\") pod \"packageserver-7d4fc7d867-hznms\" (UID: \"f4afdf33-53ee-4eeb-83a3-a5a0dc656922\") " pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-hznms"
Dec 12 15:21:31 crc kubenswrapper[5123]: I1212 15:21:31.127737 5123 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wzkrq\" (UniqueName: \"kubernetes.io/projected/d22355c6-2b0f-4caa-aa4b-92bd124103ad-kube-api-access-wzkrq\") pod \"machine-config-controller-f9cdd68f7-t8xgq\" (UID: \"d22355c6-2b0f-4caa-aa4b-92bd124103ad\") " pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-t8xgq"
Dec 12 15:21:31 crc kubenswrapper[5123]: I1212 15:21:31.127779 5123 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/68ef1469-eefc-4e7d-b8a5-bf0550b84694-mountpoint-dir\") pod \"csi-hostpathplugin-g9nc4\" (UID: \"68ef1469-eefc-4e7d-b8a5-bf0550b84694\") " pod="hostpath-provisioner/csi-hostpathplugin-g9nc4"
Dec 12 15:21:31 crc kubenswrapper[5123]: I1212 15:21:31.127813 5123 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nts2m\" (UniqueName: \"kubernetes.io/projected/d9b2cf1e-7b13-44dc-8819-74f4bd24c609-kube-api-access-nts2m\") pod \"control-plane-machine-set-operator-75ffdb6fcd-tmds4\" (UID: \"d9b2cf1e-7b13-44dc-8819-74f4bd24c609\") " pod="openshift-machine-api/control-plane-machine-set-operator-75ffdb6fcd-tmds4"
Dec 12 15:21:31 crc kubenswrapper[5123]: I1212 15:21:31.127987 5123 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/b5bd3e23-721c-45a0-be10-620b5a281623-webhook-certs\") pod \"multus-admission-controller-69db94689b-gg4kh\" (UID: \"b5bd3e23-721c-45a0-be10-620b5a281623\") " pod="openshift-multus/multus-admission-controller-69db94689b-gg4kh"
Dec 12 15:21:31 crc kubenswrapper[5123]: I1212 15:21:31.128015 5123 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7wldf\" (UniqueName: \"kubernetes.io/projected/b5bd3e23-721c-45a0-be10-620b5a281623-kube-api-access-7wldf\") pod \"multus-admission-controller-69db94689b-gg4kh\" (UID: \"b5bd3e23-721c-45a0-be10-620b5a281623\") " pod="openshift-multus/multus-admission-controller-69db94689b-gg4kh"
Dec 12 15:21:31 crc kubenswrapper[5123]: I1212 15:21:31.128054 5123 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/7b7460e4-e37e-4643-9956-8097d8258066-srv-cert\") pod \"olm-operator-5cdf44d969-pj4ts\" (UID: \"7b7460e4-e37e-4643-9956-8097d8258066\") " pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-pj4ts"
Dec 12 15:21:31 crc kubenswrapper[5123]: I1212 15:21:31.128080 5123 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/6eb483de-06e5-4975-b29a-7fd9bc7674a9-tuning-conf-dir\") pod \"cni-sysctl-allowlist-ds-mdpg8\" (UID: \"6eb483de-06e5-4975-b29a-7fd9bc7674a9\") " pod="openshift-multus/cni-sysctl-allowlist-ds-mdpg8"
Dec 12 15:21:31 crc kubenswrapper[5123]: I1212 15:21:31.128117 5123 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/77c05f1e-26be-4120-9eb2-0637d83f86af-tmp\") pod \"openshift-kube-scheduler-operator-54f497555d-dtknn\" (UID: \"77c05f1e-26be-4120-9eb2-0637d83f86af\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-dtknn"
Dec 12 15:21:31 crc kubenswrapper[5123]: I1212 15:21:31.128139 5123 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/f4afdf33-53ee-4eeb-83a3-a5a0dc656922-apiservice-cert\") pod \"packageserver-7d4fc7d867-hznms\" (UID: \"f4afdf33-53ee-4eeb-83a3-a5a0dc656922\") " pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-hznms"
Dec 12 15:21:31 crc kubenswrapper[5123]: I1212 15:21:31.128169 5123 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/788dd005-94a6-4a05-a0ce-c4dabe8dc04e-bound-sa-token\") pod \"ingress-operator-6b9cb4dbcf-bbdv4\" (UID: \"788dd005-94a6-4a05-a0ce-c4dabe8dc04e\") " pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-bbdv4"
Dec 12 15:21:31 crc kubenswrapper[5123]: I1212 15:21:31.128288 5123 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/788dd005-94a6-4a05-a0ce-c4dabe8dc04e-metrics-tls\") pod \"ingress-operator-6b9cb4dbcf-bbdv4\" (UID: \"788dd005-94a6-4a05-a0ce-c4dabe8dc04e\") " pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-bbdv4"
Dec 12 15:21:31 crc kubenswrapper[5123]: I1212 15:21:31.128326 5123 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q7c6x\" (UniqueName: \"kubernetes.io/projected/17ce8feb-99e5-42f3-a808-2dd39bc57377-kube-api-access-q7c6x\") pod \"marketplace-operator-547dbd544d-rkcvb\" (UID: \"17ce8feb-99e5-42f3-a808-2dd39bc57377\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-rkcvb"
Dec 12 15:21:31 crc kubenswrapper[5123]: I1212 15:21:31.129020 5123 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/7b7460e4-e37e-4643-9956-8097d8258066-tmpfs\") pod \"olm-operator-5cdf44d969-pj4ts\" (UID: \"7b7460e4-e37e-4643-9956-8097d8258066\") " pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-pj4ts"
Dec 12 15:21:31 crc kubenswrapper[5123]: I1212 15:21:31.129057 5123 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fx6qc\" (UniqueName: \"kubernetes.io/projected/9d4713bf-88da-43eb-8dd8-2808e76b53c4-kube-api-access-fx6qc\") pod \"service-ca-74545575db-qpxdh\" (UID: \"9d4713bf-88da-43eb-8dd8-2808e76b53c4\") " pod="openshift-service-ca/service-ca-74545575db-qpxdh"
Dec 12 15:21:31 crc kubenswrapper[5123]: I1212 15:21:31.129133 5123 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/d22355c6-2b0f-4caa-aa4b-92bd124103ad-proxy-tls\") pod \"machine-config-controller-f9cdd68f7-t8xgq\" (UID: \"d22355c6-2b0f-4caa-aa4b-92bd124103ad\") " pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-t8xgq"
Dec 12 15:21:31 crc kubenswrapper[5123]: I1212 15:21:31.129195 5123 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gckf8\" (UniqueName: \"kubernetes.io/projected/409e180b-f9f6-41a7-bd20-51095ac1261a-kube-api-access-gckf8\") pod \"machine-config-server-pxgwd\" (UID: \"409e180b-f9f6-41a7-bd20-51095ac1261a\") " pod="openshift-machine-config-operator/machine-config-server-pxgwd"
Dec 12 15:21:31 crc kubenswrapper[5123]: I1212 15:21:31.129257 5123 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/d22355c6-2b0f-4caa-aa4b-92bd124103ad-mcc-auth-proxy-config\") pod \"machine-config-controller-f9cdd68f7-t8xgq\" (UID: \"d22355c6-2b0f-4caa-aa4b-92bd124103ad\") " pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-t8xgq"
Dec 12 15:21:31 crc kubenswrapper[5123]: I1212 15:21:31.129290 5123 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/68ef1469-eefc-4e7d-b8a5-bf0550b84694-socket-dir\") pod \"csi-hostpathplugin-g9nc4\" (UID: \"68ef1469-eefc-4e7d-b8a5-bf0550b84694\") " pod="hostpath-provisioner/csi-hostpathplugin-g9nc4"
Dec 12 15:21:31 crc kubenswrapper[5123]: I1212 15:21:31.129402 5123 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bgqlk\" (UniqueName: \"kubernetes.io/projected/68ef1469-eefc-4e7d-b8a5-bf0550b84694-kube-api-access-bgqlk\") pod \"csi-hostpathplugin-g9nc4\" (UID: \"68ef1469-eefc-4e7d-b8a5-bf0550b84694\") " pod="hostpath-provisioner/csi-hostpathplugin-g9nc4"
Dec 12 15:21:31 crc kubenswrapper[5123]: I1212 15:21:31.129515 5123 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/439fab76-d95a-43fc-b800-b540d053001d-cert\") pod \"ingress-canary-hkjk6\" (UID: \"439fab76-d95a-43fc-b800-b540d053001d\") " pod="openshift-ingress-canary/ingress-canary-hkjk6"
Dec 12 15:21:31 crc kubenswrapper[5123]: I1212 15:21:31.129592 5123 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6vpw9\" (UniqueName: \"kubernetes.io/projected/f4afdf33-53ee-4eeb-83a3-a5a0dc656922-kube-api-access-6vpw9\") pod \"packageserver-7d4fc7d867-hznms\" (UID: \"f4afdf33-53ee-4eeb-83a3-a5a0dc656922\") " pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-hznms"
Dec 12 15:21:31 crc kubenswrapper[5123]: I1212 15:21:31.129719 5123 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/409e180b-f9f6-41a7-bd20-51095ac1261a-node-bootstrap-token\") pod \"machine-config-server-pxgwd\" (UID: \"409e180b-f9f6-41a7-bd20-51095ac1261a\") " pod="openshift-machine-config-operator/machine-config-server-pxgwd"
Dec 12 15:21:31 crc kubenswrapper[5123]: I1212 15:21:31.129766 5123 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l2n7x\" (UniqueName: \"kubernetes.io/projected/6bf5e136-4d51-49ba-bb1f-3e4fd5c82154-kube-api-access-l2n7x\") pod \"catalog-operator-75ff9f647d-c6l4m\" (UID: \"6bf5e136-4d51-49ba-bb1f-3e4fd5c82154\") " pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-c6l4m"
Dec 12 15:21:31 crc kubenswrapper[5123]: I1212 15:21:31.129967 5123 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ckwfq\" (UniqueName: \"kubernetes.io/projected/626346f0-e585-4a37-8c9b-c6e36ee113bc-kube-api-access-ckwfq\") pod \"dns-default-jd7j9\" (UID: \"626346f0-e585-4a37-8c9b-c6e36ee113bc\") " pod="openshift-dns/dns-default-jd7j9"
Dec 12 15:21:31 crc kubenswrapper[5123]: I1212 15:21:31.130011 5123 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cmqz8\" (UniqueName: \"kubernetes.io/projected/632abe1b-1a43-457c-86db-62fdb0572a0e-kube-api-access-cmqz8\") pod \"machine-config-operator-67c9d58cbb-mvm2v\" (UID: \"632abe1b-1a43-457c-86db-62fdb0572a0e\") " pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-mvm2v"
Dec 12 15:21:31 crc kubenswrapper[5123]: I1212 15:21:31.130071 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-ts2mt\" (UID: \"40d826be-27f2-49a2-afa5-a6a9cc8f9bf6\") " pod="openshift-image-registry/image-registry-66587d64c8-ts2mt"
Dec 12 15:21:31 crc kubenswrapper[5123]: I1212 15:21:31.130127 5123 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/409e180b-f9f6-41a7-bd20-51095ac1261a-certs\") pod \"machine-config-server-pxgwd\" (UID: \"409e180b-f9f6-41a7-bd20-51095ac1261a\") " pod="openshift-machine-config-operator/machine-config-server-pxgwd"
Dec 12 15:21:31 crc kubenswrapper[5123]: I1212 15:21:31.130166 5123 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/17ce8feb-99e5-42f3-a808-2dd39bc57377-marketplace-operator-metrics\") pod \"marketplace-operator-547dbd544d-rkcvb\" (UID: \"17ce8feb-99e5-42f3-a808-2dd39bc57377\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-rkcvb"
Dec 12 15:21:31 crc kubenswrapper[5123]: I1212 15:21:31.130431 5123 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/17ce8feb-99e5-42f3-a808-2dd39bc57377-tmp\") pod \"marketplace-operator-547dbd544d-rkcvb\" (UID: \"17ce8feb-99e5-42f3-a808-2dd39bc57377\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-rkcvb"
Dec 12 15:21:31 crc kubenswrapper[5123]: I1212 15:21:31.133096 5123 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8xkzn\" (UniqueName: \"kubernetes.io/projected/12e31d4b-fe5c-4f42-82f2-75389d8a34d6-kube-api-access-8xkzn\") pod \"package-server-manager-77f986bd66-9qnbt\" (UID: \"12e31d4b-fe5c-4f42-82f2-75389d8a34d6\") " pod="openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-9qnbt"
Dec 12 15:21:31 crc kubenswrapper[5123]: I1212 15:21:31.133199 5123 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/788dd005-94a6-4a05-a0ce-c4dabe8dc04e-trusted-ca\") pod \"ingress-operator-6b9cb4dbcf-bbdv4\" (UID: \"788dd005-94a6-4a05-a0ce-c4dabe8dc04e\") " pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-bbdv4"
Dec 12 15:21:31 crc kubenswrapper[5123]: I1212 15:21:31.133423 5123 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/77c05f1e-26be-4120-9eb2-0637d83f86af-kube-api-access\") pod \"openshift-kube-scheduler-operator-54f497555d-dtknn\" (UID: \"77c05f1e-26be-4120-9eb2-0637d83f86af\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-dtknn"
Dec 12 15:21:31 crc kubenswrapper[5123]: I1212 15:21:31.133483 5123 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/68ef1469-eefc-4e7d-b8a5-bf0550b84694-registration-dir\") pod \"csi-hostpathplugin-g9nc4\" (UID: \"68ef1469-eefc-4e7d-b8a5-bf0550b84694\") " pod="hostpath-provisioner/csi-hostpathplugin-g9nc4"
Dec 12 15:21:31 crc kubenswrapper[5123]: I1212 15:21:31.133558 5123 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/17ce8feb-99e5-42f3-a808-2dd39bc57377-marketplace-trusted-ca\") pod \"marketplace-operator-547dbd544d-rkcvb\" (UID: \"17ce8feb-99e5-42f3-a808-2dd39bc57377\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-rkcvb"
Dec 12 15:21:31 crc kubenswrapper[5123]: I1212 15:21:31.133590 5123 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/626346f0-e585-4a37-8c9b-c6e36ee113bc-metrics-tls\") pod \"dns-default-jd7j9\" (UID: \"626346f0-e585-4a37-8c9b-c6e36ee113bc\") " pod="openshift-dns/dns-default-jd7j9"
Dec 12 15:21:31 crc kubenswrapper[5123]: I1212 15:21:31.133619 5123 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/626346f0-e585-4a37-8c9b-c6e36ee113bc-tmp-dir\") pod \"dns-default-jd7j9\" (UID: \"626346f0-e585-4a37-8c9b-c6e36ee113bc\") " pod="openshift-dns/dns-default-jd7j9"
Dec 12 15:21:31 crc kubenswrapper[5123]: I1212 15:21:31.134967 5123 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5xphh\" (UniqueName: \"kubernetes.io/projected/439fab76-d95a-43fc-b800-b540d053001d-kube-api-access-5xphh\") pod \"ingress-canary-hkjk6\" (UID: \"439fab76-d95a-43fc-b800-b540d053001d\") " pod="openshift-ingress-canary/ingress-canary-hkjk6"
Dec 12 15:21:31 crc kubenswrapper[5123]: E1212 15:21:31.136641 5123 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-12 15:21:31.636620574 +0000 UTC m=+120.446573085 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-ts2mt" (UID: "40d826be-27f2-49a2-afa5-a6a9cc8f9bf6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 12 15:21:31 crc kubenswrapper[5123]: I1212 15:21:31.142329 5123 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-dc699"
Dec 12 15:21:31 crc kubenswrapper[5123]: I1212 15:21:31.160721 5123 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-76wqd"
Dec 12 15:21:31 crc kubenswrapper[5123]: I1212 15:21:31.166728 5123 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-866fcbc849-t68lp"
Dec 12 15:21:31 crc kubenswrapper[5123]: I1212 15:21:31.208643 5123 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/dd669a9c-af5d-4084-bda4-81a455d4c281-metrics-certs\") pod \"router-default-68cf44c8b8-cnq9c\" (UID: \"dd669a9c-af5d-4084-bda4-81a455d4c281\") " pod="openshift-ingress/router-default-68cf44c8b8-cnq9c"
Dec 12 15:21:31 crc kubenswrapper[5123]: I1212 15:21:31.211183 5123 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/40d826be-27f2-49a2-afa5-a6a9cc8f9bf6-registry-tls\") pod \"image-registry-66587d64c8-ts2mt\" (UID: \"40d826be-27f2-49a2-afa5-a6a9cc8f9bf6\") " pod="openshift-image-registry/image-registry-66587d64c8-ts2mt"
Dec 12 15:21:31 crc kubenswrapper[5123]: I1212 15:21:31.239207 5123 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Dec 12 15:21:31 crc kubenswrapper[5123]: I1212 15:21:31.239431 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/6bf5e136-4d51-49ba-bb1f-3e4fd5c82154-profile-collector-cert\") pod \"catalog-operator-75ff9f647d-c6l4m\" (UID: \"6bf5e136-4d51-49ba-bb1f-3e4fd5c82154\") " pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-c6l4m"
Dec 12 15:21:31 crc kubenswrapper[5123]: E1212 15:21:31.239542 5123 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-12 15:21:31.739482326 +0000 UTC m=+120.549434847 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 12 15:21:31 crc kubenswrapper[5123]: I1212 15:21:31.239662 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/7b7460e4-e37e-4643-9956-8097d8258066-profile-collector-cert\") pod \"olm-operator-5cdf44d969-pj4ts\" (UID: \"7b7460e4-e37e-4643-9956-8097d8258066\") " pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-pj4ts"
Dec 12 15:21:31 crc kubenswrapper[5123]: I1212 15:21:31.239698 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-vx5zg\" (UniqueName: \"kubernetes.io/projected/5e31e050-9a37-4e9b-8c0e-3fc2ed640421-kube-api-access-vx5zg\") pod \"service-ca-operator-5b9c976747-9t6q7\" (UID: \"5e31e050-9a37-4e9b-8c0e-3fc2ed640421\") " pod="openshift-service-ca-operator/service-ca-operator-5b9c976747-9t6q7"
Dec 12 15:21:31 crc kubenswrapper[5123]: I1212 15:21:31.239749 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/9d4713bf-88da-43eb-8dd8-2808e76b53c4-signing-cabundle\") pod \"service-ca-74545575db-qpxdh\" (UID: \"9d4713bf-88da-43eb-8dd8-2808e76b53c4\") " pod="openshift-service-ca/service-ca-74545575db-qpxdh"
Dec 12 15:21:31 crc kubenswrapper[5123]: I1212 15:21:31.239794 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5e31e050-9a37-4e9b-8c0e-3fc2ed640421-config\") pod \"service-ca-operator-5b9c976747-9t6q7\" (UID: \"5e31e050-9a37-4e9b-8c0e-3fc2ed640421\") " pod="openshift-service-ca-operator/service-ca-operator-5b9c976747-9t6q7"
Dec 12 15:21:31 crc kubenswrapper[5123]: I1212 15:21:31.239813 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/68ef1469-eefc-4e7d-b8a5-bf0550b84694-plugins-dir\") pod \"csi-hostpathplugin-g9nc4\" (UID: \"68ef1469-eefc-4e7d-b8a5-bf0550b84694\") " pod="hostpath-provisioner/csi-hostpathplugin-g9nc4"
Dec 12 15:21:31 crc kubenswrapper[5123]: I1212 15:21:31.239846 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/6bf5e136-4d51-49ba-bb1f-3e4fd5c82154-srv-cert\") pod \"catalog-operator-75ff9f647d-c6l4m\" (UID: \"6bf5e136-4d51-49ba-bb1f-3e4fd5c82154\") " pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-c6l4m"
Dec 12 15:21:31 crc kubenswrapper[5123]: I1212 15:21:31.239869 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/632abe1b-1a43-457c-86db-62fdb0572a0e-auth-proxy-config\") pod \"machine-config-operator-67c9d58cbb-mvm2v\" (UID: \"632abe1b-1a43-457c-86db-62fdb0572a0e\") " pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-mvm2v"
Dec 12 15:21:31 crc kubenswrapper[5123]: I1212 15:21:31.239894 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/f4afdf33-53ee-4eeb-83a3-a5a0dc656922-webhook-cert\") pod \"packageserver-7d4fc7d867-hznms\" (UID: \"f4afdf33-53ee-4eeb-83a3-a5a0dc656922\") " pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-hznms"
Dec 12 15:21:31 crc kubenswrapper[5123]: I1212 15:21:31.239920 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-wzkrq\" (UniqueName: \"kubernetes.io/projected/d22355c6-2b0f-4caa-aa4b-92bd124103ad-kube-api-access-wzkrq\") pod \"machine-config-controller-f9cdd68f7-t8xgq\" (UID: \"d22355c6-2b0f-4caa-aa4b-92bd124103ad\") " pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-t8xgq"
Dec 12 15:21:31 crc kubenswrapper[5123]: I1212 15:21:31.239949 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/68ef1469-eefc-4e7d-b8a5-bf0550b84694-mountpoint-dir\") pod \"csi-hostpathplugin-g9nc4\" (UID: \"68ef1469-eefc-4e7d-b8a5-bf0550b84694\") " pod="hostpath-provisioner/csi-hostpathplugin-g9nc4"
Dec 12 15:21:31 crc kubenswrapper[5123]: I1212 15:21:31.239988 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-nts2m\" (UniqueName: \"kubernetes.io/projected/d9b2cf1e-7b13-44dc-8819-74f4bd24c609-kube-api-access-nts2m\") pod \"control-plane-machine-set-operator-75ffdb6fcd-tmds4\" (UID: \"d9b2cf1e-7b13-44dc-8819-74f4bd24c609\") " pod="openshift-machine-api/control-plane-machine-set-operator-75ffdb6fcd-tmds4"
Dec 12 15:21:31 crc kubenswrapper[5123]: I1212 15:21:31.240011 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/b5bd3e23-721c-45a0-be10-620b5a281623-webhook-certs\") pod \"multus-admission-controller-69db94689b-gg4kh\" (UID: \"b5bd3e23-721c-45a0-be10-620b5a281623\") " pod="openshift-multus/multus-admission-controller-69db94689b-gg4kh"
Dec 12 15:21:31 crc kubenswrapper[5123]: I1212 15:21:31.242042 5123 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/632abe1b-1a43-457c-86db-62fdb0572a0e-auth-proxy-config\") pod \"machine-config-operator-67c9d58cbb-mvm2v\" (UID: \"632abe1b-1a43-457c-86db-62fdb0572a0e\") " pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-mvm2v"
Dec 12 15:21:31 crc kubenswrapper[5123]: I1212 15:21:31.242437 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-7wldf\" (UniqueName: \"kubernetes.io/projected/b5bd3e23-721c-45a0-be10-620b5a281623-kube-api-access-7wldf\") pod \"multus-admission-controller-69db94689b-gg4kh\" (UID: \"b5bd3e23-721c-45a0-be10-620b5a281623\") " pod="openshift-multus/multus-admission-controller-69db94689b-gg4kh"
Dec 12 15:21:31 crc kubenswrapper[5123]: I1212 15:21:31.242453 5123 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/68ef1469-eefc-4e7d-b8a5-bf0550b84694-mountpoint-dir\") pod \"csi-hostpathplugin-g9nc4\" (UID: \"68ef1469-eefc-4e7d-b8a5-bf0550b84694\") " pod="hostpath-provisioner/csi-hostpathplugin-g9nc4"
Dec 12 15:21:31 crc kubenswrapper[5123]: I1212 15:21:31.242528 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/7b7460e4-e37e-4643-9956-8097d8258066-srv-cert\") pod \"olm-operator-5cdf44d969-pj4ts\" (UID: \"7b7460e4-e37e-4643-9956-8097d8258066\") " pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-pj4ts"
Dec 12 15:21:31 crc kubenswrapper[5123]: I1212 15:21:31.242561 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/6eb483de-06e5-4975-b29a-7fd9bc7674a9-tuning-conf-dir\") pod \"cni-sysctl-allowlist-ds-mdpg8\" (UID: \"6eb483de-06e5-4975-b29a-7fd9bc7674a9\") " pod="openshift-multus/cni-sysctl-allowlist-ds-mdpg8"
Dec 12 15:21:31 crc kubenswrapper[5123]: I1212 15:21:31.242594 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/77c05f1e-26be-4120-9eb2-0637d83f86af-tmp\") pod \"openshift-kube-scheduler-operator-54f497555d-dtknn\" (UID: \"77c05f1e-26be-4120-9eb2-0637d83f86af\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-dtknn"
Dec 12 15:21:31 crc kubenswrapper[5123]: I1212 15:21:31.242618 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/f4afdf33-53ee-4eeb-83a3-a5a0dc656922-apiservice-cert\") pod \"packageserver-7d4fc7d867-hznms\" (UID: \"f4afdf33-53ee-4eeb-83a3-a5a0dc656922\") " pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-hznms"
Dec 12 15:21:31 crc kubenswrapper[5123]: I1212 15:21:31.242641 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/788dd005-94a6-4a05-a0ce-c4dabe8dc04e-bound-sa-token\") pod \"ingress-operator-6b9cb4dbcf-bbdv4\" (UID: \"788dd005-94a6-4a05-a0ce-c4dabe8dc04e\") " pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-bbdv4"
Dec 12 15:21:31 crc kubenswrapper[5123]: I1212 15:21:31.242803 5123 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/68ef1469-eefc-4e7d-b8a5-bf0550b84694-plugins-dir\") pod \"csi-hostpathplugin-g9nc4\" (UID: \"68ef1469-eefc-4e7d-b8a5-bf0550b84694\") " pod="hostpath-provisioner/csi-hostpathplugin-g9nc4"
Dec 12 15:21:31 crc kubenswrapper[5123]: I1212 15:21:31.242973 5123 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/6eb483de-06e5-4975-b29a-7fd9bc7674a9-tuning-conf-dir\") pod \"cni-sysctl-allowlist-ds-mdpg8\" (UID: \"6eb483de-06e5-4975-b29a-7fd9bc7674a9\") " pod="openshift-multus/cni-sysctl-allowlist-ds-mdpg8"
Dec 12 15:21:31 crc kubenswrapper[5123]: I1212 15:21:31.243045 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/788dd005-94a6-4a05-a0ce-c4dabe8dc04e-metrics-tls\") pod \"ingress-operator-6b9cb4dbcf-bbdv4\" (UID: \"788dd005-94a6-4a05-a0ce-c4dabe8dc04e\") " pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-bbdv4"
Dec 12 15:21:31 crc kubenswrapper[5123]: I1212 15:21:31.243099 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-q7c6x\" (UniqueName: \"kubernetes.io/projected/17ce8feb-99e5-42f3-a808-2dd39bc57377-kube-api-access-q7c6x\") pod \"marketplace-operator-547dbd544d-rkcvb\" (UID: \"17ce8feb-99e5-42f3-a808-2dd39bc57377\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-rkcvb"
Dec 12 15:21:31 crc kubenswrapper[5123]: I1212 15:21:31.243128 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/7b7460e4-e37e-4643-9956-8097d8258066-tmpfs\") pod \"olm-operator-5cdf44d969-pj4ts\" (UID: \"7b7460e4-e37e-4643-9956-8097d8258066\") " pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-pj4ts"
Dec 12 15:21:31 crc kubenswrapper[5123]: I1212 15:21:31.243163 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-fx6qc\" (UniqueName: \"kubernetes.io/projected/9d4713bf-88da-43eb-8dd8-2808e76b53c4-kube-api-access-fx6qc\") pod \"service-ca-74545575db-qpxdh\" (UID: \"9d4713bf-88da-43eb-8dd8-2808e76b53c4\") " pod="openshift-service-ca/service-ca-74545575db-qpxdh"
Dec 12 15:21:31 crc kubenswrapper[5123]: I1212 15:21:31.243184 5123 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/77c05f1e-26be-4120-9eb2-0637d83f86af-tmp\") pod \"openshift-kube-scheduler-operator-54f497555d-dtknn\" (UID: \"77c05f1e-26be-4120-9eb2-0637d83f86af\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-dtknn"
Dec 12 15:21:31 crc kubenswrapper[5123]: I1212 15:21:31.243200 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/d22355c6-2b0f-4caa-aa4b-92bd124103ad-proxy-tls\") pod \"machine-config-controller-f9cdd68f7-t8xgq\" (UID: \"d22355c6-2b0f-4caa-aa4b-92bd124103ad\") " pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-t8xgq"
Dec 12 15:21:31 crc kubenswrapper[5123]: I1212 15:21:31.243325 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-gckf8\" (UniqueName: \"kubernetes.io/projected/409e180b-f9f6-41a7-bd20-51095ac1261a-kube-api-access-gckf8\") pod \"machine-config-server-pxgwd\" (UID: \"409e180b-f9f6-41a7-bd20-51095ac1261a\") " pod="openshift-machine-config-operator/machine-config-server-pxgwd"
Dec 12 15:21:31 crc kubenswrapper[5123]: I1212 15:21:31.243356 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/d22355c6-2b0f-4caa-aa4b-92bd124103ad-mcc-auth-proxy-config\") pod \"machine-config-controller-f9cdd68f7-t8xgq\" (UID: \"d22355c6-2b0f-4caa-aa4b-92bd124103ad\") " pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-t8xgq"
Dec 12 15:21:31 crc kubenswrapper[5123]: I1212 15:21:31.243380 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/68ef1469-eefc-4e7d-b8a5-bf0550b84694-socket-dir\") pod \"csi-hostpathplugin-g9nc4\" (UID: \"68ef1469-eefc-4e7d-b8a5-bf0550b84694\") " pod="hostpath-provisioner/csi-hostpathplugin-g9nc4"
Dec 12 15:21:31 crc kubenswrapper[5123]: I1212 15:21:31.243411 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-bgqlk\" (UniqueName: \"kubernetes.io/projected/68ef1469-eefc-4e7d-b8a5-bf0550b84694-kube-api-access-bgqlk\") pod \"csi-hostpathplugin-g9nc4\" (UID: \"68ef1469-eefc-4e7d-b8a5-bf0550b84694\") " pod="hostpath-provisioner/csi-hostpathplugin-g9nc4"
Dec 12 15:21:31 crc kubenswrapper[5123]: I1212 15:21:31.243440 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/439fab76-d95a-43fc-b800-b540d053001d-cert\") pod \"ingress-canary-hkjk6\" (UID: \"439fab76-d95a-43fc-b800-b540d053001d\") " pod="openshift-ingress-canary/ingress-canary-hkjk6"
Dec 12 15:21:31 crc kubenswrapper[5123]: I1212 15:21:31.243477 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-6vpw9\" (UniqueName: \"kubernetes.io/projected/f4afdf33-53ee-4eeb-83a3-a5a0dc656922-kube-api-access-6vpw9\") pod \"packageserver-7d4fc7d867-hznms\" (UID: \"f4afdf33-53ee-4eeb-83a3-a5a0dc656922\") " pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-hznms"
Dec 12 15:21:31 crc kubenswrapper[5123]: I1212 15:21:31.243524 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/409e180b-f9f6-41a7-bd20-51095ac1261a-node-bootstrap-token\") pod \"machine-config-server-pxgwd\" (UID: \"409e180b-f9f6-41a7-bd20-51095ac1261a\") " pod="openshift-machine-config-operator/machine-config-server-pxgwd"
Dec 12 15:21:31 crc kubenswrapper[5123]: I1212 15:21:31.243550 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-l2n7x\" (UniqueName: \"kubernetes.io/projected/6bf5e136-4d51-49ba-bb1f-3e4fd5c82154-kube-api-access-l2n7x\") pod \"catalog-operator-75ff9f647d-c6l4m\" (UID: \"6bf5e136-4d51-49ba-bb1f-3e4fd5c82154\") " pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-c6l4m"
Dec 12 15:21:31 crc kubenswrapper[5123]: I1212 15:21:31.243573 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-ckwfq\" (UniqueName: \"kubernetes.io/projected/626346f0-e585-4a37-8c9b-c6e36ee113bc-kube-api-access-ckwfq\") pod \"dns-default-jd7j9\" (UID: \"626346f0-e585-4a37-8c9b-c6e36ee113bc\") " pod="openshift-dns/dns-default-jd7j9"
Dec 12 15:21:31 crc kubenswrapper[5123]: I1212 15:21:31.243599 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-cmqz8\" (UniqueName: \"kubernetes.io/projected/632abe1b-1a43-457c-86db-62fdb0572a0e-kube-api-access-cmqz8\") pod \"machine-config-operator-67c9d58cbb-mvm2v\" (UID: \"632abe1b-1a43-457c-86db-62fdb0572a0e\") " pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-mvm2v"
Dec 12 15:21:31 crc kubenswrapper[5123]: I1212 15:21:31.243642 5123 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/68ef1469-eefc-4e7d-b8a5-bf0550b84694-socket-dir\") pod \"csi-hostpathplugin-g9nc4\" (UID: \"68ef1469-eefc-4e7d-b8a5-bf0550b84694\") " pod="hostpath-provisioner/csi-hostpathplugin-g9nc4"
Dec 12 15:21:31 crc kubenswrapper[5123]: I1212 15:21:31.243678 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-ts2mt\" (UID: \"40d826be-27f2-49a2-afa5-a6a9cc8f9bf6\") " pod="openshift-image-registry/image-registry-66587d64c8-ts2mt"
Dec 12 15:21:31 crc kubenswrapper[5123]: I1212 15:21:31.243736 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/409e180b-f9f6-41a7-bd20-51095ac1261a-certs\") pod \"machine-config-server-pxgwd\" (UID: \"409e180b-f9f6-41a7-bd20-51095ac1261a\") " pod="openshift-machine-config-operator/machine-config-server-pxgwd"
Dec 12 15:21:31 crc kubenswrapper[5123]: I1212 15:21:31.243766 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/17ce8feb-99e5-42f3-a808-2dd39bc57377-marketplace-operator-metrics\") pod \"marketplace-operator-547dbd544d-rkcvb\" (UID: \"17ce8feb-99e5-42f3-a808-2dd39bc57377\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-rkcvb"
Dec 12 15:21:31 crc kubenswrapper[5123]: I1212 15:21:31.243802 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/17ce8feb-99e5-42f3-a808-2dd39bc57377-tmp\") pod \"marketplace-operator-547dbd544d-rkcvb\" (UID: \"17ce8feb-99e5-42f3-a808-2dd39bc57377\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-rkcvb"
Dec 12 15:21:31 crc kubenswrapper[5123]: I1212 15:21:31.243809 5123 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/7b7460e4-e37e-4643-9956-8097d8258066-tmpfs\") pod \"olm-operator-5cdf44d969-pj4ts\" (UID: \"7b7460e4-e37e-4643-9956-8097d8258066\") " pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-pj4ts"
Dec 12 15:21:31 crc kubenswrapper[5123]: I1212 15:21:31.243849 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-8xkzn\" (UniqueName: \"kubernetes.io/projected/12e31d4b-fe5c-4f42-82f2-75389d8a34d6-kube-api-access-8xkzn\") pod \"package-server-manager-77f986bd66-9qnbt\" (UID: \"12e31d4b-fe5c-4f42-82f2-75389d8a34d6\") " pod="openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-9qnbt"
Dec 12 15:21:31 crc kubenswrapper[5123]: I1212 15:21:31.244090 5123 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/9d4713bf-88da-43eb-8dd8-2808e76b53c4-signing-cabundle\") pod \"service-ca-74545575db-qpxdh\" (UID: \"9d4713bf-88da-43eb-8dd8-2808e76b53c4\") " pod="openshift-service-ca/service-ca-74545575db-qpxdh"
Dec 12 15:21:31 crc kubenswrapper[5123]: I1212 15:21:31.244928 5123 operation_generator.go:615]
"MountVolume.SetUp succeeded for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/d22355c6-2b0f-4caa-aa4b-92bd124103ad-mcc-auth-proxy-config\") pod \"machine-config-controller-f9cdd68f7-t8xgq\" (UID: \"d22355c6-2b0f-4caa-aa4b-92bd124103ad\") " pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-t8xgq" Dec 12 15:21:31 crc kubenswrapper[5123]: I1212 15:21:31.245235 5123 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5e31e050-9a37-4e9b-8c0e-3fc2ed640421-config\") pod \"service-ca-operator-5b9c976747-9t6q7\" (UID: \"5e31e050-9a37-4e9b-8c0e-3fc2ed640421\") " pod="openshift-service-ca-operator/service-ca-operator-5b9c976747-9t6q7" Dec 12 15:21:31 crc kubenswrapper[5123]: I1212 15:21:31.245461 5123 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/f4afdf33-53ee-4eeb-83a3-a5a0dc656922-webhook-cert\") pod \"packageserver-7d4fc7d867-hznms\" (UID: \"f4afdf33-53ee-4eeb-83a3-a5a0dc656922\") " pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-hznms" Dec 12 15:21:31 crc kubenswrapper[5123]: I1212 15:21:31.245468 5123 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/6bf5e136-4d51-49ba-bb1f-3e4fd5c82154-profile-collector-cert\") pod \"catalog-operator-75ff9f647d-c6l4m\" (UID: \"6bf5e136-4d51-49ba-bb1f-3e4fd5c82154\") " pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-c6l4m" Dec 12 15:21:31 crc kubenswrapper[5123]: I1212 15:21:31.246091 5123 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/17ce8feb-99e5-42f3-a808-2dd39bc57377-tmp\") pod \"marketplace-operator-547dbd544d-rkcvb\" (UID: \"17ce8feb-99e5-42f3-a808-2dd39bc57377\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-rkcvb" Dec 12 15:21:31 crc 
kubenswrapper[5123]: E1212 15:21:31.246173 5123 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-12 15:21:31.744525994 +0000 UTC m=+120.554478505 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-ts2mt" (UID: "40d826be-27f2-49a2-afa5-a6a9cc8f9bf6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 15:21:31 crc kubenswrapper[5123]: I1212 15:21:31.246235 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/788dd005-94a6-4a05-a0ce-c4dabe8dc04e-trusted-ca\") pod \"ingress-operator-6b9cb4dbcf-bbdv4\" (UID: \"788dd005-94a6-4a05-a0ce-c4dabe8dc04e\") " pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-bbdv4" Dec 12 15:21:31 crc kubenswrapper[5123]: I1212 15:21:31.246299 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/77c05f1e-26be-4120-9eb2-0637d83f86af-kube-api-access\") pod \"openshift-kube-scheduler-operator-54f497555d-dtknn\" (UID: \"77c05f1e-26be-4120-9eb2-0637d83f86af\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-dtknn" Dec 12 15:21:31 crc kubenswrapper[5123]: I1212 15:21:31.246480 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/68ef1469-eefc-4e7d-b8a5-bf0550b84694-registration-dir\") pod \"csi-hostpathplugin-g9nc4\" (UID: 
\"68ef1469-eefc-4e7d-b8a5-bf0550b84694\") " pod="hostpath-provisioner/csi-hostpathplugin-g9nc4" Dec 12 15:21:31 crc kubenswrapper[5123]: I1212 15:21:31.246535 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/17ce8feb-99e5-42f3-a808-2dd39bc57377-marketplace-trusted-ca\") pod \"marketplace-operator-547dbd544d-rkcvb\" (UID: \"17ce8feb-99e5-42f3-a808-2dd39bc57377\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-rkcvb" Dec 12 15:21:31 crc kubenswrapper[5123]: I1212 15:21:31.246574 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/626346f0-e585-4a37-8c9b-c6e36ee113bc-metrics-tls\") pod \"dns-default-jd7j9\" (UID: \"626346f0-e585-4a37-8c9b-c6e36ee113bc\") " pod="openshift-dns/dns-default-jd7j9" Dec 12 15:21:31 crc kubenswrapper[5123]: I1212 15:21:31.246609 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/626346f0-e585-4a37-8c9b-c6e36ee113bc-tmp-dir\") pod \"dns-default-jd7j9\" (UID: \"626346f0-e585-4a37-8c9b-c6e36ee113bc\") " pod="openshift-dns/dns-default-jd7j9" Dec 12 15:21:31 crc kubenswrapper[5123]: I1212 15:21:31.246658 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-5xphh\" (UniqueName: \"kubernetes.io/projected/439fab76-d95a-43fc-b800-b540d053001d-kube-api-access-5xphh\") pod \"ingress-canary-hkjk6\" (UID: \"439fab76-d95a-43fc-b800-b540d053001d\") " pod="openshift-ingress-canary/ingress-canary-hkjk6" Dec 12 15:21:31 crc kubenswrapper[5123]: I1212 15:21:31.246688 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/6bf5e136-4d51-49ba-bb1f-3e4fd5c82154-tmpfs\") pod \"catalog-operator-75ff9f647d-c6l4m\" (UID: \"6bf5e136-4d51-49ba-bb1f-3e4fd5c82154\") " 
pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-c6l4m" Dec 12 15:21:31 crc kubenswrapper[5123]: I1212 15:21:31.246713 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/9d4713bf-88da-43eb-8dd8-2808e76b53c4-signing-key\") pod \"service-ca-74545575db-qpxdh\" (UID: \"9d4713bf-88da-43eb-8dd8-2808e76b53c4\") " pod="openshift-service-ca/service-ca-74545575db-qpxdh" Dec 12 15:21:31 crc kubenswrapper[5123]: I1212 15:21:31.246747 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-nhjbs\" (UniqueName: \"kubernetes.io/projected/7b7460e4-e37e-4643-9956-8097d8258066-kube-api-access-nhjbs\") pod \"olm-operator-5cdf44d969-pj4ts\" (UID: \"7b7460e4-e37e-4643-9956-8097d8258066\") " pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-pj4ts" Dec 12 15:21:31 crc kubenswrapper[5123]: I1212 15:21:31.246822 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5e31e050-9a37-4e9b-8c0e-3fc2ed640421-serving-cert\") pod \"service-ca-operator-5b9c976747-9t6q7\" (UID: \"5e31e050-9a37-4e9b-8c0e-3fc2ed640421\") " pod="openshift-service-ca-operator/service-ca-operator-5b9c976747-9t6q7" Dec 12 15:21:31 crc kubenswrapper[5123]: I1212 15:21:31.246872 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ready\" (UniqueName: \"kubernetes.io/empty-dir/6eb483de-06e5-4975-b29a-7fd9bc7674a9-ready\") pod \"cni-sysctl-allowlist-ds-mdpg8\" (UID: \"6eb483de-06e5-4975-b29a-7fd9bc7674a9\") " pod="openshift-multus/cni-sysctl-allowlist-ds-mdpg8" Dec 12 15:21:31 crc kubenswrapper[5123]: I1212 15:21:31.246907 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/626346f0-e585-4a37-8c9b-c6e36ee113bc-config-volume\") pod \"dns-default-jd7j9\" (UID: 
\"626346f0-e585-4a37-8c9b-c6e36ee113bc\") " pod="openshift-dns/dns-default-jd7j9" Dec 12 15:21:31 crc kubenswrapper[5123]: I1212 15:21:31.246931 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/632abe1b-1a43-457c-86db-62fdb0572a0e-images\") pod \"machine-config-operator-67c9d58cbb-mvm2v\" (UID: \"632abe1b-1a43-457c-86db-62fdb0572a0e\") " pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-mvm2v" Dec 12 15:21:31 crc kubenswrapper[5123]: I1212 15:21:31.246965 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/f4afdf33-53ee-4eeb-83a3-a5a0dc656922-tmpfs\") pod \"packageserver-7d4fc7d867-hznms\" (UID: \"f4afdf33-53ee-4eeb-83a3-a5a0dc656922\") " pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-hznms" Dec 12 15:21:31 crc kubenswrapper[5123]: I1212 15:21:31.246993 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/632abe1b-1a43-457c-86db-62fdb0572a0e-proxy-tls\") pod \"machine-config-operator-67c9d58cbb-mvm2v\" (UID: \"632abe1b-1a43-457c-86db-62fdb0572a0e\") " pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-mvm2v" Dec 12 15:21:31 crc kubenswrapper[5123]: I1212 15:21:31.247039 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-dbftr\" (UniqueName: \"kubernetes.io/projected/788dd005-94a6-4a05-a0ce-c4dabe8dc04e-kube-api-access-dbftr\") pod \"ingress-operator-6b9cb4dbcf-bbdv4\" (UID: \"788dd005-94a6-4a05-a0ce-c4dabe8dc04e\") " pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-bbdv4" Dec 12 15:21:31 crc kubenswrapper[5123]: I1212 15:21:31.247063 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/77c05f1e-26be-4120-9eb2-0637d83f86af-config\") 
pod \"openshift-kube-scheduler-operator-54f497555d-dtknn\" (UID: \"77c05f1e-26be-4120-9eb2-0637d83f86af\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-dtknn" Dec 12 15:21:31 crc kubenswrapper[5123]: I1212 15:21:31.247091 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-44vvl\" (UniqueName: \"kubernetes.io/projected/6eb483de-06e5-4975-b29a-7fd9bc7674a9-kube-api-access-44vvl\") pod \"cni-sysctl-allowlist-ds-mdpg8\" (UID: \"6eb483de-06e5-4975-b29a-7fd9bc7674a9\") " pod="openshift-multus/cni-sysctl-allowlist-ds-mdpg8" Dec 12 15:21:31 crc kubenswrapper[5123]: I1212 15:21:31.247123 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/68ef1469-eefc-4e7d-b8a5-bf0550b84694-csi-data-dir\") pod \"csi-hostpathplugin-g9nc4\" (UID: \"68ef1469-eefc-4e7d-b8a5-bf0550b84694\") " pod="hostpath-provisioner/csi-hostpathplugin-g9nc4" Dec 12 15:21:31 crc kubenswrapper[5123]: I1212 15:21:31.247180 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/12e31d4b-fe5c-4f42-82f2-75389d8a34d6-package-server-manager-serving-cert\") pod \"package-server-manager-77f986bd66-9qnbt\" (UID: \"12e31d4b-fe5c-4f42-82f2-75389d8a34d6\") " pod="openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-9qnbt" Dec 12 15:21:31 crc kubenswrapper[5123]: I1212 15:21:31.247240 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/77c05f1e-26be-4120-9eb2-0637d83f86af-serving-cert\") pod \"openshift-kube-scheduler-operator-54f497555d-dtknn\" (UID: \"77c05f1e-26be-4120-9eb2-0637d83f86af\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-dtknn" Dec 12 15:21:31 crc kubenswrapper[5123]: I1212 15:21:31.247304 5123 
reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/d9b2cf1e-7b13-44dc-8819-74f4bd24c609-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-75ffdb6fcd-tmds4\" (UID: \"d9b2cf1e-7b13-44dc-8819-74f4bd24c609\") " pod="openshift-machine-api/control-plane-machine-set-operator-75ffdb6fcd-tmds4" Dec 12 15:21:31 crc kubenswrapper[5123]: I1212 15:21:31.247371 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/6eb483de-06e5-4975-b29a-7fd9bc7674a9-cni-sysctl-allowlist\") pod \"cni-sysctl-allowlist-ds-mdpg8\" (UID: \"6eb483de-06e5-4975-b29a-7fd9bc7674a9\") " pod="openshift-multus/cni-sysctl-allowlist-ds-mdpg8" Dec 12 15:21:31 crc kubenswrapper[5123]: I1212 15:21:31.247515 5123 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/7b7460e4-e37e-4643-9956-8097d8258066-srv-cert\") pod \"olm-operator-5cdf44d969-pj4ts\" (UID: \"7b7460e4-e37e-4643-9956-8097d8258066\") " pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-pj4ts" Dec 12 15:21:31 crc kubenswrapper[5123]: I1212 15:21:31.248193 5123 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/788dd005-94a6-4a05-a0ce-c4dabe8dc04e-trusted-ca\") pod \"ingress-operator-6b9cb4dbcf-bbdv4\" (UID: \"788dd005-94a6-4a05-a0ce-c4dabe8dc04e\") " pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-bbdv4" Dec 12 15:21:31 crc kubenswrapper[5123]: I1212 15:21:31.248604 5123 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/6eb483de-06e5-4975-b29a-7fd9bc7674a9-cni-sysctl-allowlist\") pod \"cni-sysctl-allowlist-ds-mdpg8\" (UID: \"6eb483de-06e5-4975-b29a-7fd9bc7674a9\") " 
pod="openshift-multus/cni-sysctl-allowlist-ds-mdpg8" Dec 12 15:21:31 crc kubenswrapper[5123]: I1212 15:21:31.249120 5123 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/68ef1469-eefc-4e7d-b8a5-bf0550b84694-csi-data-dir\") pod \"csi-hostpathplugin-g9nc4\" (UID: \"68ef1469-eefc-4e7d-b8a5-bf0550b84694\") " pod="hostpath-provisioner/csi-hostpathplugin-g9nc4" Dec 12 15:21:31 crc kubenswrapper[5123]: I1212 15:21:31.249261 5123 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/77c05f1e-26be-4120-9eb2-0637d83f86af-config\") pod \"openshift-kube-scheduler-operator-54f497555d-dtknn\" (UID: \"77c05f1e-26be-4120-9eb2-0637d83f86af\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-dtknn" Dec 12 15:21:31 crc kubenswrapper[5123]: I1212 15:21:31.249554 5123 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/f4afdf33-53ee-4eeb-83a3-a5a0dc656922-tmpfs\") pod \"packageserver-7d4fc7d867-hznms\" (UID: \"f4afdf33-53ee-4eeb-83a3-a5a0dc656922\") " pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-hznms" Dec 12 15:21:31 crc kubenswrapper[5123]: I1212 15:21:31.249768 5123 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/b5bd3e23-721c-45a0-be10-620b5a281623-webhook-certs\") pod \"multus-admission-controller-69db94689b-gg4kh\" (UID: \"b5bd3e23-721c-45a0-be10-620b5a281623\") " pod="openshift-multus/multus-admission-controller-69db94689b-gg4kh" Dec 12 15:21:31 crc kubenswrapper[5123]: I1212 15:21:31.250660 5123 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/6bf5e136-4d51-49ba-bb1f-3e4fd5c82154-srv-cert\") pod \"catalog-operator-75ff9f647d-c6l4m\" (UID: \"6bf5e136-4d51-49ba-bb1f-3e4fd5c82154\") 
" pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-c6l4m" Dec 12 15:21:31 crc kubenswrapper[5123]: I1212 15:21:31.250858 5123 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/6bf5e136-4d51-49ba-bb1f-3e4fd5c82154-tmpfs\") pod \"catalog-operator-75ff9f647d-c6l4m\" (UID: \"6bf5e136-4d51-49ba-bb1f-3e4fd5c82154\") " pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-c6l4m" Dec 12 15:21:31 crc kubenswrapper[5123]: I1212 15:21:31.251332 5123 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/626346f0-e585-4a37-8c9b-c6e36ee113bc-tmp-dir\") pod \"dns-default-jd7j9\" (UID: \"626346f0-e585-4a37-8c9b-c6e36ee113bc\") " pod="openshift-dns/dns-default-jd7j9" Dec 12 15:21:31 crc kubenswrapper[5123]: I1212 15:21:31.251631 5123 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/626346f0-e585-4a37-8c9b-c6e36ee113bc-config-volume\") pod \"dns-default-jd7j9\" (UID: \"626346f0-e585-4a37-8c9b-c6e36ee113bc\") " pod="openshift-dns/dns-default-jd7j9" Dec 12 15:21:31 crc kubenswrapper[5123]: I1212 15:21:31.251831 5123 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/17ce8feb-99e5-42f3-a808-2dd39bc57377-marketplace-trusted-ca\") pod \"marketplace-operator-547dbd544d-rkcvb\" (UID: \"17ce8feb-99e5-42f3-a808-2dd39bc57377\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-rkcvb" Dec 12 15:21:31 crc kubenswrapper[5123]: I1212 15:21:31.252327 5123 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/7b7460e4-e37e-4643-9956-8097d8258066-profile-collector-cert\") pod \"olm-operator-5cdf44d969-pj4ts\" (UID: \"7b7460e4-e37e-4643-9956-8097d8258066\") " 
pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-pj4ts" Dec 12 15:21:31 crc kubenswrapper[5123]: I1212 15:21:31.253333 5123 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/409e180b-f9f6-41a7-bd20-51095ac1261a-node-bootstrap-token\") pod \"machine-config-server-pxgwd\" (UID: \"409e180b-f9f6-41a7-bd20-51095ac1261a\") " pod="openshift-machine-config-operator/machine-config-server-pxgwd" Dec 12 15:21:31 crc kubenswrapper[5123]: I1212 15:21:31.271618 5123 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/632abe1b-1a43-457c-86db-62fdb0572a0e-images\") pod \"machine-config-operator-67c9d58cbb-mvm2v\" (UID: \"632abe1b-1a43-457c-86db-62fdb0572a0e\") " pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-mvm2v" Dec 12 15:21:31 crc kubenswrapper[5123]: I1212 15:21:31.282177 5123 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/68ef1469-eefc-4e7d-b8a5-bf0550b84694-registration-dir\") pod \"csi-hostpathplugin-g9nc4\" (UID: \"68ef1469-eefc-4e7d-b8a5-bf0550b84694\") " pod="hostpath-provisioner/csi-hostpathplugin-g9nc4" Dec 12 15:21:31 crc kubenswrapper[5123]: I1212 15:21:31.397794 5123 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 12 15:21:31 crc kubenswrapper[5123]: E1212 15:21:31.398174 5123 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. 
No retries permitted until 2025-12-12 15:21:31.898119531 +0000 UTC m=+120.708072052 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 15:21:31 crc kubenswrapper[5123]: I1212 15:21:31.398410 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-ts2mt\" (UID: \"40d826be-27f2-49a2-afa5-a6a9cc8f9bf6\") " pod="openshift-image-registry/image-registry-66587d64c8-ts2mt" Dec 12 15:21:31 crc kubenswrapper[5123]: E1212 15:21:31.398971 5123 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-12 15:21:31.898940777 +0000 UTC m=+120.708893358 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-ts2mt" (UID: "40d826be-27f2-49a2-afa5-a6a9cc8f9bf6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 15:21:31 crc kubenswrapper[5123]: I1212 15:21:31.681023 5123 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress/router-default-68cf44c8b8-cnq9c" Dec 12 15:21:31 crc kubenswrapper[5123]: I1212 15:21:31.681920 5123 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/f4afdf33-53ee-4eeb-83a3-a5a0dc656922-apiservice-cert\") pod \"packageserver-7d4fc7d867-hznms\" (UID: \"f4afdf33-53ee-4eeb-83a3-a5a0dc656922\") " pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-hznms" Dec 12 15:21:31 crc kubenswrapper[5123]: I1212 15:21:31.682448 5123 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ready\" (UniqueName: \"kubernetes.io/empty-dir/6eb483de-06e5-4975-b29a-7fd9bc7674a9-ready\") pod \"cni-sysctl-allowlist-ds-mdpg8\" (UID: \"6eb483de-06e5-4975-b29a-7fd9bc7674a9\") " pod="openshift-multus/cni-sysctl-allowlist-ds-mdpg8" Dec 12 15:21:31 crc kubenswrapper[5123]: I1212 15:21:31.683009 5123 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/d9b2cf1e-7b13-44dc-8819-74f4bd24c609-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-75ffdb6fcd-tmds4\" (UID: \"d9b2cf1e-7b13-44dc-8819-74f4bd24c609\") " pod="openshift-machine-api/control-plane-machine-set-operator-75ffdb6fcd-tmds4" Dec 12 15:21:31 crc kubenswrapper[5123]: I1212 15:21:31.683089 5123 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/17ce8feb-99e5-42f3-a808-2dd39bc57377-marketplace-operator-metrics\") pod \"marketplace-operator-547dbd544d-rkcvb\" (UID: \"17ce8feb-99e5-42f3-a808-2dd39bc57377\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-rkcvb" Dec 12 15:21:31 crc kubenswrapper[5123]: I1212 15:21:31.683262 5123 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 12 15:21:31 crc kubenswrapper[5123]: E1212 15:21:31.684168 5123 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-12 15:21:32.184149375 +0000 UTC m=+120.994101886 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 15:21:31 crc kubenswrapper[5123]: I1212 15:21:31.694323 5123 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/788dd005-94a6-4a05-a0ce-c4dabe8dc04e-metrics-tls\") pod \"ingress-operator-6b9cb4dbcf-bbdv4\" (UID: \"788dd005-94a6-4a05-a0ce-c4dabe8dc04e\") " pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-bbdv4" Dec 12 15:21:31 crc kubenswrapper[5123]: I1212 15:21:31.694753 5123 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5e31e050-9a37-4e9b-8c0e-3fc2ed640421-serving-cert\") pod \"service-ca-operator-5b9c976747-9t6q7\" (UID: \"5e31e050-9a37-4e9b-8c0e-3fc2ed640421\") " pod="openshift-service-ca-operator/service-ca-operator-5b9c976747-9t6q7" Dec 12 15:21:31 crc kubenswrapper[5123]: I1212 15:21:31.696533 5123 operation_generator.go:615] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-7wldf\" (UniqueName: \"kubernetes.io/projected/b5bd3e23-721c-45a0-be10-620b5a281623-kube-api-access-7wldf\") pod \"multus-admission-controller-69db94689b-gg4kh\" (UID: \"b5bd3e23-721c-45a0-be10-620b5a281623\") " pod="openshift-multus/multus-admission-controller-69db94689b-gg4kh" Dec 12 15:21:31 crc kubenswrapper[5123]: I1212 15:21:31.701851 5123 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-nhjbs\" (UniqueName: \"kubernetes.io/projected/7b7460e4-e37e-4643-9956-8097d8258066-kube-api-access-nhjbs\") pod \"olm-operator-5cdf44d969-pj4ts\" (UID: \"7b7460e4-e37e-4643-9956-8097d8258066\") " pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-pj4ts" Dec 12 15:21:31 crc kubenswrapper[5123]: I1212 15:21:31.713346 5123 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress-canary\"/\"kube-root-ca.crt\"" Dec 12 15:21:31 crc kubenswrapper[5123]: I1212 15:21:31.713476 5123 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/632abe1b-1a43-457c-86db-62fdb0572a0e-proxy-tls\") pod \"machine-config-operator-67c9d58cbb-mvm2v\" (UID: \"632abe1b-1a43-457c-86db-62fdb0572a0e\") " pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-mvm2v" Dec 12 15:21:31 crc kubenswrapper[5123]: I1212 15:21:31.718862 5123 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-jrbbb\" (UniqueName: \"kubernetes.io/projected/735555bc-661a-4a48-a615-c88944194992-kube-api-access-jrbbb\") pod \"apiserver-8596bd845d-qvmj6\" (UID: \"735555bc-661a-4a48-a615-c88944194992\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-qvmj6" Dec 12 15:21:31 crc kubenswrapper[5123]: I1212 15:21:31.719434 5123 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/77c05f1e-26be-4120-9eb2-0637d83f86af-kube-api-access\") pod 
\"openshift-kube-scheduler-operator-54f497555d-dtknn\" (UID: \"77c05f1e-26be-4120-9eb2-0637d83f86af\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-dtknn" Dec 12 15:21:31 crc kubenswrapper[5123]: I1212 15:21:31.722871 5123 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-6vpw9\" (UniqueName: \"kubernetes.io/projected/f4afdf33-53ee-4eeb-83a3-a5a0dc656922-kube-api-access-6vpw9\") pod \"packageserver-7d4fc7d867-hznms\" (UID: \"f4afdf33-53ee-4eeb-83a3-a5a0dc656922\") " pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-hznms" Dec 12 15:21:31 crc kubenswrapper[5123]: I1212 15:21:31.723643 5123 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/d22355c6-2b0f-4caa-aa4b-92bd124103ad-proxy-tls\") pod \"machine-config-controller-f9cdd68f7-t8xgq\" (UID: \"d22355c6-2b0f-4caa-aa4b-92bd124103ad\") " pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-t8xgq" Dec 12 15:21:31 crc kubenswrapper[5123]: I1212 15:21:31.727487 5123 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/286dff49-96d3-4c06-aa40-a4168098880e-serving-cert\") pod \"kube-apiserver-operator-575994946d-lx5l5\" (UID: \"286dff49-96d3-4c06-aa40-a4168098880e\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-lx5l5" Dec 12 15:21:31 crc kubenswrapper[5123]: I1212 15:21:31.729106 5123 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-fx6qc\" (UniqueName: \"kubernetes.io/projected/9d4713bf-88da-43eb-8dd8-2808e76b53c4-kube-api-access-fx6qc\") pod \"service-ca-74545575db-qpxdh\" (UID: \"9d4713bf-88da-43eb-8dd8-2808e76b53c4\") " pod="openshift-service-ca/service-ca-74545575db-qpxdh" Dec 12 15:21:31 crc kubenswrapper[5123]: I1212 15:21:31.730971 5123 operation_generator.go:615] "MountVolume.SetUp succeeded for 
volume \"kube-api-access-ckwfq\" (UniqueName: \"kubernetes.io/projected/626346f0-e585-4a37-8c9b-c6e36ee113bc-kube-api-access-ckwfq\") pod \"dns-default-jd7j9\" (UID: \"626346f0-e585-4a37-8c9b-c6e36ee113bc\") " pod="openshift-dns/dns-default-jd7j9" Dec 12 15:21:31 crc kubenswrapper[5123]: I1212 15:21:31.731669 5123 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-nts2m\" (UniqueName: \"kubernetes.io/projected/d9b2cf1e-7b13-44dc-8819-74f4bd24c609-kube-api-access-nts2m\") pod \"control-plane-machine-set-operator-75ffdb6fcd-tmds4\" (UID: \"d9b2cf1e-7b13-44dc-8819-74f4bd24c609\") " pod="openshift-machine-api/control-plane-machine-set-operator-75ffdb6fcd-tmds4" Dec 12 15:21:31 crc kubenswrapper[5123]: I1212 15:21:31.734266 5123 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-l2n7x\" (UniqueName: \"kubernetes.io/projected/6bf5e136-4d51-49ba-bb1f-3e4fd5c82154-kube-api-access-l2n7x\") pod \"catalog-operator-75ff9f647d-c6l4m\" (UID: \"6bf5e136-4d51-49ba-bb1f-3e4fd5c82154\") " pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-c6l4m" Dec 12 15:21:31 crc kubenswrapper[5123]: I1212 15:21:31.748750 5123 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/12e31d4b-fe5c-4f42-82f2-75389d8a34d6-package-server-manager-serving-cert\") pod \"package-server-manager-77f986bd66-9qnbt\" (UID: \"12e31d4b-fe5c-4f42-82f2-75389d8a34d6\") " pod="openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-9qnbt" Dec 12 15:21:31 crc kubenswrapper[5123]: I1212 15:21:31.750906 5123 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-wzkrq\" (UniqueName: \"kubernetes.io/projected/d22355c6-2b0f-4caa-aa4b-92bd124103ad-kube-api-access-wzkrq\") pod \"machine-config-controller-f9cdd68f7-t8xgq\" (UID: \"d22355c6-2b0f-4caa-aa4b-92bd124103ad\") " 
pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-t8xgq" Dec 12 15:21:31 crc kubenswrapper[5123]: I1212 15:21:31.751459 5123 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress-canary\"/\"openshift-service-ca.crt\"" Dec 12 15:21:31 crc kubenswrapper[5123]: I1212 15:21:31.752587 5123 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"certs\" (UniqueName: \"kubernetes.io/secret/409e180b-f9f6-41a7-bd20-51095ac1261a-certs\") pod \"machine-config-server-pxgwd\" (UID: \"409e180b-f9f6-41a7-bd20-51095ac1261a\") " pod="openshift-machine-config-operator/machine-config-server-pxgwd" Dec 12 15:21:31 crc kubenswrapper[5123]: I1212 15:21:31.753019 5123 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-44vvl\" (UniqueName: \"kubernetes.io/projected/6eb483de-06e5-4975-b29a-7fd9bc7674a9-kube-api-access-44vvl\") pod \"cni-sysctl-allowlist-ds-mdpg8\" (UID: \"6eb483de-06e5-4975-b29a-7fd9bc7674a9\") " pod="openshift-multus/cni-sysctl-allowlist-ds-mdpg8" Dec 12 15:21:31 crc kubenswrapper[5123]: I1212 15:21:31.753441 5123 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/788dd005-94a6-4a05-a0ce-c4dabe8dc04e-bound-sa-token\") pod \"ingress-operator-6b9cb4dbcf-bbdv4\" (UID: \"788dd005-94a6-4a05-a0ce-c4dabe8dc04e\") " pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-bbdv4" Dec 12 15:21:31 crc kubenswrapper[5123]: I1212 15:21:31.761126 5123 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/439fab76-d95a-43fc-b800-b540d053001d-cert\") pod \"ingress-canary-hkjk6\" (UID: \"439fab76-d95a-43fc-b800-b540d053001d\") " pod="openshift-ingress-canary/ingress-canary-hkjk6" Dec 12 15:21:31 crc kubenswrapper[5123]: I1212 15:21:31.761651 5123 reflector.go:430] "Caches populated" type="*v1.ConfigMap" 
reflector="object-\"hostpath-provisioner\"/\"kube-root-ca.crt\"" Dec 12 15:21:31 crc kubenswrapper[5123]: I1212 15:21:31.761916 5123 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/9d4713bf-88da-43eb-8dd8-2808e76b53c4-signing-key\") pod \"service-ca-74545575db-qpxdh\" (UID: \"9d4713bf-88da-43eb-8dd8-2808e76b53c4\") " pod="openshift-service-ca/service-ca-74545575db-qpxdh" Dec 12 15:21:31 crc kubenswrapper[5123]: I1212 15:21:31.762015 5123 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-service-ca-operator\"/\"kube-root-ca.crt\"" Dec 12 15:21:31 crc kubenswrapper[5123]: I1212 15:21:31.762082 5123 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress-operator\"/\"kube-root-ca.crt\"" Dec 12 15:21:31 crc kubenswrapper[5123]: I1212 15:21:31.772004 5123 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress-operator\"/\"openshift-service-ca.crt\"" Dec 12 15:21:31 crc kubenswrapper[5123]: I1212 15:21:31.772645 5123 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-service-ca-operator\"/\"openshift-service-ca.crt\"" Dec 12 15:21:31 crc kubenswrapper[5123]: I1212 15:21:31.774827 5123 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"hostpath-provisioner\"/\"openshift-service-ca.crt\"" Dec 12 15:21:31 crc kubenswrapper[5123]: I1212 15:21:31.775190 5123 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-marketplace\"/\"kube-root-ca.crt\"" Dec 12 15:21:31 crc kubenswrapper[5123]: I1212 15:21:31.780558 5123 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-8xkzn\" (UniqueName: \"kubernetes.io/projected/12e31d4b-fe5c-4f42-82f2-75389d8a34d6-kube-api-access-8xkzn\") pod \"package-server-manager-77f986bd66-9qnbt\" (UID: 
\"12e31d4b-fe5c-4f42-82f2-75389d8a34d6\") " pod="openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-9qnbt" Dec 12 15:21:31 crc kubenswrapper[5123]: I1212 15:21:31.783921 5123 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-oauth-apiserver\"/\"oauth-apiserver-sa-dockercfg-qqw4z\"" Dec 12 15:21:31 crc kubenswrapper[5123]: I1212 15:21:31.784327 5123 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-marketplace\"/\"openshift-service-ca.crt\"" Dec 12 15:21:31 crc kubenswrapper[5123]: I1212 15:21:31.786584 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-ts2mt\" (UID: \"40d826be-27f2-49a2-afa5-a6a9cc8f9bf6\") " pod="openshift-image-registry/image-registry-66587d64c8-ts2mt" Dec 12 15:21:31 crc kubenswrapper[5123]: E1212 15:21:31.787067 5123 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-12 15:21:32.287048309 +0000 UTC m=+121.097000810 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-ts2mt" (UID: "40d826be-27f2-49a2-afa5-a6a9cc8f9bf6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 15:21:31 crc kubenswrapper[5123]: I1212 15:21:31.787167 5123 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/77c05f1e-26be-4120-9eb2-0637d83f86af-serving-cert\") pod \"openshift-kube-scheduler-operator-54f497555d-dtknn\" (UID: \"77c05f1e-26be-4120-9eb2-0637d83f86af\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-dtknn" Dec 12 15:21:31 crc kubenswrapper[5123]: I1212 15:21:31.788658 5123 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-oauth-apiserver/apiserver-8596bd845d-qvmj6" Dec 12 15:21:31 crc kubenswrapper[5123]: I1212 15:21:31.809337 5123 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/626346f0-e585-4a37-8c9b-c6e36ee113bc-metrics-tls\") pod \"dns-default-jd7j9\" (UID: \"626346f0-e585-4a37-8c9b-c6e36ee113bc\") " pod="openshift-dns/dns-default-jd7j9" Dec 12 15:21:31 crc kubenswrapper[5123]: I1212 15:21:31.810683 5123 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-5xphh\" (UniqueName: \"kubernetes.io/projected/439fab76-d95a-43fc-b800-b540d053001d-kube-api-access-5xphh\") pod \"ingress-canary-hkjk6\" (UID: \"439fab76-d95a-43fc-b800-b540d053001d\") " pod="openshift-ingress-canary/ingress-canary-hkjk6" Dec 12 15:21:31 crc kubenswrapper[5123]: I1212 15:21:31.839887 5123 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-q7c6x\" (UniqueName: \"kubernetes.io/projected/17ce8feb-99e5-42f3-a808-2dd39bc57377-kube-api-access-q7c6x\") pod \"marketplace-operator-547dbd544d-rkcvb\" (UID: \"17ce8feb-99e5-42f3-a808-2dd39bc57377\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-rkcvb" Dec 12 15:21:31 crc kubenswrapper[5123]: I1212 15:21:31.848446 5123 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-apiserver-operator\"/\"kube-apiserver-operator-dockercfg-bf7fj\"" Dec 12 15:21:31 crc kubenswrapper[5123]: I1212 15:21:31.848822 5123 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"olm-operator-serviceaccount-dockercfg-4gqzj\"" Dec 12 15:21:31 crc kubenswrapper[5123]: I1212 15:21:31.849320 5123 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-dbftr\" (UniqueName: \"kubernetes.io/projected/788dd005-94a6-4a05-a0ce-c4dabe8dc04e-kube-api-access-dbftr\") pod 
\"ingress-operator-6b9cb4dbcf-bbdv4\" (UID: \"788dd005-94a6-4a05-a0ce-c4dabe8dc04e\") " pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-bbdv4" Dec 12 15:21:31 crc kubenswrapper[5123]: I1212 15:21:31.850290 5123 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-bgqlk\" (UniqueName: \"kubernetes.io/projected/68ef1469-eefc-4e7d-b8a5-bf0550b84694-kube-api-access-bgqlk\") pod \"csi-hostpathplugin-g9nc4\" (UID: \"68ef1469-eefc-4e7d-b8a5-bf0550b84694\") " pod="hostpath-provisioner/csi-hostpathplugin-g9nc4" Dec 12 15:21:31 crc kubenswrapper[5123]: I1212 15:21:31.850825 5123 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-cmqz8\" (UniqueName: \"kubernetes.io/projected/632abe1b-1a43-457c-86db-62fdb0572a0e-kube-api-access-cmqz8\") pod \"machine-config-operator-67c9d58cbb-mvm2v\" (UID: \"632abe1b-1a43-457c-86db-62fdb0572a0e\") " pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-mvm2v" Dec 12 15:21:31 crc kubenswrapper[5123]: I1212 15:21:31.851066 5123 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-lx5l5" Dec 12 15:21:31 crc kubenswrapper[5123]: I1212 15:21:31.855001 5123 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-vx5zg\" (UniqueName: \"kubernetes.io/projected/5e31e050-9a37-4e9b-8c0e-3fc2ed640421-kube-api-access-vx5zg\") pod \"service-ca-operator-5b9c976747-9t6q7\" (UID: \"5e31e050-9a37-4e9b-8c0e-3fc2ed640421\") " pod="openshift-service-ca-operator/service-ca-operator-5b9c976747-9t6q7" Dec 12 15:21:31 crc kubenswrapper[5123]: I1212 15:21:31.855055 5123 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-9qnbt" Dec 12 15:21:31 crc kubenswrapper[5123]: I1212 15:21:31.855444 5123 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-scheduler-operator\"/\"openshift-kube-scheduler-operator-dockercfg-2wbn2\"" Dec 12 15:21:31 crc kubenswrapper[5123]: I1212 15:21:31.857398 5123 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-dtknn" Dec 12 15:21:31 crc kubenswrapper[5123]: I1212 15:21:31.858458 5123 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"machine-config-controller-dockercfg-xnj77\"" Dec 12 15:21:31 crc kubenswrapper[5123]: I1212 15:21:31.865615 5123 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-t8xgq" Dec 12 15:21:31 crc kubenswrapper[5123]: I1212 15:21:31.867397 5123 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-hznms" Dec 12 15:21:31 crc kubenswrapper[5123]: I1212 15:21:31.881948 5123 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-api\"/\"control-plane-machine-set-operator-dockercfg-gnx66\"" Dec 12 15:21:31 crc kubenswrapper[5123]: I1212 15:21:31.897376 5123 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-75ffdb6fcd-tmds4" Dec 12 15:21:31 crc kubenswrapper[5123]: I1212 15:21:31.897501 5123 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-c6l4m" Dec 12 15:21:31 crc kubenswrapper[5123]: I1212 15:21:31.906349 5123 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 12 15:21:31 crc kubenswrapper[5123]: E1212 15:21:31.907014 5123 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-12 15:21:32.406970954 +0000 UTC m=+121.216923475 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 15:21:31 crc kubenswrapper[5123]: I1212 15:21:31.929418 5123 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"marketplace-operator-dockercfg-2cfkp\"" Dec 12 15:21:31 crc kubenswrapper[5123]: I1212 15:21:31.931433 5123 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-gckf8\" (UniqueName: \"kubernetes.io/projected/409e180b-f9f6-41a7-bd20-51095ac1261a-kube-api-access-gckf8\") pod \"machine-config-server-pxgwd\" (UID: \"409e180b-f9f6-41a7-bd20-51095ac1261a\") " pod="openshift-machine-config-operator/machine-config-server-pxgwd" Dec 12 15:21:31 crc 
kubenswrapper[5123]: I1212 15:21:31.936367 5123 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-547dbd544d-rkcvb" Dec 12 15:21:31 crc kubenswrapper[5123]: I1212 15:21:31.942368 5123 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-pj4ts" Dec 12 15:21:31 crc kubenswrapper[5123]: I1212 15:21:31.942795 5123 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-service-ca-operator\"/\"service-ca-operator-dockercfg-bjqfd\"" Dec 12 15:21:31 crc kubenswrapper[5123]: I1212 15:21:31.943191 5123 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"multus-ac-dockercfg-gj7jx\"" Dec 12 15:21:31 crc kubenswrapper[5123]: I1212 15:21:31.943306 5123 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-5b9c976747-9t6q7" Dec 12 15:21:31 crc kubenswrapper[5123]: I1212 15:21:31.945316 5123 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-69db94689b-gg4kh" Dec 12 15:21:31 crc kubenswrapper[5123]: I1212 15:21:31.959329 5123 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-service-ca\"/\"service-ca-dockercfg-bgxvm\"" Dec 12 15:21:31 crc kubenswrapper[5123]: I1212 15:21:31.972419 5123 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-74545575db-qpxdh" Dec 12 15:21:32 crc kubenswrapper[5123]: I1212 15:21:31.983453 5123 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-dns\"/\"dns-dockercfg-kpvmz\"" Dec 12 15:21:32 crc kubenswrapper[5123]: I1212 15:21:31.991723 5123 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/dns-default-jd7j9" Dec 12 15:21:32 crc kubenswrapper[5123]: I1212 15:21:32.011480 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-ts2mt\" (UID: \"40d826be-27f2-49a2-afa5-a6a9cc8f9bf6\") " pod="openshift-image-registry/image-registry-66587d64c8-ts2mt" Dec 12 15:21:32 crc kubenswrapper[5123]: E1212 15:21:32.012026 5123 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-12 15:21:32.512003236 +0000 UTC m=+121.321955747 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-ts2mt" (UID: "40d826be-27f2-49a2-afa5-a6a9cc8f9bf6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 15:21:32 crc kubenswrapper[5123]: W1212 15:21:32.022424 5123 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod36a3f9f0_c1ab_4411_a9e8_7795ab55a6e9.slice/crio-ad50cf64097d4d475f73d7b1ea0dac02a5b9133eb3d771639c61a7ecfb89dc40 WatchSource:0}: Error finding container ad50cf64097d4d475f73d7b1ea0dac02a5b9133eb3d771639c61a7ecfb89dc40: Status 404 returned error can't find the container with id ad50cf64097d4d475f73d7b1ea0dac02a5b9133eb3d771639c61a7ecfb89dc40 Dec 12 15:21:32 crc kubenswrapper[5123]: I1212 15:21:32.041792 5123 reflector.go:430] "Caches populated" type="*v1.Secret" 
reflector="object-\"openshift-machine-config-operator\"/\"machine-config-operator-dockercfg-sw6nc\"" Dec 12 15:21:32 crc kubenswrapper[5123]: I1212 15:21:32.048447 5123 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-mvm2v" Dec 12 15:21:32 crc kubenswrapper[5123]: I1212 15:21:32.049202 5123 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/cni-sysctl-allowlist-ds-mdpg8" Dec 12 15:21:32 crc kubenswrapper[5123]: I1212 15:21:32.073827 5123 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress-canary\"/\"default-dockercfg-9pgs7\"" Dec 12 15:21:32 crc kubenswrapper[5123]: I1212 15:21:32.074370 5123 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-hkjk6" Dec 12 15:21:32 crc kubenswrapper[5123]: I1212 15:21:32.081276 5123 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"machine-config-server-dockercfg-dzw6b\"" Dec 12 15:21:32 crc kubenswrapper[5123]: I1212 15:21:32.088158 5123 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-pxgwd" Dec 12 15:21:32 crc kubenswrapper[5123]: I1212 15:21:32.117334 5123 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 12 15:21:32 crc kubenswrapper[5123]: E1212 15:21:32.118000 5123 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-12 15:21:32.617976185 +0000 UTC m=+121.427928696 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 15:21:32 crc kubenswrapper[5123]: I1212 15:21:32.126709 5123 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress-operator\"/\"ingress-operator-dockercfg-74nwh\"" Dec 12 15:21:32 crc kubenswrapper[5123]: I1212 15:21:32.139022 5123 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-bbdv4" Dec 12 15:21:32 crc kubenswrapper[5123]: I1212 15:21:32.216791 5123 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-755bb95488-68259"] Dec 12 15:21:32 crc kubenswrapper[5123]: I1212 15:21:32.585429 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Dec 12 15:21:32 crc kubenswrapper[5123]: I1212 15:21:32.585522 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-l7w75\" (UniqueName: \"kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75\") pod \"network-check-source-5bb8f5cd97-xdvz5\" (UID: \"f863fff9-286a-45fa-b8f0-8a86994b8440\") " pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Dec 12 15:21:32 crc kubenswrapper[5123]: I1212 15:21:32.585608 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Dec 12 15:21:32 crc kubenswrapper[5123]: I1212 15:21:32.585999 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-ts2mt\" (UID: \"40d826be-27f2-49a2-afa5-a6a9cc8f9bf6\") " 
pod="openshift-image-registry/image-registry-66587d64c8-ts2mt" Dec 12 15:21:32 crc kubenswrapper[5123]: E1212 15:21:32.586425 5123 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-12 15:21:33.086406097 +0000 UTC m=+121.896358608 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-ts2mt" (UID: "40d826be-27f2-49a2-afa5-a6a9cc8f9bf6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 15:21:32 crc kubenswrapper[5123]: I1212 15:21:32.589276 5123 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-console\"/\"networking-console-plugin\"" Dec 12 15:21:32 crc kubenswrapper[5123]: I1212 15:21:32.596292 5123 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"hostpath-provisioner\"/\"csi-hostpath-provisioner-sa-dockercfg-7dcws\"" Dec 12 15:21:32 crc kubenswrapper[5123]: I1212 15:21:32.596503 5123 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-diagnostics\"/\"kube-root-ca.crt\"" Dec 12 15:21:32 crc kubenswrapper[5123]: I1212 15:21:32.596566 5123 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-network-console\"/\"networking-console-plugin-cert\"" Dec 12 15:21:32 crc kubenswrapper[5123]: I1212 15:21:32.601896 5123 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-g9nc4" Dec 12 15:21:32 crc kubenswrapper[5123]: I1212 15:21:32.607793 5123 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Dec 12 15:21:32 crc kubenswrapper[5123]: I1212 15:21:32.609536 5123 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-diagnostics\"/\"openshift-service-ca.crt\"" Dec 12 15:21:32 crc kubenswrapper[5123]: I1212 15:21:32.635024 5123 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29425875-lxsft"] Dec 12 15:21:32 crc kubenswrapper[5123]: I1212 15:21:32.635110 5123 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-747b44746d-xhd9t"] Dec 12 15:21:32 crc kubenswrapper[5123]: I1212 15:21:32.653196 5123 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-l7w75\" (UniqueName: \"kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75\") pod \"network-check-source-5bb8f5cd97-xdvz5\" (UID: \"f863fff9-286a-45fa-b8f0-8a86994b8440\") " pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Dec 12 15:21:32 crc kubenswrapper[5123]: I1212 15:21:32.655505 5123 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Dec 12 15:21:32 crc kubenswrapper[5123]: I1212 15:21:32.689597 5123 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 12 15:21:32 crc kubenswrapper[5123]: E1212 15:21:32.690082 5123 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-12 15:21:33.190060395 +0000 UTC m=+122.000012906 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 15:21:32 crc kubenswrapper[5123]: I1212 15:21:32.702203 5123 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Dec 12 15:21:32 crc kubenswrapper[5123]: I1212 15:21:32.765942 5123 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-dvgzb"] Dec 12 15:21:32 crc 
kubenswrapper[5123]: W1212 15:21:32.772786 5123 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9368bb85_0c25_4d7d_884c_7ebea4cf3336.slice/crio-b56e39dd6ff2479db900af761eb7eea82a89b85a7385fc78466d683701a82bab WatchSource:0}: Error finding container b56e39dd6ff2479db900af761eb7eea82a89b85a7385fc78466d683701a82bab: Status 404 returned error can't find the container with id b56e39dd6ff2479db900af761eb7eea82a89b85a7385fc78466d683701a82bab Dec 12 15:21:32 crc kubenswrapper[5123]: I1212 15:21:32.792262 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-ts2mt\" (UID: \"40d826be-27f2-49a2-afa5-a6a9cc8f9bf6\") " pod="openshift-image-registry/image-registry-66587d64c8-ts2mt" Dec 12 15:21:32 crc kubenswrapper[5123]: E1212 15:21:32.792858 5123 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-12 15:21:33.292836744 +0000 UTC m=+122.102789255 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-ts2mt" (UID: "40d826be-27f2-49a2-afa5-a6a9cc8f9bf6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 15:21:33 crc kubenswrapper[5123]: I1212 15:21:33.314106 5123 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 12 15:21:33 crc kubenswrapper[5123]: I1212 15:21:33.314762 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/e6c3a697-51e4-44dd-a38c-3287db85ce50-metrics-certs\") pod \"network-metrics-daemon-hmprz\" (UID: \"e6c3a697-51e4-44dd-a38c-3287db85ce50\") " pod="openshift-multus/network-metrics-daemon-hmprz" Dec 12 15:21:33 crc kubenswrapper[5123]: I1212 15:21:33.314810 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-gwt8b\" (UniqueName: \"kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b\") pod \"network-check-target-fhkjl\" (UID: \"17b87002-b798-480a-8e17-83053d698239\") " pod="openshift-network-diagnostics/network-check-target-fhkjl" Dec 12 15:21:33 crc kubenswrapper[5123]: I1212 15:21:33.333921 5123 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Dec 12 15:21:33 crc kubenswrapper[5123]: E1212 15:21:33.334744 5123 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-12 15:21:34.334697009 +0000 UTC m=+123.144649520 (durationBeforeRetry 1s). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 15:21:33 crc kubenswrapper[5123]: I1212 15:21:33.338602 5123 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"metrics-daemon-secret\"" Dec 12 15:21:33 crc kubenswrapper[5123]: I1212 15:21:33.356734 5123 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-gwt8b\" (UniqueName: \"kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b\") pod \"network-check-target-fhkjl\" (UID: \"17b87002-b798-480a-8e17-83053d698239\") " pod="openshift-network-diagnostics/network-check-target-fhkjl" Dec 12 15:21:33 crc kubenswrapper[5123]: I1212 15:21:33.358380 5123 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/e6c3a697-51e4-44dd-a38c-3287db85ce50-metrics-certs\") pod \"network-metrics-daemon-hmprz\" (UID: \"e6c3a697-51e4-44dd-a38c-3287db85ce50\") " pod="openshift-multus/network-metrics-daemon-hmprz" Dec 12 15:21:33 crc kubenswrapper[5123]: I1212 15:21:33.359019 5123 
kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-755bb95488-68259" event={"ID":"9368bb85-0c25-4d7d-884c-7ebea4cf3336","Type":"ContainerStarted","Data":"b56e39dd6ff2479db900af761eb7eea82a89b85a7385fc78466d683701a82bab"} Dec 12 15:21:33 crc kubenswrapper[5123]: I1212 15:21:33.377185 5123 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-54c688565-4b7jt" event={"ID":"36a3f9f0-c1ab-4411-a9e8-7795ab55a6e9","Type":"ContainerStarted","Data":"ad50cf64097d4d475f73d7b1ea0dac02a5b9133eb3d771639c61a7ecfb89dc40"} Dec 12 15:21:33 crc kubenswrapper[5123]: I1212 15:21:33.378685 5123 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-68cf44c8b8-cnq9c" event={"ID":"dd669a9c-af5d-4084-bda4-81a455d4c281","Type":"ContainerStarted","Data":"7eac06b548746087ec27c6dbebe646a68f0a60cfea6a2f9c995c197437cbc7b8"} Dec 12 15:21:33 crc kubenswrapper[5123]: I1212 15:21:33.491365 5123 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 12 15:21:33 crc kubenswrapper[5123]: E1212 15:21:33.491728 5123 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-12 15:21:33.991688062 +0000 UTC m=+122.801640573 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 15:21:33 crc kubenswrapper[5123]: I1212 15:21:33.491990 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-ts2mt\" (UID: \"40d826be-27f2-49a2-afa5-a6a9cc8f9bf6\") " pod="openshift-image-registry/image-registry-66587d64c8-ts2mt" Dec 12 15:21:33 crc kubenswrapper[5123]: E1212 15:21:33.503025 5123 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-12 15:21:34.003002646 +0000 UTC m=+122.812955157 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-ts2mt" (UID: "40d826be-27f2-49a2-afa5-a6a9cc8f9bf6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 15:21:33 crc kubenswrapper[5123]: I1212 15:21:33.521962 5123 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"metrics-daemon-sa-dockercfg-t8n29\"" Dec 12 15:21:33 crc kubenswrapper[5123]: I1212 15:21:33.530301 5123 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hmprz" Dec 12 15:21:33 crc kubenswrapper[5123]: I1212 15:21:33.597733 5123 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl" Dec 12 15:21:33 crc kubenswrapper[5123]: I1212 15:21:33.599127 5123 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 12 15:21:33 crc kubenswrapper[5123]: E1212 15:21:33.599832 5123 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-12 15:21:34.09979836 +0000 UTC m=+122.909750871 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 15:21:33 crc kubenswrapper[5123]: I1212 15:21:33.701555 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-ts2mt\" (UID: \"40d826be-27f2-49a2-afa5-a6a9cc8f9bf6\") " pod="openshift-image-registry/image-registry-66587d64c8-ts2mt" Dec 12 15:21:33 crc kubenswrapper[5123]: E1212 15:21:33.702168 5123 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-12 15:21:34.202150067 +0000 UTC m=+123.012102578 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-ts2mt" (UID: "40d826be-27f2-49a2-afa5-a6a9cc8f9bf6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 15:21:34 crc kubenswrapper[5123]: I1212 15:21:34.009654 5123 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 12 15:21:34 crc kubenswrapper[5123]: E1212 15:21:34.009847 5123 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-12 15:21:34.509809236 +0000 UTC m=+123.319761747 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 15:21:34 crc kubenswrapper[5123]: I1212 15:21:34.010430 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-ts2mt\" (UID: \"40d826be-27f2-49a2-afa5-a6a9cc8f9bf6\") " pod="openshift-image-registry/image-registry-66587d64c8-ts2mt" Dec 12 15:21:34 crc kubenswrapper[5123]: E1212 15:21:34.010981 5123 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-12 15:21:34.510972263 +0000 UTC m=+123.320924774 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-ts2mt" (UID: "40d826be-27f2-49a2-afa5-a6a9cc8f9bf6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 15:21:34 crc kubenswrapper[5123]: I1212 15:21:34.117465 5123 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 12 15:21:34 crc kubenswrapper[5123]: E1212 15:21:34.117712 5123 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-12 15:21:34.617667035 +0000 UTC m=+123.427619556 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 15:21:34 crc kubenswrapper[5123]: I1212 15:21:34.118107 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-ts2mt\" (UID: \"40d826be-27f2-49a2-afa5-a6a9cc8f9bf6\") " pod="openshift-image-registry/image-registry-66587d64c8-ts2mt" Dec 12 15:21:34 crc kubenswrapper[5123]: E1212 15:21:34.118723 5123 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-12 15:21:34.618711907 +0000 UTC m=+123.428664418 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-ts2mt" (UID: "40d826be-27f2-49a2-afa5-a6a9cc8f9bf6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 15:21:34 crc kubenswrapper[5123]: I1212 15:21:34.219093 5123 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 12 15:21:34 crc kubenswrapper[5123]: E1212 15:21:34.219515 5123 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-12 15:21:34.719486115 +0000 UTC m=+123.529438626 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 15:21:34 crc kubenswrapper[5123]: I1212 15:21:34.337309 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-ts2mt\" (UID: \"40d826be-27f2-49a2-afa5-a6a9cc8f9bf6\") " pod="openshift-image-registry/image-registry-66587d64c8-ts2mt" Dec 12 15:21:34 crc kubenswrapper[5123]: E1212 15:21:34.337885 5123 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-12 15:21:34.837840502 +0000 UTC m=+123.647793013 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-ts2mt" (UID: "40d826be-27f2-49a2-afa5-a6a9cc8f9bf6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 15:21:34 crc kubenswrapper[5123]: I1212 15:21:34.438940 5123 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 12 15:21:34 crc kubenswrapper[5123]: E1212 15:21:34.439262 5123 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-12 15:21:34.939205838 +0000 UTC m=+123.749158349 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 15:21:34 crc kubenswrapper[5123]: I1212 15:21:34.439331 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-ts2mt\" (UID: \"40d826be-27f2-49a2-afa5-a6a9cc8f9bf6\") " pod="openshift-image-registry/image-registry-66587d64c8-ts2mt" Dec 12 15:21:34 crc kubenswrapper[5123]: E1212 15:21:34.439796 5123 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-12 15:21:34.939787977 +0000 UTC m=+123.749740488 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-ts2mt" (UID: "40d826be-27f2-49a2-afa5-a6a9cc8f9bf6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 15:21:34 crc kubenswrapper[5123]: I1212 15:21:34.540699 5123 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 12 15:21:34 crc kubenswrapper[5123]: E1212 15:21:34.541492 5123 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-12 15:21:35.041459012 +0000 UTC m=+123.851411523 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 15:21:34 crc kubenswrapper[5123]: I1212 15:21:34.789254 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-ts2mt\" (UID: \"40d826be-27f2-49a2-afa5-a6a9cc8f9bf6\") " pod="openshift-image-registry/image-registry-66587d64c8-ts2mt" Dec 12 15:21:34 crc kubenswrapper[5123]: E1212 15:21:34.789750 5123 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-12 15:21:35.289732887 +0000 UTC m=+124.099685408 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-ts2mt" (UID: "40d826be-27f2-49a2-afa5-a6a9cc8f9bf6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 15:21:34 crc kubenswrapper[5123]: I1212 15:21:34.793541 5123 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/cni-sysctl-allowlist-ds-mdpg8" event={"ID":"6eb483de-06e5-4975-b29a-7fd9bc7674a9","Type":"ContainerStarted","Data":"6e0fc4e8fe9f057c2796b5c395dbb8c0c5d5ccda1bf3c99820bf70afaf916bc3"} Dec 12 15:21:34 crc kubenswrapper[5123]: I1212 15:21:34.802602 5123 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29425875-lxsft" event={"ID":"c1681a2f-153f-44c0-901e-e85b401d30ee","Type":"ContainerStarted","Data":"b5bc3f05fcb1923b781ed66d910c11a30a8f551e09130d4b1a72be67fcfd51e5"} Dec 12 15:21:34 crc kubenswrapper[5123]: I1212 15:21:34.804071 5123 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-747b44746d-xhd9t" event={"ID":"09107a60-87da-4e17-9cc0-6dce06396ab6","Type":"ContainerStarted","Data":"2f9b758f9aabf40ad453a23af2d534b019acc52d9886d6c31ee6a6966c979544"} Dec 12 15:21:34 crc kubenswrapper[5123]: I1212 15:21:34.805006 5123 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-pxgwd" event={"ID":"409e180b-f9f6-41a7-bd20-51095ac1261a","Type":"ContainerStarted","Data":"4059aede2bdac740a93325959bff3a7c4505555a75da80bf8686beb619896915"} Dec 12 15:21:34 crc kubenswrapper[5123]: I1212 15:21:34.890406 5123 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 12 15:21:34 crc kubenswrapper[5123]: E1212 15:21:34.890826 5123 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-12 15:21:35.390689111 +0000 UTC m=+124.200641622 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 15:21:34 crc kubenswrapper[5123]: I1212 15:21:34.891167 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-ts2mt\" (UID: \"40d826be-27f2-49a2-afa5-a6a9cc8f9bf6\") " pod="openshift-image-registry/image-registry-66587d64c8-ts2mt" Dec 12 15:21:34 crc kubenswrapper[5123]: E1212 15:21:34.891722 5123 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-12 15:21:35.391694711 +0000 UTC m=+124.201647242 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-ts2mt" (UID: "40d826be-27f2-49a2-afa5-a6a9cc8f9bf6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 15:21:34 crc kubenswrapper[5123]: I1212 15:21:34.992566 5123 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 12 15:21:34 crc kubenswrapper[5123]: E1212 15:21:34.992928 5123 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-12 15:21:35.492906333 +0000 UTC m=+124.302858844 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 15:21:35 crc kubenswrapper[5123]: I1212 15:21:35.246304 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-ts2mt\" (UID: \"40d826be-27f2-49a2-afa5-a6a9cc8f9bf6\") " pod="openshift-image-registry/image-registry-66587d64c8-ts2mt" Dec 12 15:21:35 crc kubenswrapper[5123]: E1212 15:21:35.246754 5123 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-12 15:21:35.746738491 +0000 UTC m=+124.556691012 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-ts2mt" (UID: "40d826be-27f2-49a2-afa5-a6a9cc8f9bf6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 15:21:35 crc kubenswrapper[5123]: I1212 15:21:35.619967 5123 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 12 15:21:35 crc kubenswrapper[5123]: E1212 15:21:35.620738 5123 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-12 15:21:36.120713962 +0000 UTC m=+124.930666463 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 15:21:35 crc kubenswrapper[5123]: I1212 15:21:35.727179 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-ts2mt\" (UID: \"40d826be-27f2-49a2-afa5-a6a9cc8f9bf6\") " pod="openshift-image-registry/image-registry-66587d64c8-ts2mt" Dec 12 15:21:35 crc kubenswrapper[5123]: E1212 15:21:35.728000 5123 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-12 15:21:36.227978843 +0000 UTC m=+125.037931354 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-ts2mt" (UID: "40d826be-27f2-49a2-afa5-a6a9cc8f9bf6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 15:21:35 crc kubenswrapper[5123]: I1212 15:21:35.845888 5123 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 12 15:21:35 crc kubenswrapper[5123]: E1212 15:21:35.846239 5123 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-12 15:21:36.346195135 +0000 UTC m=+125.156147646 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 15:21:35 crc kubenswrapper[5123]: I1212 15:21:35.855631 5123 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-dvgzb" event={"ID":"1c109e0c-2708-45cf-8c8e-0489b41c9830","Type":"ContainerStarted","Data":"2851f93abc6afffe77cd5ac84d0258d1699bc770a5886ebaf1d5fb04f489719c"} Dec 12 15:21:35 crc kubenswrapper[5123]: I1212 15:21:35.960925 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-ts2mt\" (UID: \"40d826be-27f2-49a2-afa5-a6a9cc8f9bf6\") " pod="openshift-image-registry/image-registry-66587d64c8-ts2mt" Dec 12 15:21:35 crc kubenswrapper[5123]: E1212 15:21:35.961562 5123 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-12 15:21:36.461540448 +0000 UTC m=+125.271492969 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-ts2mt" (UID: "40d826be-27f2-49a2-afa5-a6a9cc8f9bf6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 15:21:36 crc kubenswrapper[5123]: I1212 15:21:36.349499 5123 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 12 15:21:36 crc kubenswrapper[5123]: E1212 15:21:36.349957 5123 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-12 15:21:36.849926499 +0000 UTC m=+125.659879010 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 15:21:36 crc kubenswrapper[5123]: I1212 15:21:36.624948 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-ts2mt\" (UID: \"40d826be-27f2-49a2-afa5-a6a9cc8f9bf6\") " pod="openshift-image-registry/image-registry-66587d64c8-ts2mt" Dec 12 15:21:36 crc kubenswrapper[5123]: E1212 15:21:36.625517 5123 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-12 15:21:37.125493386 +0000 UTC m=+125.935445897 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-ts2mt" (UID: "40d826be-27f2-49a2-afa5-a6a9cc8f9bf6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 15:21:36 crc kubenswrapper[5123]: I1212 15:21:36.728143 5123 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 12 15:21:36 crc kubenswrapper[5123]: E1212 15:21:36.728571 5123 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-12 15:21:37.228541955 +0000 UTC m=+126.038494466 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 15:21:36 crc kubenswrapper[5123]: I1212 15:21:36.830096 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-ts2mt\" (UID: \"40d826be-27f2-49a2-afa5-a6a9cc8f9bf6\") " pod="openshift-image-registry/image-registry-66587d64c8-ts2mt" Dec 12 15:21:36 crc kubenswrapper[5123]: E1212 15:21:36.830937 5123 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-12 15:21:37.330905213 +0000 UTC m=+126.140857724 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-ts2mt" (UID: "40d826be-27f2-49a2-afa5-a6a9cc8f9bf6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 15:21:37 crc kubenswrapper[5123]: I1212 15:21:37.189955 5123 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 12 15:21:37 crc kubenswrapper[5123]: E1212 15:21:37.190912 5123 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-12 15:21:37.690883346 +0000 UTC m=+126.500835857 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 15:21:37 crc kubenswrapper[5123]: I1212 15:21:37.335313 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-ts2mt\" (UID: \"40d826be-27f2-49a2-afa5-a6a9cc8f9bf6\") " pod="openshift-image-registry/image-registry-66587d64c8-ts2mt" Dec 12 15:21:37 crc kubenswrapper[5123]: E1212 15:21:37.335998 5123 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-12 15:21:37.835944727 +0000 UTC m=+126.645897318 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-ts2mt" (UID: "40d826be-27f2-49a2-afa5-a6a9cc8f9bf6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 15:21:37 crc kubenswrapper[5123]: I1212 15:21:37.447611 5123 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 12 15:21:37 crc kubenswrapper[5123]: E1212 15:21:37.448281 5123 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-12 15:21:37.948237604 +0000 UTC m=+126.758190155 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 15:21:37 crc kubenswrapper[5123]: I1212 15:21:37.961011 5123 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 12 15:21:37 crc kubenswrapper[5123]: E1212 15:21:37.961524 5123 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-12 15:21:38.961487476 +0000 UTC m=+127.771439987 (durationBeforeRetry 1s). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 15:21:37 crc kubenswrapper[5123]: I1212 15:21:37.961819 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-ts2mt\" (UID: \"40d826be-27f2-49a2-afa5-a6a9cc8f9bf6\") " pod="openshift-image-registry/image-registry-66587d64c8-ts2mt" Dec 12 15:21:37 crc kubenswrapper[5123]: E1212 15:21:37.962464 5123 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-12 15:21:38.462450075 +0000 UTC m=+127.272402586 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-ts2mt" (UID: "40d826be-27f2-49a2-afa5-a6a9cc8f9bf6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 15:21:38 crc kubenswrapper[5123]: I1212 15:21:38.078030 5123 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 12 15:21:38 crc kubenswrapper[5123]: E1212 15:21:38.078680 5123 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-12 15:21:38.578652516 +0000 UTC m=+127.388605027 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 15:21:39 crc kubenswrapper[5123]: I1212 15:21:39.279162 5123 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 12 15:21:39 crc kubenswrapper[5123]: E1212 15:21:39.279836 5123 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-12 15:21:40.279812223 +0000 UTC m=+129.089764734 (durationBeforeRetry 1s). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 15:21:39 crc kubenswrapper[5123]: I1212 15:21:39.389067 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-ts2mt\" (UID: \"40d826be-27f2-49a2-afa5-a6a9cc8f9bf6\") " pod="openshift-image-registry/image-registry-66587d64c8-ts2mt" Dec 12 15:21:39 crc kubenswrapper[5123]: E1212 15:21:39.391485 5123 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-12 15:21:39.891459361 +0000 UTC m=+128.701411872 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-ts2mt" (UID: "40d826be-27f2-49a2-afa5-a6a9cc8f9bf6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 15:21:39 crc kubenswrapper[5123]: I1212 15:21:39.465347 5123 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress/router-default-68cf44c8b8-cnq9c" podStartSLOduration=95.465314258 podStartE2EDuration="1m35.465314258s" podCreationTimestamp="2025-12-12 15:20:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 15:21:39.46380805 +0000 UTC m=+128.273760591" watchObservedRunningTime="2025-12-12 15:21:39.465314258 +0000 UTC m=+128.275266769" Dec 12 15:21:39 crc kubenswrapper[5123]: I1212 15:21:39.490305 5123 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 12 15:21:39 crc kubenswrapper[5123]: E1212 15:21:39.491981 5123 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-12 15:21:39.991933159 +0000 UTC m=+128.801885680 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 15:21:39 crc kubenswrapper[5123]: E1212 15:21:39.501769 5123 kubelet.go:2642] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.543s" Dec 12 15:21:39 crc kubenswrapper[5123]: I1212 15:21:39.509001 5123 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-68cf44c8b8-cnq9c" event={"ID":"dd669a9c-af5d-4084-bda4-81a455d4c281","Type":"ContainerStarted","Data":"0d68c3ace3ead1beceea20a5cd5f5af63bf117c54966982bfcff063927d0fc64"} Dec 12 15:21:39 crc kubenswrapper[5123]: I1212 15:21:39.509062 5123 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-755bb95488-68259" event={"ID":"9368bb85-0c25-4d7d-884c-7ebea4cf3336","Type":"ContainerStarted","Data":"bd9abe33e7f49eff9a547c1244187e0955f1e1e8e768b4e8d8dd4ef580fce31a"} Dec 12 15:21:39 crc kubenswrapper[5123]: I1212 15:21:39.564003 5123 scope.go:117] "RemoveContainer" containerID="4f213fed9087642e2d266cffcd6b09d79db89357a2e593aab2f1f5f5de1625db" Dec 12 15:21:39 crc kubenswrapper[5123]: I1212 15:21:39.592211 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-ts2mt\" (UID: \"40d826be-27f2-49a2-afa5-a6a9cc8f9bf6\") " pod="openshift-image-registry/image-registry-66587d64c8-ts2mt" Dec 12 15:21:39 crc kubenswrapper[5123]: E1212 15:21:39.592904 5123 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-12 15:21:40.092880802 +0000 UTC m=+128.902833313 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-ts2mt" (UID: "40d826be-27f2-49a2-afa5-a6a9cc8f9bf6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 15:21:39 crc kubenswrapper[5123]: I1212 15:21:39.682472 5123 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-68cf44c8b8-cnq9c" Dec 12 15:21:39 crc kubenswrapper[5123]: I1212 15:21:39.693606 5123 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 12 15:21:39 crc kubenswrapper[5123]: E1212 15:21:39.694038 5123 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-12 15:21:40.194009301 +0000 UTC m=+129.003961812 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 15:21:39 crc kubenswrapper[5123]: I1212 15:21:39.708372 5123 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-cnq9c container/router namespace/openshift-ingress: Startup probe status=failure output="Get \"http://localhost:1936/healthz/ready\": dial tcp [::1]:1936: connect: connection refused" start-of-body= Dec 12 15:21:39 crc kubenswrapper[5123]: I1212 15:21:39.708468 5123 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-cnq9c" podUID="dd669a9c-af5d-4084-bda4-81a455d4c281" containerName="router" probeResult="failure" output="Get \"http://localhost:1936/healthz/ready\": dial tcp [::1]:1936: connect: connection refused" Dec 12 15:21:39 crc kubenswrapper[5123]: I1212 15:21:39.797504 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-ts2mt\" (UID: \"40d826be-27f2-49a2-afa5-a6a9cc8f9bf6\") " pod="openshift-image-registry/image-registry-66587d64c8-ts2mt" Dec 12 15:21:39 crc kubenswrapper[5123]: E1212 15:21:39.799148 5123 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-12 15:21:40.299121393 +0000 UTC m=+129.109073904 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-ts2mt" (UID: "40d826be-27f2-49a2-afa5-a6a9cc8f9bf6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 15:21:39 crc kubenswrapper[5123]: I1212 15:21:39.901112 5123 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 12 15:21:39 crc kubenswrapper[5123]: E1212 15:21:39.901649 5123 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-12 15:21:40.401598685 +0000 UTC m=+129.211551196 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 15:21:40 crc kubenswrapper[5123]: I1212 15:21:40.011336 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-ts2mt\" (UID: \"40d826be-27f2-49a2-afa5-a6a9cc8f9bf6\") " pod="openshift-image-registry/image-registry-66587d64c8-ts2mt" Dec 12 15:21:40 crc kubenswrapper[5123]: E1212 15:21:40.011809 5123 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-12 15:21:40.511792116 +0000 UTC m=+129.321744627 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-ts2mt" (UID: "40d826be-27f2-49a2-afa5-a6a9cc8f9bf6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 15:21:40 crc kubenswrapper[5123]: I1212 15:21:40.116324 5123 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 12 15:21:40 crc kubenswrapper[5123]: E1212 15:21:40.116839 5123 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-12 15:21:40.616812037 +0000 UTC m=+129.426764558 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 15:21:40 crc kubenswrapper[5123]: I1212 15:21:40.295331 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-ts2mt\" (UID: \"40d826be-27f2-49a2-afa5-a6a9cc8f9bf6\") " pod="openshift-image-registry/image-registry-66587d64c8-ts2mt" Dec 12 15:21:40 crc kubenswrapper[5123]: E1212 15:21:40.295846 5123 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-12 15:21:40.795822338 +0000 UTC m=+129.605774849 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-ts2mt" (UID: "40d826be-27f2-49a2-afa5-a6a9cc8f9bf6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 15:21:40 crc kubenswrapper[5123]: I1212 15:21:40.399130 5123 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 12 15:21:40 crc kubenswrapper[5123]: E1212 15:21:40.399573 5123 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-12 15:21:40.899546758 +0000 UTC m=+129.709499269 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 15:21:40 crc kubenswrapper[5123]: I1212 15:21:40.501255 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-ts2mt\" (UID: \"40d826be-27f2-49a2-afa5-a6a9cc8f9bf6\") " pod="openshift-image-registry/image-registry-66587d64c8-ts2mt" Dec 12 15:21:40 crc kubenswrapper[5123]: E1212 15:21:40.505177 5123 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-12 15:21:41.005160986 +0000 UTC m=+129.815113497 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-ts2mt" (UID: "40d826be-27f2-49a2-afa5-a6a9cc8f9bf6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 15:21:40 crc kubenswrapper[5123]: I1212 15:21:40.602817 5123 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 12 15:21:40 crc kubenswrapper[5123]: E1212 15:21:40.603244 5123 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-12 15:21:41.103203809 +0000 UTC m=+129.913156320 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 15:21:40 crc kubenswrapper[5123]: I1212 15:21:40.795575 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-ts2mt\" (UID: \"40d826be-27f2-49a2-afa5-a6a9cc8f9bf6\") " pod="openshift-image-registry/image-registry-66587d64c8-ts2mt" Dec 12 15:21:40 crc kubenswrapper[5123]: E1212 15:21:40.796399 5123 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-12 15:21:41.296369852 +0000 UTC m=+130.106322363 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-ts2mt" (UID: "40d826be-27f2-49a2-afa5-a6a9cc8f9bf6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 15:21:40 crc kubenswrapper[5123]: I1212 15:21:40.823959 5123 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29425875-lxsft" event={"ID":"c1681a2f-153f-44c0-901e-e85b401d30ee","Type":"ContainerStarted","Data":"07b5552f5fab61bda1ddf82f4c9d2105ed681233d697b33ac3c9e4da459feeea"} Dec 12 15:21:40 crc kubenswrapper[5123]: I1212 15:21:40.837826 5123 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-dvgzb" event={"ID":"1c109e0c-2708-45cf-8c8e-0489b41c9830","Type":"ContainerStarted","Data":"00baaf496af6a3dd578a212c282584939d205cd71c30b5c36f188967f0d692bf"} Dec 12 15:21:40 crc kubenswrapper[5123]: I1212 15:21:40.881797 5123 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-747b44746d-xhd9t" event={"ID":"09107a60-87da-4e17-9cc0-6dce06396ab6","Type":"ContainerStarted","Data":"9f50991963d4d04bcf2e4c9451b3fae1c9ded45ced042c68035628b937492228"} Dec 12 15:21:40 crc kubenswrapper[5123]: I1212 15:21:40.895661 5123 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-console/downloads-747b44746d-xhd9t" Dec 12 15:21:40 crc kubenswrapper[5123]: I1212 15:21:40.897662 5123 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod 
\"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 12 15:21:40 crc kubenswrapper[5123]: E1212 15:21:40.898205 5123 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-12 15:21:41.398177172 +0000 UTC m=+130.208129683 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 15:21:40 crc kubenswrapper[5123]: I1212 15:21:40.914926 5123 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-54c688565-4b7jt" event={"ID":"36a3f9f0-c1ab-4411-a9e8-7795ab55a6e9","Type":"ContainerStarted","Data":"047da46522032fd693404b0ee4320117c9fad1bac0ac16d30e7e2be34fcc45fe"} Dec 12 15:21:40 crc kubenswrapper[5123]: I1212 15:21:40.917742 5123 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-pxgwd" event={"ID":"409e180b-f9f6-41a7-bd20-51095ac1261a","Type":"ContainerStarted","Data":"d057a84e159bf254a68ed539c5257b90fc87b3d7fd97c2487799509294bec303"} Dec 12 15:21:40 crc kubenswrapper[5123]: I1212 15:21:40.950006 5123 patch_prober.go:28] interesting pod/downloads-747b44746d-xhd9t container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.21:8080/\": dial tcp 10.217.0.21:8080: connect: connection refused" start-of-body= Dec 12 15:21:40 crc kubenswrapper[5123]: I1212 
15:21:40.950115 5123 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-747b44746d-xhd9t" podUID="09107a60-87da-4e17-9cc0-6dce06396ab6" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.21:8080/\": dial tcp 10.217.0.21:8080: connect: connection refused" Dec 12 15:21:41 crc kubenswrapper[5123]: I1212 15:21:41.001866 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-ts2mt\" (UID: \"40d826be-27f2-49a2-afa5-a6a9cc8f9bf6\") " pod="openshift-image-registry/image-registry-66587d64c8-ts2mt" Dec 12 15:21:41 crc kubenswrapper[5123]: E1212 15:21:41.002356 5123 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-12 15:21:41.502336956 +0000 UTC m=+130.312289467 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-ts2mt" (UID: "40d826be-27f2-49a2-afa5-a6a9cc8f9bf6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 15:21:41 crc kubenswrapper[5123]: I1212 15:21:41.012560 5123 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29425875-lxsft" podStartSLOduration=98.012528474 podStartE2EDuration="1m38.012528474s" podCreationTimestamp="2025-12-12 15:20:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 15:21:41.009073766 +0000 UTC m=+129.819026277" watchObservedRunningTime="2025-12-12 15:21:41.012528474 +0000 UTC m=+129.822480985" Dec 12 15:21:41 crc kubenswrapper[5123]: I1212 15:21:41.103634 5123 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 12 15:21:41 crc kubenswrapper[5123]: E1212 15:21:41.110444 5123 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-12 15:21:41.610412151 +0000 UTC m=+130.420364662 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 15:21:41 crc kubenswrapper[5123]: I1212 15:21:41.206000 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-ts2mt\" (UID: \"40d826be-27f2-49a2-afa5-a6a9cc8f9bf6\") " pod="openshift-image-registry/image-registry-66587d64c8-ts2mt" Dec 12 15:21:41 crc kubenswrapper[5123]: E1212 15:21:41.206751 5123 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-12 15:21:41.7067271 +0000 UTC m=+130.516679611 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-ts2mt" (UID: "40d826be-27f2-49a2-afa5-a6a9cc8f9bf6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 15:21:41 crc kubenswrapper[5123]: I1212 15:21:41.245620 5123 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/downloads-747b44746d-xhd9t" podStartSLOduration=97.245577993 podStartE2EDuration="1m37.245577993s" podCreationTimestamp="2025-12-12 15:20:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 15:21:41.216081642 +0000 UTC m=+130.026034163" watchObservedRunningTime="2025-12-12 15:21:41.245577993 +0000 UTC m=+130.055530504" Dec 12 15:21:41 crc kubenswrapper[5123]: I1212 15:21:41.942597 5123 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-cnq9c container/router namespace/openshift-ingress: Startup probe status=failure output="Get \"http://localhost:1936/healthz/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Dec 12 15:21:41 crc kubenswrapper[5123]: I1212 15:21:41.943052 5123 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-cnq9c" podUID="dd669a9c-af5d-4084-bda4-81a455d4c281" containerName="router" probeResult="failure" output="Get \"http://localhost:1936/healthz/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 12 15:21:41 crc kubenswrapper[5123]: I1212 15:21:41.943284 5123 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-ingress/router-default-68cf44c8b8-cnq9c" Dec 12 15:21:41 crc 
kubenswrapper[5123]: I1212 15:21:41.943598 5123 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 12 15:21:41 crc kubenswrapper[5123]: E1212 15:21:41.943710 5123 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-12 15:21:42.943686118 +0000 UTC m=+131.753638639 (durationBeforeRetry 1s). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 15:21:41 crc kubenswrapper[5123]: I1212 15:21:41.962294 5123 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-server-pxgwd" podStartSLOduration=14.962270479 podStartE2EDuration="14.962270479s" podCreationTimestamp="2025-12-12 15:21:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 15:21:41.248569427 +0000 UTC m=+130.058521938" watchObservedRunningTime="2025-12-12 15:21:41.962270479 +0000 UTC m=+130.772222990" Dec 12 15:21:41 crc kubenswrapper[5123]: I1212 15:21:41.965837 5123 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-multus/cni-sysctl-allowlist-ds-mdpg8" podStartSLOduration=15.96582675 podStartE2EDuration="15.96582675s" podCreationTimestamp="2025-12-12 15:21:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 15:21:41.963628271 +0000 UTC m=+130.773580802" watchObservedRunningTime="2025-12-12 15:21:41.96582675 +0000 UTC m=+130.775779261" Dec 12 15:21:42 crc kubenswrapper[5123]: I1212 15:21:42.012950 5123 patch_prober.go:28] interesting pod/downloads-747b44746d-xhd9t container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.21:8080/\": dial tcp 10.217.0.21:8080: connect: connection refused" start-of-body= Dec 12 15:21:42 crc kubenswrapper[5123]: I1212 15:21:42.013100 5123 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-747b44746d-xhd9t" podUID="09107a60-87da-4e17-9cc0-6dce06396ab6" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.21:8080/\": dial tcp 10.217.0.21:8080: connect: connection refused" Dec 12 15:21:42 crc kubenswrapper[5123]: I1212 15:21:42.061197 5123 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 12 15:21:42 crc kubenswrapper[5123]: E1212 15:21:42.062823 5123 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-12 15:21:42.562799259 +0000 UTC m=+131.372751770 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 15:21:42 crc kubenswrapper[5123]: I1212 15:21:42.072269 5123 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/machine-api-operator-755bb95488-68259" podStartSLOduration=98.072210383 podStartE2EDuration="1m38.072210383s" podCreationTimestamp="2025-12-12 15:20:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 15:21:42.045378605 +0000 UTC m=+130.855331136" watchObservedRunningTime="2025-12-12 15:21:42.072210383 +0000 UTC m=+130.882162894" Dec 12 15:21:42 crc kubenswrapper[5123]: I1212 15:21:42.164705 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-ts2mt\" (UID: \"40d826be-27f2-49a2-afa5-a6a9cc8f9bf6\") " pod="openshift-image-registry/image-registry-66587d64c8-ts2mt" Dec 12 15:21:42 crc kubenswrapper[5123]: E1212 15:21:42.165716 5123 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-12 15:21:42.665694683 +0000 UTC m=+131.475647194 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-ts2mt" (UID: "40d826be-27f2-49a2-afa5-a6a9cc8f9bf6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 15:21:42 crc kubenswrapper[5123]: I1212 15:21:42.297701 5123 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 12 15:21:42 crc kubenswrapper[5123]: E1212 15:21:42.299160 5123 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-12 15:21:42.7991188 +0000 UTC m=+131.609071311 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 15:21:42 crc kubenswrapper[5123]: I1212 15:21:42.322606 5123 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-755bb95488-68259" event={"ID":"9368bb85-0c25-4d7d-884c-7ebea4cf3336","Type":"ContainerStarted","Data":"636898b9480f4ff4e2316a49e28865d2ee57b2b5208ef98804be3ec6803349b3"} Dec 12 15:21:42 crc kubenswrapper[5123]: I1212 15:21:42.322907 5123 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/cni-sysctl-allowlist-ds-mdpg8" event={"ID":"6eb483de-06e5-4975-b29a-7fd9bc7674a9","Type":"ContainerStarted","Data":"98b4e7c2a0fa09f214b8cbc48f52ccee3a3c7806bdd076dfd394403ea4fe6e1f"} Dec 12 15:21:42 crc kubenswrapper[5123]: I1212 15:21:42.323797 5123 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-multus/cni-sysctl-allowlist-ds-mdpg8" Dec 12 15:21:42 crc kubenswrapper[5123]: I1212 15:21:42.401692 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-ts2mt\" (UID: \"40d826be-27f2-49a2-afa5-a6a9cc8f9bf6\") " pod="openshift-image-registry/image-registry-66587d64c8-ts2mt" Dec 12 15:21:42 crc kubenswrapper[5123]: E1212 15:21:42.404770 5123 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" 
failed. No retries permitted until 2025-12-12 15:21:42.904715189 +0000 UTC m=+131.714667700 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-ts2mt" (UID: "40d826be-27f2-49a2-afa5-a6a9cc8f9bf6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 15:21:42 crc kubenswrapper[5123]: I1212 15:21:42.559853 5123 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 12 15:21:42 crc kubenswrapper[5123]: E1212 15:21:42.560283 5123 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-12 15:21:43.060256247 +0000 UTC m=+131.870208758 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 15:21:42 crc kubenswrapper[5123]: I1212 15:21:42.662585 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-ts2mt\" (UID: \"40d826be-27f2-49a2-afa5-a6a9cc8f9bf6\") " pod="openshift-image-registry/image-registry-66587d64c8-ts2mt" Dec 12 15:21:42 crc kubenswrapper[5123]: E1212 15:21:42.672789 5123 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-12 15:21:43.172755531 +0000 UTC m=+131.982708042 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-ts2mt" (UID: "40d826be-27f2-49a2-afa5-a6a9cc8f9bf6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 15:21:42 crc kubenswrapper[5123]: I1212 15:21:42.776079 5123 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-multus/cni-sysctl-allowlist-ds-mdpg8" Dec 12 15:21:42 crc kubenswrapper[5123]: I1212 15:21:42.777279 5123 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 12 15:21:42 crc kubenswrapper[5123]: E1212 15:21:42.777699 5123 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-12 15:21:43.277661717 +0000 UTC m=+132.087614228 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 15:21:42 crc kubenswrapper[5123]: I1212 15:21:42.879850 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-ts2mt\" (UID: \"40d826be-27f2-49a2-afa5-a6a9cc8f9bf6\") " pod="openshift-image-registry/image-registry-66587d64c8-ts2mt" Dec 12 15:21:42 crc kubenswrapper[5123]: E1212 15:21:42.880374 5123 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-12 15:21:43.380352935 +0000 UTC m=+132.190305446 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-ts2mt" (UID: "40d826be-27f2-49a2-afa5-a6a9cc8f9bf6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 15:21:42 crc kubenswrapper[5123]: I1212 15:21:42.943336 5123 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-cnq9c container/router namespace/openshift-ingress: Startup probe status=failure output="Get \"http://localhost:1936/healthz/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Dec 12 15:21:42 crc kubenswrapper[5123]: I1212 15:21:42.943437 5123 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-cnq9c" podUID="dd669a9c-af5d-4084-bda4-81a455d4c281" containerName="router" probeResult="failure" output="Get \"http://localhost:1936/healthz/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 12 15:21:43 crc kubenswrapper[5123]: I1212 15:21:43.011836 5123 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 12 15:21:43 crc kubenswrapper[5123]: E1212 15:21:43.012473 5123 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-12 15:21:43.512445841 +0000 UTC m=+132.322398342 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 15:21:43 crc kubenswrapper[5123]: I1212 15:21:43.016699 5123 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-cnq9c container/router namespace/openshift-ingress: Startup probe status=failure output="Get \"http://localhost:1936/healthz/ready\": dial tcp [::1]:1936: connect: connection refused" start-of-body= Dec 12 15:21:43 crc kubenswrapper[5123]: I1212 15:21:43.016769 5123 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-cnq9c" podUID="dd669a9c-af5d-4084-bda4-81a455d4c281" containerName="router" probeResult="failure" output="Get \"http://localhost:1936/healthz/ready\": dial tcp [::1]:1936: connect: connection refused" Dec 12 15:21:43 crc kubenswrapper[5123]: I1212 15:21:43.113914 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-ts2mt\" (UID: \"40d826be-27f2-49a2-afa5-a6a9cc8f9bf6\") " pod="openshift-image-registry/image-registry-66587d64c8-ts2mt" Dec 12 15:21:43 crc kubenswrapper[5123]: E1212 15:21:43.114441 5123 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-12 15:21:43.614422056 +0000 UTC m=+132.424374567 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-ts2mt" (UID: "40d826be-27f2-49a2-afa5-a6a9cc8f9bf6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 15:21:43 crc kubenswrapper[5123]: I1212 15:21:43.215705 5123 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 12 15:21:43 crc kubenswrapper[5123]: E1212 15:21:43.216870 5123 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-12 15:21:43.716834275 +0000 UTC m=+132.526786786 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 15:21:43 crc kubenswrapper[5123]: I1212 15:21:43.318341 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-ts2mt\" (UID: \"40d826be-27f2-49a2-afa5-a6a9cc8f9bf6\") " pod="openshift-image-registry/image-registry-66587d64c8-ts2mt" Dec 12 15:21:43 crc kubenswrapper[5123]: E1212 15:21:43.318883 5123 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-12 15:21:43.818862911 +0000 UTC m=+132.628815422 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-ts2mt" (UID: "40d826be-27f2-49a2-afa5-a6a9cc8f9bf6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 15:21:43 crc kubenswrapper[5123]: I1212 15:21:43.478929 5123 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 12 15:21:43 crc kubenswrapper[5123]: E1212 15:21:43.479934 5123 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-12 15:21:43.979905652 +0000 UTC m=+132.789858173 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 15:21:43 crc kubenswrapper[5123]: I1212 15:21:43.480052 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-ts2mt\" (UID: \"40d826be-27f2-49a2-afa5-a6a9cc8f9bf6\") " pod="openshift-image-registry/image-registry-66587d64c8-ts2mt" Dec 12 15:21:43 crc kubenswrapper[5123]: E1212 15:21:43.480437 5123 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-12 15:21:43.980424698 +0000 UTC m=+132.790377209 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-ts2mt" (UID: "40d826be-27f2-49a2-afa5-a6a9cc8f9bf6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 15:21:43 crc kubenswrapper[5123]: I1212 15:21:43.591778 5123 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 12 15:21:43 crc kubenswrapper[5123]: E1212 15:21:43.592392 5123 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-12 15:21:44.092363854 +0000 UTC m=+132.902316365 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 15:21:43 crc kubenswrapper[5123]: I1212 15:21:43.730435 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-ts2mt\" (UID: \"40d826be-27f2-49a2-afa5-a6a9cc8f9bf6\") " pod="openshift-image-registry/image-registry-66587d64c8-ts2mt" Dec 12 15:21:43 crc kubenswrapper[5123]: E1212 15:21:43.731317 5123 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-12 15:21:44.231298344 +0000 UTC m=+133.041250855 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-ts2mt" (UID: "40d826be-27f2-49a2-afa5-a6a9cc8f9bf6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 15:21:43 crc kubenswrapper[5123]: I1212 15:21:43.831962 5123 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 12 15:21:43 crc kubenswrapper[5123]: E1212 15:21:43.832523 5123 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-12 15:21:44.332494365 +0000 UTC m=+133.142446876 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 15:21:43 crc kubenswrapper[5123]: I1212 15:21:43.877443 5123 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-multus/cni-sysctl-allowlist-ds-mdpg8"] Dec 12 15:21:43 crc kubenswrapper[5123]: I1212 15:21:43.933858 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-ts2mt\" (UID: \"40d826be-27f2-49a2-afa5-a6a9cc8f9bf6\") " pod="openshift-image-registry/image-registry-66587d64c8-ts2mt" Dec 12 15:21:43 crc kubenswrapper[5123]: E1212 15:21:43.934500 5123 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-12 15:21:44.43447931 +0000 UTC m=+133.244431821 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-ts2mt" (UID: "40d826be-27f2-49a2-afa5-a6a9cc8f9bf6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 15:21:44 crc kubenswrapper[5123]: I1212 15:21:44.029925 5123 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-66458b6674-cqp44"] Dec 12 15:21:44 crc kubenswrapper[5123]: I1212 15:21:44.035913 5123 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 12 15:21:44 crc kubenswrapper[5123]: I1212 15:21:44.036187 5123 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-kr28r"] Dec 12 15:21:44 crc kubenswrapper[5123]: E1212 15:21:44.036366 5123 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-12 15:21:44.536325431 +0000 UTC m=+133.346277942 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 15:21:44 crc kubenswrapper[5123]: I1212 15:21:44.064877 5123 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-dvgzb" event={"ID":"1c109e0c-2708-45cf-8c8e-0489b41c9830","Type":"ContainerStarted","Data":"c0ce3abc247db344cd7c8e5c461e28815e8b4a9487e05b67393129f319a1c0bc"} Dec 12 15:21:44 crc kubenswrapper[5123]: I1212 15:21:44.079460 5123 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-65b6cccf98-t4m4d"] Dec 12 15:21:44 crc kubenswrapper[5123]: I1212 15:21:44.086265 5123 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-54c688565-4b7jt" event={"ID":"36a3f9f0-c1ab-4411-a9e8-7795ab55a6e9","Type":"ContainerStarted","Data":"1a53d4487133a6d9ee75bf5efea3fbd68df2a668913d73875398cde4cb9982c0"} Dec 12 15:21:44 crc kubenswrapper[5123]: I1212 15:21:44.089278 5123 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-66458b6674-cqp44" event={"ID":"c4465de2-5e85-451d-a998-dcff71c6d37c","Type":"ContainerStarted","Data":"41191cb8b32cf2147eba77a5a97493110dfdafc3d28f1fa4b134483b033f8101"} Dec 12 15:21:44 crc kubenswrapper[5123]: I1212 15:21:44.105407 5123 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-dvgzb" podStartSLOduration=100.105375338 podStartE2EDuration="1m40.105375338s" podCreationTimestamp="2025-12-12 
15:20:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 15:21:44.100352221 +0000 UTC m=+132.910304752" watchObservedRunningTime="2025-12-12 15:21:44.105375338 +0000 UTC m=+132.915327849" Dec 12 15:21:44 crc kubenswrapper[5123]: I1212 15:21:44.128824 5123 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-check-endpoints/3.log" Dec 12 15:21:44 crc kubenswrapper[5123]: I1212 15:21:44.140975 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-ts2mt\" (UID: \"40d826be-27f2-49a2-afa5-a6a9cc8f9bf6\") " pod="openshift-image-registry/image-registry-66587d64c8-ts2mt" Dec 12 15:21:44 crc kubenswrapper[5123]: E1212 15:21:44.141473 5123 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-12 15:21:44.641447754 +0000 UTC m=+133.451400265 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-ts2mt" (UID: "40d826be-27f2-49a2-afa5-a6a9cc8f9bf6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 15:21:44 crc kubenswrapper[5123]: I1212 15:21:44.156632 5123 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-cnq9c container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 12 15:21:44 crc kubenswrapper[5123]: [-]has-synced failed: reason withheld Dec 12 15:21:44 crc kubenswrapper[5123]: [+]process-running ok Dec 12 15:21:44 crc kubenswrapper[5123]: healthz check failed Dec 12 15:21:44 crc kubenswrapper[5123]: I1212 15:21:44.156708 5123 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-cnq9c" podUID="dd669a9c-af5d-4084-bda4-81a455d4c281" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 12 15:21:44 crc kubenswrapper[5123]: I1212 15:21:44.171407 5123 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"3a14caf222afb62aaabdc47808b6f944","Type":"ContainerStarted","Data":"5895dc0f3ce18a4637c2277717d6ad97d812bee9fefe694b5572bdcc78ae7e13"} Dec 12 15:21:44 crc kubenswrapper[5123]: I1212 15:21:44.172064 5123 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 12 15:21:44 crc kubenswrapper[5123]: I1212 15:21:44.493243 5123 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 12 15:21:44 crc kubenswrapper[5123]: E1212 15:21:44.493630 5123 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-12 15:21:44.993603414 +0000 UTC m=+133.803555925 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 15:21:44 crc kubenswrapper[5123]: I1212 15:21:44.509439 5123 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-machine-approver/machine-approver-54c688565-4b7jt" podStartSLOduration=101.509413468 podStartE2EDuration="1m41.509413468s" podCreationTimestamp="2025-12-12 15:20:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 15:21:44.506185717 +0000 UTC m=+133.316138258" watchObservedRunningTime="2025-12-12 15:21:44.509413468 +0000 UTC m=+133.319365979" Dec 12 15:21:44 crc kubenswrapper[5123]: I1212 15:21:44.580648 5123 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=45.580617391 podStartE2EDuration="45.580617391s" podCreationTimestamp="2025-12-12 15:20:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" 
lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 15:21:44.565454289 +0000 UTC m=+133.375406810" watchObservedRunningTime="2025-12-12 15:21:44.580617391 +0000 UTC m=+133.390569902" Dec 12 15:21:44 crc kubenswrapper[5123]: I1212 15:21:44.620336 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-ts2mt\" (UID: \"40d826be-27f2-49a2-afa5-a6a9cc8f9bf6\") " pod="openshift-image-registry/image-registry-66587d64c8-ts2mt" Dec 12 15:21:44 crc kubenswrapper[5123]: E1212 15:21:44.623667 5123 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-12 15:21:45.123628905 +0000 UTC m=+133.933581416 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-ts2mt" (UID: "40d826be-27f2-49a2-afa5-a6a9cc8f9bf6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 15:21:44 crc kubenswrapper[5123]: I1212 15:21:44.695504 5123 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-cnq9c container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 12 15:21:44 crc kubenswrapper[5123]: [-]has-synced failed: reason withheld Dec 12 15:21:44 crc kubenswrapper[5123]: [+]process-running ok Dec 12 15:21:44 crc kubenswrapper[5123]: healthz check failed Dec 12 15:21:44 crc kubenswrapper[5123]: I1212 15:21:44.695590 5123 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-cnq9c" podUID="dd669a9c-af5d-4084-bda4-81a455d4c281" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 12 15:21:44 crc kubenswrapper[5123]: I1212 15:21:44.723489 5123 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 12 15:21:44 crc kubenswrapper[5123]: E1212 15:21:44.724153 5123 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. 
No retries permitted until 2025-12-12 15:21:45.224113704 +0000 UTC m=+134.034066215 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 15:21:45 crc kubenswrapper[5123]: I1212 15:21:45.018418 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-ts2mt\" (UID: \"40d826be-27f2-49a2-afa5-a6a9cc8f9bf6\") " pod="openshift-image-registry/image-registry-66587d64c8-ts2mt" Dec 12 15:21:45 crc kubenswrapper[5123]: E1212 15:21:45.028135 5123 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-12 15:21:45.527825201 +0000 UTC m=+134.337777712 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-ts2mt" (UID: "40d826be-27f2-49a2-afa5-a6a9cc8f9bf6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 15:21:45 crc kubenswrapper[5123]: I1212 15:21:45.069289 5123 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-4hvhp"] Dec 12 15:21:45 crc kubenswrapper[5123]: I1212 15:21:45.069692 5123 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-5777786469-9j9pt"] Dec 12 15:21:45 crc kubenswrapper[5123]: I1212 15:21:45.149817 5123 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 12 15:21:45 crc kubenswrapper[5123]: E1212 15:21:45.150293 5123 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-12 15:21:45.650205343 +0000 UTC m=+134.460157854 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 15:21:45 crc kubenswrapper[5123]: I1212 15:21:45.257288 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-ts2mt\" (UID: \"40d826be-27f2-49a2-afa5-a6a9cc8f9bf6\") " pod="openshift-image-registry/image-registry-66587d64c8-ts2mt" Dec 12 15:21:45 crc kubenswrapper[5123]: E1212 15:21:45.257712 5123 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-12 15:21:45.757696501 +0000 UTC m=+134.567649002 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-ts2mt" (UID: "40d826be-27f2-49a2-afa5-a6a9cc8f9bf6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 15:21:45 crc kubenswrapper[5123]: I1212 15:21:45.359571 5123 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 12 15:21:45 crc kubenswrapper[5123]: E1212 15:21:45.359973 5123 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-12 15:21:45.859949404 +0000 UTC m=+134.669901915 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 15:21:45 crc kubenswrapper[5123]: I1212 15:21:45.431056 5123 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-kr28r" event={"ID":"5357a0b5-86ce-437b-b973-0bc2be3f85fd","Type":"ContainerStarted","Data":"73ca48617900b76408aac17b66546136a40cc960b0346a904dfe8b8aea2dd54d"} Dec 12 15:21:45 crc kubenswrapper[5123]: I1212 15:21:45.466086 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-ts2mt\" (UID: \"40d826be-27f2-49a2-afa5-a6a9cc8f9bf6\") " pod="openshift-image-registry/image-registry-66587d64c8-ts2mt" Dec 12 15:21:45 crc kubenswrapper[5123]: E1212 15:21:45.467253 5123 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-12 15:21:45.967209335 +0000 UTC m=+134.777161846 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-ts2mt" (UID: "40d826be-27f2-49a2-afa5-a6a9cc8f9bf6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 15:21:45 crc kubenswrapper[5123]: I1212 15:21:45.471577 5123 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-5777786469-9j9pt" event={"ID":"2c1e4fb9-bde9-46df-8ac0-c0b457ca767f","Type":"ContainerStarted","Data":"2dff9d6196c75f1e458f2946594139685ed3243f50100e16767cf209b58d38d2"} Dec 12 15:21:45 crc kubenswrapper[5123]: I1212 15:21:45.495606 5123 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-65b6cccf98-t4m4d" event={"ID":"bf62556f-373c-41a0-96d4-8f431d629029","Type":"ContainerStarted","Data":"3084e90459e4b4a542b5b0abfe31502eea6f20c3a24630ee53f25e83535a882c"} Dec 12 15:21:45 crc kubenswrapper[5123]: I1212 15:21:45.568371 5123 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 12 15:21:45 crc kubenswrapper[5123]: E1212 15:21:45.569255 5123 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-12 15:21:46.069189469 +0000 UTC m=+134.879141980 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 15:21:45 crc kubenswrapper[5123]: I1212 15:21:45.671157 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-ts2mt\" (UID: \"40d826be-27f2-49a2-afa5-a6a9cc8f9bf6\") " pod="openshift-image-registry/image-registry-66587d64c8-ts2mt" Dec 12 15:21:45 crc kubenswrapper[5123]: E1212 15:21:45.671821 5123 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-12 15:21:46.171795975 +0000 UTC m=+134.981748486 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-ts2mt" (UID: "40d826be-27f2-49a2-afa5-a6a9cc8f9bf6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 15:21:45 crc kubenswrapper[5123]: I1212 15:21:45.687529 5123 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-cnq9c container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 12 15:21:45 crc kubenswrapper[5123]: [-]has-synced failed: reason withheld Dec 12 15:21:45 crc kubenswrapper[5123]: [+]process-running ok Dec 12 15:21:45 crc kubenswrapper[5123]: healthz check failed Dec 12 15:21:45 crc kubenswrapper[5123]: I1212 15:21:45.687662 5123 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-cnq9c" podUID="dd669a9c-af5d-4084-bda4-81a455d4c281" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 12 15:21:45 crc kubenswrapper[5123]: I1212 15:21:45.731835 5123 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-qpz79"] Dec 12 15:21:45 crc kubenswrapper[5123]: I1212 15:21:45.734862 5123 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-799b87ffcd-kvxss"] Dec 12 15:21:45 crc kubenswrapper[5123]: I1212 15:21:45.738160 5123 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-hmprz"] Dec 12 15:21:45 crc kubenswrapper[5123]: I1212 15:21:45.741103 5123 kubelet.go:2544] "SyncLoop UPDATE" source="api" 
pods=["openshift-machine-config-operator/machine-config-controller-f9cdd68f7-t8xgq"] Dec 12 15:21:45 crc kubenswrapper[5123]: I1212 15:21:45.746889 5123 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-67c89758df-vqqzf"] Dec 12 15:21:45 crc kubenswrapper[5123]: I1212 15:21:45.749778 5123 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-69b85846b6-7pgks"] Dec 12 15:21:45 crc kubenswrapper[5123]: I1212 15:21:45.753710 5123 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-5b9c976747-9t6q7"] Dec 12 15:21:45 crc kubenswrapper[5123]: I1212 15:21:45.755897 5123 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-c6l4m"] Dec 12 15:21:45 crc kubenswrapper[5123]: I1212 15:21:45.757770 5123 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-64d44f6ddf-96rdx"] Dec 12 15:21:45 crc kubenswrapper[5123]: I1212 15:21:45.771957 5123 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 12 15:21:45 crc kubenswrapper[5123]: E1212 15:21:45.772091 5123 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-12 15:21:46.272054396 +0000 UTC m=+135.082006907 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 12 15:21:45 crc kubenswrapper[5123]: I1212 15:21:45.772636 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-ts2mt\" (UID: \"40d826be-27f2-49a2-afa5-a6a9cc8f9bf6\") " pod="openshift-image-registry/image-registry-66587d64c8-ts2mt"
Dec 12 15:21:45 crc kubenswrapper[5123]: E1212 15:21:45.776980 5123 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-12 15:21:46.276958569 +0000 UTC m=+135.086911080 (durationBeforeRetry 500ms).
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-ts2mt" (UID: "40d826be-27f2-49a2-afa5-a6a9cc8f9bf6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 12 15:21:45 crc kubenswrapper[5123]: W1212 15:21:45.801397 5123 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7ff811e4_3864_456b_8e00_b9e2d1c49ed8.slice/crio-2361263ab41e2623bcd43f11c343bc834af16a27f5998500037d1f0a4bb92033 WatchSource:0}: Error finding container 2361263ab41e2623bcd43f11c343bc834af16a27f5998500037d1f0a4bb92033: Status 404 returned error can't find the container with id 2361263ab41e2623bcd43f11c343bc834af16a27f5998500037d1f0a4bb92033
Dec 12 15:21:45 crc kubenswrapper[5123]: I1212 15:21:45.835828 5123 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-69db94689b-gg4kh"]
Dec 12 15:21:45 crc kubenswrapper[5123]: I1212 15:21:45.858000 5123 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-lx5l5"]
Dec 12 15:21:45 crc kubenswrapper[5123]: I1212 15:21:45.873748 5123 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Dec 12 15:21:45 crc kubenswrapper[5123]: E1212 15:21:45.874160 5123 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-12 15:21:46.374133084 +0000 UTC m=+135.184085595 (durationBeforeRetry 500ms).
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 12 15:21:45 crc kubenswrapper[5123]: I1212 15:21:45.878427 5123 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-dtw8v"]
Dec 12 15:21:45 crc kubenswrapper[5123]: I1212 15:21:45.885683 5123 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-dtknn"]
Dec 12 15:21:45 crc kubenswrapper[5123]: I1212 15:21:45.888870 5123 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-866fcbc849-t68lp"]
Dec 12 15:21:45 crc kubenswrapper[5123]: I1212 15:21:45.890988 5123 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-75ffdb6fcd-tmds4"]
Dec 12 15:21:45 crc kubenswrapper[5123]: I1212 15:21:45.897167 5123 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-776cdc94d6-dc699"]
Dec 12 15:21:45 crc kubenswrapper[5123]: I1212 15:21:45.901381 5123 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-pj4ts"]
Dec 12 15:21:45 crc kubenswrapper[5123]: I1212 15:21:45.910701 5123 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-hkjk6"]
Dec 12 15:21:45 crc kubenswrapper[5123]: I1212 15:21:45.926406 5123 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-multus/cni-sysctl-allowlist-ds-mdpg8" podUID="6eb483de-06e5-4975-b29a-7fd9bc7674a9" containerName="kube-multus-additional-cni-plugins" containerID="cri-o://98b4e7c2a0fa09f214b8cbc48f52ccee3a3c7806bdd076dfd394403ea4fe6e1f" gracePeriod=30
Dec 12 15:21:45 crc kubenswrapper[5123]: I1212 15:21:45.935129 5123 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-hznms"]
Dec 12 15:21:45 crc kubenswrapper[5123]: I1212 15:21:45.953977 5123 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-9ddfb9f55-bmckw"]
Dec 12 15:21:45 crc kubenswrapper[5123]: I1212 15:21:45.961172 5123 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-547dbd544d-rkcvb"]
Dec 12 15:21:45 crc kubenswrapper[5123]: I1212 15:21:45.971762 5123 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-8596bd845d-qvmj6"]
Dec 12 15:21:45 crc kubenswrapper[5123]: I1212 15:21:45.975767 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-ts2mt\" (UID: \"40d826be-27f2-49a2-afa5-a6a9cc8f9bf6\") " pod="openshift-image-registry/image-registry-66587d64c8-ts2mt"
Dec 12 15:21:45 crc kubenswrapper[5123]: E1212 15:21:45.977064 5123 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-12 15:21:46.477039549 +0000 UTC m=+135.286992150 (durationBeforeRetry 500ms).
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-ts2mt" (UID: "40d826be-27f2-49a2-afa5-a6a9cc8f9bf6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 12 15:21:45 crc kubenswrapper[5123]: I1212 15:21:45.991610 5123 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-g9nc4"]
Dec 12 15:21:46 crc kubenswrapper[5123]: I1212 15:21:46.019724 5123 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-6b9cb4dbcf-bbdv4"]
Dec 12 15:21:46 crc kubenswrapper[5123]: W1212 15:21:46.039330 5123 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda920b381_c5d3_4a28_92dc_c092a8ffeb69.slice/crio-eb64076bf008b3002ffc9994baee90fcf38795dc8a4e64c2509ca262451171cc WatchSource:0}: Error finding container eb64076bf008b3002ffc9994baee90fcf38795dc8a4e64c2509ca262451171cc: Status 404 returned error can't find the container with id eb64076bf008b3002ffc9994baee90fcf38795dc8a4e64c2509ca262451171cc
Dec 12 15:21:46 crc kubenswrapper[5123]: I1212 15:21:46.051637 5123 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-86c45576b9-76wqd"]
Dec 12 15:21:46 crc kubenswrapper[5123]: I1212 15:21:46.053126 5123 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-jd7j9"]
Dec 12 15:21:46 crc kubenswrapper[5123]: I1212 15:21:46.077576 5123 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-67c9d58cbb-mvm2v"]
Dec 12 15:21:46 crc kubenswrapper[5123]: I1212 15:21:46.083020 5123 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Dec 12 15:21:46 crc kubenswrapper[5123]: E1212 15:21:46.083328 5123 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-12 15:21:46.583289758 +0000 UTC m=+135.393242259 (durationBeforeRetry 500ms).
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 12 15:21:46 crc kubenswrapper[5123]: I1212 15:21:46.093345 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-ts2mt\" (UID: \"40d826be-27f2-49a2-afa5-a6a9cc8f9bf6\") " pod="openshift-image-registry/image-registry-66587d64c8-ts2mt"
Dec 12 15:21:46 crc kubenswrapper[5123]: E1212 15:21:46.093865 5123 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-12 15:21:46.593845097 +0000 UTC m=+135.403797608 (durationBeforeRetry 500ms).
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-ts2mt" (UID: "40d826be-27f2-49a2-afa5-a6a9cc8f9bf6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 12 15:21:46 crc kubenswrapper[5123]: I1212 15:21:46.113058 5123 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-74545575db-qpxdh"]
Dec 12 15:21:46 crc kubenswrapper[5123]: I1212 15:21:46.115261 5123 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-9qnbt"]
Dec 12 15:21:46 crc kubenswrapper[5123]: I1212 15:21:46.182868 5123 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-7f5c659b84-nd2rm"]
Dec 12 15:21:46 crc kubenswrapper[5123]: I1212 15:21:46.197078 5123 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Dec 12 15:21:46 crc kubenswrapper[5123]: E1212 15:21:46.197590 5123 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-12 15:21:46.697533806 +0000 UTC m=+135.507486317 (durationBeforeRetry 500ms).
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 12 15:21:46 crc kubenswrapper[5123]: I1212 15:21:46.198010 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-ts2mt\" (UID: \"40d826be-27f2-49a2-afa5-a6a9cc8f9bf6\") " pod="openshift-image-registry/image-registry-66587d64c8-ts2mt"
Dec 12 15:21:46 crc kubenswrapper[5123]: E1212 15:21:46.199632 5123 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-12 15:21:46.699617631 +0000 UTC m=+135.509570142 (durationBeforeRetry 500ms).
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-ts2mt" (UID: "40d826be-27f2-49a2-afa5-a6a9cc8f9bf6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 12 15:21:46 crc kubenswrapper[5123]: W1212 15:21:46.256115 5123 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6a9ae5f6_97bd_46ac_bafa_ca1b4452a141.slice/crio-275dcaee8ec2dad5a96078d464f73e8d70906c91cd1779224bc8297d57273f97 WatchSource:0}: Error finding container 275dcaee8ec2dad5a96078d464f73e8d70906c91cd1779224bc8297d57273f97: Status 404 returned error can't find the container with id 275dcaee8ec2dad5a96078d464f73e8d70906c91cd1779224bc8297d57273f97
Dec 12 15:21:46 crc kubenswrapper[5123]: I1212 15:21:46.299063 5123 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Dec 12 15:21:46 crc kubenswrapper[5123]: E1212 15:21:46.299442 5123 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-12 15:21:46.799414485 +0000 UTC m=+135.609366996 (durationBeforeRetry 500ms).
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 12 15:21:46 crc kubenswrapper[5123]: I1212 15:21:46.407809 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-ts2mt\" (UID: \"40d826be-27f2-49a2-afa5-a6a9cc8f9bf6\") " pod="openshift-image-registry/image-registry-66587d64c8-ts2mt"
Dec 12 15:21:46 crc kubenswrapper[5123]: E1212 15:21:46.408352 5123 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-12 15:21:46.908332139 +0000 UTC m=+135.718284650 (durationBeforeRetry 500ms).
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-ts2mt" (UID: "40d826be-27f2-49a2-afa5-a6a9cc8f9bf6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 12 15:21:46 crc kubenswrapper[5123]: I1212 15:21:46.493666 5123 ???:1] "http: TLS handshake error from 192.168.126.11:55094: no serving certificate available for the kubelet"
Dec 12 15:21:46 crc kubenswrapper[5123]: I1212 15:21:46.509570 5123 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Dec 12 15:21:46 crc kubenswrapper[5123]: E1212 15:21:46.509906 5123 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-12 15:21:47.00987438 +0000 UTC m=+135.819826901 (durationBeforeRetry 500ms).
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 12 15:21:46 crc kubenswrapper[5123]: I1212 15:21:46.518014 5123 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-fhkjl" event={"ID":"17b87002-b798-480a-8e17-83053d698239","Type":"ContainerStarted","Data":"b3c33cd3d6b565d893e534d13411e8f7f653920ae15221812e255f85bb4e9228"}
Dec 12 15:21:46 crc kubenswrapper[5123]: I1212 15:21:46.558539 5123 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-mvm2v" event={"ID":"632abe1b-1a43-457c-86db-62fdb0572a0e","Type":"ContainerStarted","Data":"a2a63b5930e7c067dde799029753b84917e7c469a12214814e9a59d235835982"}
Dec 12 15:21:46 crc kubenswrapper[5123]: I1212 15:21:46.565156 5123 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-67c89758df-vqqzf" event={"ID":"9da0a55f-2526-45cc-b820-1b31ce63745c","Type":"ContainerStarted","Data":"1f457bc7ca4e634eea8b5eb356ff59c3b5c83a025167998d77a4a1a231263ca8"}
Dec 12 15:21:46 crc kubenswrapper[5123]: I1212 15:21:46.590986 5123 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" event={"ID":"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141","Type":"ContainerStarted","Data":"275dcaee8ec2dad5a96078d464f73e8d70906c91cd1779224bc8297d57273f97"}
Dec 12 15:21:46 crc kubenswrapper[5123]: I1212 15:21:46.598679 5123 ???:1] "http: TLS handshake error from 192.168.126.11:55096: no serving certificate available for the kubelet"
Dec 12 15:21:46 crc kubenswrapper[5123]: I1212 15:21:46.604388 5123 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-kr28r" event={"ID":"5357a0b5-86ce-437b-b973-0bc2be3f85fd","Type":"ContainerStarted","Data":"5178362fa1182f9201d3253dfac726f5393470a0ec7712241f2ea10674db913b"}
Dec 12 15:21:46 crc kubenswrapper[5123]: I1212 15:21:46.608679 5123 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-hmprz" event={"ID":"e6c3a697-51e4-44dd-a38c-3287db85ce50","Type":"ContainerStarted","Data":"679b2a5e0b48cf8d7774f120476584a9f51ac1c3bd92810b6414d2fb0665914f"}
Dec 12 15:21:46 crc kubenswrapper[5123]: I1212 15:21:46.611406 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-ts2mt\" (UID: \"40d826be-27f2-49a2-afa5-a6a9cc8f9bf6\") " pod="openshift-image-registry/image-registry-66587d64c8-ts2mt"
Dec 12 15:21:46 crc kubenswrapper[5123]: E1212 15:21:46.612112 5123 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-12 15:21:47.112095073 +0000 UTC m=+135.922047574 (durationBeforeRetry 500ms).
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-ts2mt" (UID: "40d826be-27f2-49a2-afa5-a6a9cc8f9bf6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 12 15:21:46 crc kubenswrapper[5123]: I1212 15:21:46.617955 5123 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-65b6cccf98-t4m4d" event={"ID":"bf62556f-373c-41a0-96d4-8f431d629029","Type":"ContainerStarted","Data":"375a45d9be110aa8622d28c0a201e3267a214e3ecf249ce0a329e603a08e0a31"}
Dec 12 15:21:46 crc kubenswrapper[5123]: I1212 15:21:46.619775 5123 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-74545575db-qpxdh" event={"ID":"9d4713bf-88da-43eb-8dd8-2808e76b53c4","Type":"ContainerStarted","Data":"0e2931ac6dddec73b719b401746bc9b50f7ae5f7c0e67d6cb9a7ae3ae4a6bf72"}
Dec 12 15:21:46 crc kubenswrapper[5123]: I1212 15:21:46.639101 5123 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-5b9c976747-9t6q7" event={"ID":"5e31e050-9a37-4e9b-8c0e-3fc2ed640421","Type":"ContainerStarted","Data":"7036ebf4f67b80fbdba8f6d70768b21b078c211fe1b5b44687566f2e64d55031"}
Dec 12 15:21:46 crc kubenswrapper[5123]: I1212 15:21:46.640846 5123 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-9qnbt" event={"ID":"12e31d4b-fe5c-4f42-82f2-75389d8a34d6","Type":"ContainerStarted","Data":"0d4cdbce3dcf929a6628fb8d67ccad7f32e80e09788a217c96bf0d0d766180d6"}
Dec 12 15:21:46 crc kubenswrapper[5123]: I1212 15:21:46.641994 5123 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-69db94689b-gg4kh" event={"ID":"b5bd3e23-721c-45a0-be10-620b5a281623","Type":"ContainerStarted","Data":"773e086b4118aeb0be1902b2696cf758f76b57d506d22382155cca060be24cee"}
Dec 12 15:21:46 crc kubenswrapper[5123]: I1212 15:21:46.643624 5123 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-4hvhp" event={"ID":"9f45ad41-b75a-4549-a242-88e737cb7698","Type":"ContainerStarted","Data":"0a527aa0a014a9e65b49eb0d8dcda91e4f09de9dfd64962f59136d9313ef12f5"}
Dec 12 15:21:46 crc kubenswrapper[5123]: I1212 15:21:46.644620 5123 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-pj4ts" event={"ID":"7b7460e4-e37e-4643-9956-8097d8258066","Type":"ContainerStarted","Data":"734b485fb9d423e0d3d32c510633b2f24dfd037770d861b060f7fb71b0061e6f"}
Dec 12 15:21:46 crc kubenswrapper[5123]: I1212 15:21:46.645652 5123 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-69b85846b6-7pgks" event={"ID":"19fca7bf-f8d6-4e7c-b54d-e98292eb7efd","Type":"ContainerStarted","Data":"2b99b6c04af3fd02b24ec89601d53fc549635c816fea84ef2148c93d7240ab27"}
Dec 12 15:21:46 crc kubenswrapper[5123]: I1212 15:21:46.652183 5123 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-64d44f6ddf-96rdx" event={"ID":"7ff811e4-3864-456b-8e00-b9e2d1c49ed8","Type":"ContainerStarted","Data":"2361263ab41e2623bcd43f11c343bc834af16a27f5998500037d1f0a4bb92033"}
Dec 12 15:21:46 crc kubenswrapper[5123]: I1212 15:21:46.662108 5123 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-hznms" event={"ID":"f4afdf33-53ee-4eeb-83a3-a5a0dc656922","Type":"ContainerStarted","Data":"7140faafcd2247cbd6188b8af5aa428f03a13a045889d90558a09512a8f1048a"}
Dec 12 15:21:46 crc kubenswrapper[5123]: I1212 15:21:46.665537 5123 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-jd7j9" event={"ID":"626346f0-e585-4a37-8c9b-c6e36ee113bc","Type":"ContainerStarted","Data":"f5a0d6b2c6ef8fe6c140637973fb154130fb3b22bacb307a2014d1d785a8f985"}
Dec 12 15:21:46 crc kubenswrapper[5123]: I1212 15:21:46.667304 5123 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-bbdv4" event={"ID":"788dd005-94a6-4a05-a0ce-c4dabe8dc04e","Type":"ContainerStarted","Data":"c39dd5480ccd2aed4e1d4368ce243d46152cc78975907bf481902e730d954c81"}
Dec 12 15:21:46 crc kubenswrapper[5123]: I1212 15:21:46.670116 5123 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-76wqd" event={"ID":"ea13a1f7-48ed-40f9-b5d0-040f13d8f90e","Type":"ContainerStarted","Data":"987601d69d8a06267d454a9036abddb2c8d62bc2f057b722266077e9b5c61981"}
Dec 12 15:21:46 crc kubenswrapper[5123]: I1212 15:21:46.674096 5123 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-dtw8v" event={"ID":"a920b381-c5d3-4a28-92dc-c092a8ffeb69","Type":"ContainerStarted","Data":"eb64076bf008b3002ffc9994baee90fcf38795dc8a4e64c2509ca262451171cc"}
Dec 12 15:21:46 crc kubenswrapper[5123]: I1212 15:21:46.676850 5123 generic.go:358] "Generic (PLEG): container finished" podID="2c1e4fb9-bde9-46df-8ac0-c0b457ca767f" containerID="ef5ec51043ab59bd35282885fef9c882da8345339e8b5f08c9e667341adc54b7" exitCode=0
Dec 12 15:21:46 crc kubenswrapper[5123]: I1212 15:21:46.676921 5123 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-5777786469-9j9pt" event={"ID":"2c1e4fb9-bde9-46df-8ac0-c0b457ca767f","Type":"ContainerDied","Data":"ef5ec51043ab59bd35282885fef9c882da8345339e8b5f08c9e667341adc54b7"}
Dec 12 15:21:46 crc kubenswrapper[5123]: I1212 15:21:46.855299 5123 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-66458b6674-cqp44" event={"ID":"c4465de2-5e85-451d-a998-dcff71c6d37c","Type":"ContainerStarted","Data":"3a8e1ad4787b4dbc70707975a2240d26e7c4aa17123bfc16f3743df7363f2c36"}
Dec 12 15:21:46 crc kubenswrapper[5123]: I1212 15:21:46.855890 5123 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Dec 12 15:21:46 crc kubenswrapper[5123]: E1212 15:21:46.858347 5123 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-12 15:21:47.358277361 +0000 UTC m=+136.168229892 (durationBeforeRetry 500ms).
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 12 15:21:46 crc kubenswrapper[5123]: I1212 15:21:46.859716 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-ts2mt\" (UID: \"40d826be-27f2-49a2-afa5-a6a9cc8f9bf6\") " pod="openshift-image-registry/image-registry-66587d64c8-ts2mt"
Dec 12 15:21:46 crc kubenswrapper[5123]: I1212 15:21:46.860244 5123 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-cnq9c container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Dec 12 15:21:46 crc kubenswrapper[5123]: [-]has-synced failed: reason withheld
Dec 12 15:21:46 crc kubenswrapper[5123]: [+]process-running ok
Dec 12 15:21:46 crc kubenswrapper[5123]: healthz check failed
Dec 12 15:21:46 crc kubenswrapper[5123]: I1212 15:21:46.860359 5123 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-cnq9c" podUID="dd669a9c-af5d-4084-bda4-81a455d4c281" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Dec 12 15:21:46 crc kubenswrapper[5123]: E1212 15:21:46.861672 5123 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-12 15:21:47.361655958 +0000 UTC m=+136.171608469 (durationBeforeRetry 500ms).
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-ts2mt" (UID: "40d826be-27f2-49a2-afa5-a6a9cc8f9bf6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 12 15:21:46 crc kubenswrapper[5123]: I1212 15:21:46.861899 5123 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-authentication/oauth-openshift-66458b6674-cqp44"
Dec 12 15:21:46 crc kubenswrapper[5123]: I1212 15:21:46.866516 5123 patch_prober.go:28] interesting pod/oauth-openshift-66458b6674-cqp44 container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.9:6443/healthz\": dial tcp 10.217.0.9:6443: connect: connection refused" start-of-body=
Dec 12 15:21:46 crc kubenswrapper[5123]: I1212 15:21:46.866592 5123 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-66458b6674-cqp44" podUID="c4465de2-5e85-451d-a998-dcff71c6d37c" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.9:6443/healthz\": dial tcp 10.217.0.9:6443: connect: connection refused"
Dec 12 15:21:46 crc kubenswrapper[5123]: I1212 15:21:46.870765 5123 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-547dbd544d-rkcvb" event={"ID":"17ce8feb-99e5-42f3-a808-2dd39bc57377","Type":"ContainerStarted","Data":"d3531403470831101241a9a7f0ebbf7cb9907bd1a1d83a7c259b96467ec19779"}
Dec 12 15:21:46 crc kubenswrapper[5123]: I1212 15:21:46.888509 5123 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-kr28r" podStartSLOduration=103.888482161 podStartE2EDuration="1m43.888482161s" podCreationTimestamp="2025-12-12 15:20:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 15:21:46.885442576 +0000 UTC m=+135.695395097" watchObservedRunningTime="2025-12-12 15:21:46.888482161 +0000 UTC m=+135.698434672"
Dec 12 15:21:46 crc kubenswrapper[5123]: I1212 15:21:46.891853 5123 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-c6l4m" event={"ID":"6bf5e136-4d51-49ba-bb1f-3e4fd5c82154","Type":"ContainerStarted","Data":"a43a11dbe64edd57e86f405ec3fee142af87184eed652af2107307eea3ee48e3"}
Dec 12 15:21:46 crc kubenswrapper[5123]: I1212 15:21:46.893859 5123 ???:1] "http: TLS handshake error from 192.168.126.11:55104: no serving certificate available for the kubelet"
Dec 12 15:21:46 crc kubenswrapper[5123]: I1212 15:21:46.897886 5123 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-7f5c659b84-nd2rm" event={"ID":"5254d27a-3c04-4921-b5e9-272cc901663d","Type":"ContainerStarted","Data":"ab94fac995cf502fbbcc762b42e1b2fbb533fb50fe7d5b36a02a8a99784d35ae"}
Dec 12 15:21:46 crc kubenswrapper[5123]: I1212 15:21:46.909488 5123 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-8596bd845d-qvmj6" event={"ID":"735555bc-661a-4a48-a615-c88944194992","Type":"ContainerStarted","Data":"5c301c1a54a3b1eb2ffcf53ef014903acefc61b7821b5fd3d12b682d19cd0735"}
Dec 12 15:21:46 crc kubenswrapper[5123]: I1212 15:21:46.930867 5123 generic.go:358] "Generic (PLEG): container finished" podID="c1681a2f-153f-44c0-901e-e85b401d30ee" containerID="07b5552f5fab61bda1ddf82f4c9d2105ed681233d697b33ac3c9e4da459feeea" exitCode=0
Dec 12 15:21:46 crc kubenswrapper[5123]: I1212 15:21:46.931050 5123 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29425875-lxsft" event={"ID":"c1681a2f-153f-44c0-901e-e85b401d30ee","Type":"ContainerDied","Data":"07b5552f5fab61bda1ddf82f4c9d2105ed681233d697b33ac3c9e4da459feeea"}
Dec 12 15:21:46 crc kubenswrapper[5123]: I1212 15:21:46.933003 5123 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-hkjk6" event={"ID":"439fab76-d95a-43fc-b800-b540d053001d","Type":"ContainerStarted","Data":"11806260e0e69455d0b9e368763ce5ada2be1549eaac20cbfd85a0422d4b415e"}
Dec 12 15:21:46 crc kubenswrapper[5123]: I1212 15:21:46.934283 5123 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-dc699" event={"ID":"5ccaedd0-63de-4f5b-9106-b556e01fa2b8","Type":"ContainerStarted","Data":"25dbc077748cec36b1c8149f6634d3b1cf728585204f3e9f38627bf3abde4f3f"}
Dec 12 15:21:46 crc kubenswrapper[5123]: I1212 15:21:46.936080 5123 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-866fcbc849-t68lp" event={"ID":"ae911826-fe03-4967-bdf1-f1eb5fc10ea4","Type":"ContainerStarted","Data":"72224406b80358371c16800a58322606ec07b12d38707eaf69a407f72598241f"}
Dec 12 15:21:46 crc kubenswrapper[5123]: I1212 15:21:46.939422 5123 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-799b87ffcd-kvxss" event={"ID":"7e490e0b-11da-4093-bd3c-a328ebd6e304","Type":"ContainerStarted","Data":"25b9df7f96e79c82c2ea337a8fbf986bcd2d15b4b9e22c681994b351ce0574af"}
Dec 12 15:21:46 crc kubenswrapper[5123]: I1212 15:21:46.940337 5123 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-65b6cccf98-t4m4d" podStartSLOduration=102.940313813 podStartE2EDuration="1m42.940313813s" podCreationTimestamp="2025-12-12 15:20:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 
+0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 15:21:46.940015004 +0000 UTC m=+135.749967525" watchObservedRunningTime="2025-12-12 15:21:46.940313813 +0000 UTC m=+135.750266334" Dec 12 15:21:46 crc kubenswrapper[5123]: I1212 15:21:46.947969 5123 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-g9nc4" event={"ID":"68ef1469-eefc-4e7d-b8a5-bf0550b84694","Type":"ContainerStarted","Data":"1961a0fbbe66d7d604b36c4d3acf4dad3e6543c4fb52c65ebd21f177f53e6b3e"} Dec 12 15:21:46 crc kubenswrapper[5123]: I1212 15:21:46.961828 5123 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 12 15:21:46 crc kubenswrapper[5123]: E1212 15:21:46.962436 5123 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-12 15:21:47.462396608 +0000 UTC m=+136.272349129 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 15:21:46 crc kubenswrapper[5123]: I1212 15:21:46.970787 5123 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-lx5l5" event={"ID":"286dff49-96d3-4c06-aa40-a4168098880e","Type":"ContainerStarted","Data":"702b7b9d8673c9e2aa18800e52bd68b71df642cdd1e59283f0266395c249a3da"} Dec 12 15:21:46 crc kubenswrapper[5123]: I1212 15:21:46.982240 5123 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-t8xgq" event={"ID":"d22355c6-2b0f-4caa-aa4b-92bd124103ad","Type":"ContainerStarted","Data":"393dd55ae30a90c69ddd121afb6e60c6322a7504549058c316a24e2ab4be8339"} Dec 12 15:21:46 crc kubenswrapper[5123]: I1212 15:21:46.983477 5123 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-66458b6674-cqp44" podStartSLOduration=103.98344948 podStartE2EDuration="1m43.98344948s" podCreationTimestamp="2025-12-12 15:20:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 15:21:46.978721032 +0000 UTC m=+135.788673543" watchObservedRunningTime="2025-12-12 15:21:46.98344948 +0000 UTC m=+135.793401991" Dec 12 15:21:46 crc kubenswrapper[5123]: I1212 15:21:46.998673 5123 ???:1] "http: TLS handshake error from 192.168.126.11:55120: no serving certificate available for the kubelet" Dec 12 15:21:47 crc kubenswrapper[5123]: I1212 15:21:47.047261 5123 
kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" event={"ID":"f863fff9-286a-45fa-b8f0-8a86994b8440","Type":"ContainerStarted","Data":"72e83fe106bab96ce320ed1cf247a470413066ed9b483b5f1fce56eb68855b0a"} Dec 12 15:21:47 crc kubenswrapper[5123]: I1212 15:21:47.052029 5123 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-dtknn" event={"ID":"77c05f1e-26be-4120-9eb2-0637d83f86af","Type":"ContainerStarted","Data":"911d8a70b5d1aa6873cc2a8657ee564c1bc4f73a84550c8dc00ff02ea351f9e3"} Dec 12 15:21:47 crc kubenswrapper[5123]: I1212 15:21:47.054867 5123 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-9ddfb9f55-bmckw" event={"ID":"e077c741-1ed0-4ffa-80a7-6ce54aab5fe0","Type":"ContainerStarted","Data":"8649e8bf4ce4cc7f048e8cc20f4b07570fed80caa6bc7fabf83e767a51720665"} Dec 12 15:21:47 crc kubenswrapper[5123]: I1212 15:21:47.071625 5123 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-qpz79" event={"ID":"45c4bae4-fd5a-46dc-b8ea-0915b2c5789e","Type":"ContainerStarted","Data":"2bac9d6f021268ec984d65a0a3f55071e80df136b1a1062867ed74b539079ef3"} Dec 12 15:21:47 crc kubenswrapper[5123]: I1212 15:21:47.095250 5123 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-75ffdb6fcd-tmds4" event={"ID":"d9b2cf1e-7b13-44dc-8819-74f4bd24c609","Type":"ContainerStarted","Data":"203589d8f14400fa0f464fbea94f2910a26847186c9a5e6f178d26dba28261cf"} Dec 12 15:21:47 crc kubenswrapper[5123]: E1212 15:21:47.098086 5123 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. 
No retries permitted until 2025-12-12 15:21:47.598062464 +0000 UTC m=+136.408014975 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-ts2mt" (UID: "40d826be-27f2-49a2-afa5-a6a9cc8f9bf6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 15:21:47 crc kubenswrapper[5123]: I1212 15:21:47.097152 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-ts2mt\" (UID: \"40d826be-27f2-49a2-afa5-a6a9cc8f9bf6\") " pod="openshift-image-registry/image-registry-66587d64c8-ts2mt" Dec 12 15:21:47 crc kubenswrapper[5123]: I1212 15:21:47.103932 5123 ???:1] "http: TLS handshake error from 192.168.126.11:55130: no serving certificate available for the kubelet" Dec 12 15:21:47 crc kubenswrapper[5123]: I1212 15:21:47.202630 5123 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-qpz79" podStartSLOduration=103.20260192 podStartE2EDuration="1m43.20260192s" podCreationTimestamp="2025-12-12 15:20:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 15:21:47.201619209 +0000 UTC m=+136.011571740" watchObservedRunningTime="2025-12-12 15:21:47.20260192 +0000 UTC m=+136.012554431" Dec 12 15:21:47 crc kubenswrapper[5123]: E1212 15:21:47.203156 5123 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-12 15:21:47.703125947 +0000 UTC m=+136.513078458 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 15:21:47 crc kubenswrapper[5123]: I1212 15:21:47.202672 5123 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 12 15:21:47 crc kubenswrapper[5123]: I1212 15:21:47.204074 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-ts2mt\" (UID: \"40d826be-27f2-49a2-afa5-a6a9cc8f9bf6\") " pod="openshift-image-registry/image-registry-66587d64c8-ts2mt" Dec 12 15:21:47 crc kubenswrapper[5123]: E1212 15:21:47.204618 5123 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-12 15:21:47.704609553 +0000 UTC m=+136.514562064 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-ts2mt" (UID: "40d826be-27f2-49a2-afa5-a6a9cc8f9bf6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 15:21:47 crc kubenswrapper[5123]: I1212 15:21:47.288678 5123 ???:1] "http: TLS handshake error from 192.168.126.11:55144: no serving certificate available for the kubelet" Dec 12 15:21:47 crc kubenswrapper[5123]: I1212 15:21:47.306980 5123 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 12 15:21:47 crc kubenswrapper[5123]: E1212 15:21:47.308746 5123 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-12 15:21:47.808717285 +0000 UTC m=+136.618669806 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 15:21:47 crc kubenswrapper[5123]: I1212 15:21:47.432249 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-ts2mt\" (UID: \"40d826be-27f2-49a2-afa5-a6a9cc8f9bf6\") " pod="openshift-image-registry/image-registry-66587d64c8-ts2mt" Dec 12 15:21:47 crc kubenswrapper[5123]: E1212 15:21:47.433924 5123 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-12 15:21:47.933894958 +0000 UTC m=+136.743847469 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-ts2mt" (UID: "40d826be-27f2-49a2-afa5-a6a9cc8f9bf6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 15:21:47 crc kubenswrapper[5123]: I1212 15:21:47.513758 5123 ???:1] "http: TLS handshake error from 192.168.126.11:55146: no serving certificate available for the kubelet" Dec 12 15:21:47 crc kubenswrapper[5123]: I1212 15:21:47.799976 5123 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 12 15:21:47 crc kubenswrapper[5123]: E1212 15:21:47.812997 5123 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-12 15:21:48.312958712 +0000 UTC m=+137.122911233 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 15:21:47 crc kubenswrapper[5123]: I1212 15:21:47.818538 5123 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-cnq9c container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 12 15:21:47 crc kubenswrapper[5123]: [-]has-synced failed: reason withheld Dec 12 15:21:47 crc kubenswrapper[5123]: [+]process-running ok Dec 12 15:21:47 crc kubenswrapper[5123]: healthz check failed Dec 12 15:21:47 crc kubenswrapper[5123]: I1212 15:21:47.818666 5123 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-cnq9c" podUID="dd669a9c-af5d-4084-bda4-81a455d4c281" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 12 15:21:47 crc kubenswrapper[5123]: E1212 15:21:47.824599 5123 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-12 15:21:48.324571167 +0000 UTC m=+137.134523678 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-ts2mt" (UID: "40d826be-27f2-49a2-afa5-a6a9cc8f9bf6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 15:21:47 crc kubenswrapper[5123]: I1212 15:21:47.823887 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-ts2mt\" (UID: \"40d826be-27f2-49a2-afa5-a6a9cc8f9bf6\") " pod="openshift-image-registry/image-registry-66587d64c8-ts2mt" Dec 12 15:21:47 crc kubenswrapper[5123]: I1212 15:21:47.875120 5123 ???:1] "http: TLS handshake error from 192.168.126.11:55148: no serving certificate available for the kubelet" Dec 12 15:21:47 crc kubenswrapper[5123]: I1212 15:21:47.929724 5123 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 12 15:21:47 crc kubenswrapper[5123]: E1212 15:21:47.930304 5123 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-12 15:21:48.430276688 +0000 UTC m=+137.240229199 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 15:21:48 crc kubenswrapper[5123]: I1212 15:21:48.133598 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-ts2mt\" (UID: \"40d826be-27f2-49a2-afa5-a6a9cc8f9bf6\") " pod="openshift-image-registry/image-registry-66587d64c8-ts2mt" Dec 12 15:21:48 crc kubenswrapper[5123]: E1212 15:21:48.134259 5123 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-12 15:21:48.634238108 +0000 UTC m=+137.444190619 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-ts2mt" (UID: "40d826be-27f2-49a2-afa5-a6a9cc8f9bf6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 15:21:48 crc kubenswrapper[5123]: I1212 15:21:48.246489 5123 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 12 15:21:48 crc kubenswrapper[5123]: E1212 15:21:48.246909 5123 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-12 15:21:48.746878688 +0000 UTC m=+137.556831209 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 15:21:48 crc kubenswrapper[5123]: I1212 15:21:48.247433 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-ts2mt\" (UID: \"40d826be-27f2-49a2-afa5-a6a9cc8f9bf6\") " pod="openshift-image-registry/image-registry-66587d64c8-ts2mt" Dec 12 15:21:48 crc kubenswrapper[5123]: E1212 15:21:48.247953 5123 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-12 15:21:48.74792828 +0000 UTC m=+137.557880791 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-ts2mt" (UID: "40d826be-27f2-49a2-afa5-a6a9cc8f9bf6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 15:21:48 crc kubenswrapper[5123]: I1212 15:21:48.284275 5123 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-c7cpz" Dec 12 15:21:48 crc kubenswrapper[5123]: I1212 15:21:48.420904 5123 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 12 15:21:48 crc kubenswrapper[5123]: E1212 15:21:48.423087 5123 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-12 15:21:48.923058115 +0000 UTC m=+137.733010626 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 15:21:48 crc kubenswrapper[5123]: I1212 15:21:48.523556 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-ts2mt\" (UID: \"40d826be-27f2-49a2-afa5-a6a9cc8f9bf6\") " pod="openshift-image-registry/image-registry-66587d64c8-ts2mt" Dec 12 15:21:48 crc kubenswrapper[5123]: E1212 15:21:48.524305 5123 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-12 15:21:49.024277906 +0000 UTC m=+137.834230417 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-ts2mt" (UID: "40d826be-27f2-49a2-afa5-a6a9cc8f9bf6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 15:21:48 crc kubenswrapper[5123]: I1212 15:21:48.531339 5123 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-799b87ffcd-kvxss" event={"ID":"7e490e0b-11da-4093-bd3c-a328ebd6e304","Type":"ContainerStarted","Data":"4e25531f963841e942553a012c166b2803d241c17dda294f6ef82b28abb36498"} Dec 12 15:21:48 crc kubenswrapper[5123]: I1212 15:21:48.578078 5123 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-t8xgq" event={"ID":"d22355c6-2b0f-4caa-aa4b-92bd124103ad","Type":"ContainerStarted","Data":"163dad4eae982a7fbe3611f84cb5a376e8bedd2fa6b19fe25ea21dd640ad65ce"} Dec 12 15:21:48 crc kubenswrapper[5123]: I1212 15:21:48.644069 5123 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 12 15:21:48 crc kubenswrapper[5123]: E1212 15:21:48.645088 5123 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-12 15:21:49.145051791 +0000 UTC m=+137.955004302 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 15:21:48 crc kubenswrapper[5123]: I1212 15:21:48.656530 5123 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-qpz79" event={"ID":"45c4bae4-fd5a-46dc-b8ea-0915b2c5789e","Type":"ContainerStarted","Data":"76ef2c697018692b82ed827e8b1c660169c2584addd53a40427bc946da0996c5"} Dec 12 15:21:49 crc kubenswrapper[5123]: I1212 15:21:49.032469 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-ts2mt\" (UID: \"40d826be-27f2-49a2-afa5-a6a9cc8f9bf6\") " pod="openshift-image-registry/image-registry-66587d64c8-ts2mt" Dec 12 15:21:49 crc kubenswrapper[5123]: E1212 15:21:49.032884 5123 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-12 15:21:49.532864779 +0000 UTC m=+138.342817290 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-ts2mt" (UID: "40d826be-27f2-49a2-afa5-a6a9cc8f9bf6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 15:21:49 crc kubenswrapper[5123]: I1212 15:21:49.034707 5123 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-67c89758df-vqqzf" event={"ID":"9da0a55f-2526-45cc-b820-1b31ce63745c","Type":"ContainerStarted","Data":"5d3099286f7fd25b3104336aaf7c27da1dad367a5c12eb074905fdaf34882398"} Dec 12 15:21:49 crc kubenswrapper[5123]: I1212 15:21:49.036077 5123 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-console-operator/console-operator-67c89758df-vqqzf" Dec 12 15:21:49 crc kubenswrapper[5123]: I1212 15:21:49.039148 5123 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-hmprz" event={"ID":"e6c3a697-51e4-44dd-a38c-3287db85ce50","Type":"ContainerStarted","Data":"ff4937f2b09b7993da1a0693383d92a8f5a2c9e501ed912a19a1f2c769687a27"} Dec 12 15:21:49 crc kubenswrapper[5123]: I1212 15:21:49.041411 5123 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-5b9c976747-9t6q7" event={"ID":"5e31e050-9a37-4e9b-8c0e-3fc2ed640421","Type":"ContainerStarted","Data":"387310541c8d2cb8157204b6a9f3ee2df1cdc71aa77a1edde3852318b1b323d7"} Dec 12 15:21:49 crc kubenswrapper[5123]: I1212 15:21:49.044957 5123 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-4hvhp" 
event={"ID":"9f45ad41-b75a-4549-a242-88e737cb7698","Type":"ContainerStarted","Data":"c466c184aeb39284fc51e2160a69936dd17cdc3f6201b507491268c1bfb3a586"} Dec 12 15:21:49 crc kubenswrapper[5123]: I1212 15:21:49.127637 5123 ???:1] "http: TLS handshake error from 192.168.126.11:55150: no serving certificate available for the kubelet" Dec 12 15:21:49 crc kubenswrapper[5123]: I1212 15:21:49.142426 5123 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-cnq9c container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 12 15:21:49 crc kubenswrapper[5123]: [-]has-synced failed: reason withheld Dec 12 15:21:49 crc kubenswrapper[5123]: [+]process-running ok Dec 12 15:21:49 crc kubenswrapper[5123]: healthz check failed Dec 12 15:21:49 crc kubenswrapper[5123]: I1212 15:21:49.143160 5123 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 12 15:21:49 crc kubenswrapper[5123]: E1212 15:21:49.143639 5123 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-12 15:21:49.643608669 +0000 UTC m=+138.453561180 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 15:21:49 crc kubenswrapper[5123]: I1212 15:21:49.143687 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-ts2mt\" (UID: \"40d826be-27f2-49a2-afa5-a6a9cc8f9bf6\") " pod="openshift-image-registry/image-registry-66587d64c8-ts2mt" Dec 12 15:21:49 crc kubenswrapper[5123]: E1212 15:21:49.145566 5123 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-12 15:21:49.64554891 +0000 UTC m=+138.455501421 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-ts2mt" (UID: "40d826be-27f2-49a2-afa5-a6a9cc8f9bf6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 15:21:49 crc kubenswrapper[5123]: I1212 15:21:49.143431 5123 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-cnq9c" podUID="dd669a9c-af5d-4084-bda4-81a455d4c281" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 12 15:21:49 crc kubenswrapper[5123]: I1212 15:21:49.153705 5123 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console-operator/console-operator-67c89758df-vqqzf" podStartSLOduration=105.153635894 podStartE2EDuration="1m45.153635894s" podCreationTimestamp="2025-12-12 15:20:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 15:21:49.152018544 +0000 UTC m=+137.961971065" watchObservedRunningTime="2025-12-12 15:21:49.153635894 +0000 UTC m=+137.963588405" Dec 12 15:21:49 crc kubenswrapper[5123]: I1212 15:21:49.153911 5123 patch_prober.go:28] interesting pod/console-operator-67c89758df-vqqzf container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.11:8443/readyz\": dial tcp 10.217.0.11:8443: connect: connection refused" start-of-body= Dec 12 15:21:49 crc kubenswrapper[5123]: I1212 15:21:49.154006 5123 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-67c89758df-vqqzf" podUID="9da0a55f-2526-45cc-b820-1b31ce63745c" containerName="console-operator" 
probeResult="failure" output="Get \"https://10.217.0.11:8443/readyz\": dial tcp 10.217.0.11:8443: connect: connection refused" Dec 12 15:21:49 crc kubenswrapper[5123]: I1212 15:21:49.166752 5123 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-64d44f6ddf-96rdx" event={"ID":"7ff811e4-3864-456b-8e00-b9e2d1c49ed8","Type":"ContainerStarted","Data":"a7386e1951a41d2dc0f62619ee6a87abfde0e3effb9afbe4f08906b810da3bc9"} Dec 12 15:21:49 crc kubenswrapper[5123]: I1212 15:21:49.219833 5123 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-4hvhp" podStartSLOduration=105.219810753 podStartE2EDuration="1m45.219810753s" podCreationTimestamp="2025-12-12 15:20:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 15:21:49.217664076 +0000 UTC m=+138.027616607" watchObservedRunningTime="2025-12-12 15:21:49.219810753 +0000 UTC m=+138.029763264" Dec 12 15:21:49 crc kubenswrapper[5123]: I1212 15:21:49.223113 5123 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-hznms" event={"ID":"f4afdf33-53ee-4eeb-83a3-a5a0dc656922","Type":"ContainerStarted","Data":"5075713a07f28fc134f083c4aef4b6b9063b26821caa7a98444c910d691d2a79"} Dec 12 15:21:49 crc kubenswrapper[5123]: I1212 15:21:49.224679 5123 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-hznms" Dec 12 15:21:49 crc kubenswrapper[5123]: I1212 15:21:49.245275 5123 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: 
\"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 12 15:21:49 crc kubenswrapper[5123]: E1212 15:21:49.247488 5123 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-12 15:21:49.747454392 +0000 UTC m=+138.557406913 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 15:21:49 crc kubenswrapper[5123]: I1212 15:21:49.307017 5123 patch_prober.go:28] interesting pod/packageserver-7d4fc7d867-hznms container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.32:5443/healthz\": dial tcp 10.217.0.32:5443: connect: connection refused" start-of-body= Dec 12 15:21:49 crc kubenswrapper[5123]: I1212 15:21:49.307090 5123 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-hznms" podUID="f4afdf33-53ee-4eeb-83a3-a5a0dc656922" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.32:5443/healthz\": dial tcp 10.217.0.32:5443: connect: connection refused" Dec 12 15:21:49 crc kubenswrapper[5123]: I1212 15:21:49.308457 5123 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca-operator/service-ca-operator-5b9c976747-9t6q7" podStartSLOduration=105.308433109 podStartE2EDuration="1m45.308433109s" podCreationTimestamp="2025-12-12 15:20:04 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 15:21:49.308026016 +0000 UTC m=+138.117978527" watchObservedRunningTime="2025-12-12 15:21:49.308433109 +0000 UTC m=+138.118385630" Dec 12 15:21:49 crc kubenswrapper[5123]: I1212 15:21:49.313740 5123 patch_prober.go:28] interesting pod/downloads-747b44746d-xhd9t container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.21:8080/\": dial tcp 10.217.0.21:8080: connect: connection refused" start-of-body= Dec 12 15:21:49 crc kubenswrapper[5123]: I1212 15:21:49.313822 5123 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-747b44746d-xhd9t" podUID="09107a60-87da-4e17-9cc0-6dce06396ab6" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.21:8080/\": dial tcp 10.217.0.21:8080: connect: connection refused" Dec 12 15:21:49 crc kubenswrapper[5123]: I1212 15:21:49.349922 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-ts2mt\" (UID: \"40d826be-27f2-49a2-afa5-a6a9cc8f9bf6\") " pod="openshift-image-registry/image-registry-66587d64c8-ts2mt" Dec 12 15:21:49 crc kubenswrapper[5123]: E1212 15:21:49.352281 5123 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-12 15:21:49.852247476 +0000 UTC m=+138.662199987 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-ts2mt" (UID: "40d826be-27f2-49a2-afa5-a6a9cc8f9bf6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 15:21:49 crc kubenswrapper[5123]: I1212 15:21:49.356811 5123 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-hznms" podStartSLOduration=105.356781098 podStartE2EDuration="1m45.356781098s" podCreationTimestamp="2025-12-12 15:20:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 15:21:49.349711096 +0000 UTC m=+138.159663617" watchObservedRunningTime="2025-12-12 15:21:49.356781098 +0000 UTC m=+138.166733609" Dec 12 15:21:49 crc kubenswrapper[5123]: I1212 15:21:49.386163 5123 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-64d44f6ddf-96rdx" podStartSLOduration=105.386137421 podStartE2EDuration="1m45.386137421s" podCreationTimestamp="2025-12-12 15:20:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 15:21:49.383718485 +0000 UTC m=+138.193671016" watchObservedRunningTime="2025-12-12 15:21:49.386137421 +0000 UTC m=+138.196089932" Dec 12 15:21:49 crc kubenswrapper[5123]: I1212 15:21:49.454793 5123 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: 
\"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 12 15:21:49 crc kubenswrapper[5123]: E1212 15:21:49.455543 5123 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-12 15:21:49.95549132 +0000 UTC m=+138.765443841 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 15:21:49 crc kubenswrapper[5123]: I1212 15:21:49.972251 5123 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 12 15:21:49 crc kubenswrapper[5123]: E1212 15:21:49.972809 5123 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-12 15:21:50.972784537 +0000 UTC m=+139.782737048 (durationBeforeRetry 1s). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 15:21:50 crc kubenswrapper[5123]: I1212 15:21:50.023469 5123 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-cnq9c container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 12 15:21:50 crc kubenswrapper[5123]: [-]has-synced failed: reason withheld Dec 12 15:21:50 crc kubenswrapper[5123]: [+]process-running ok Dec 12 15:21:50 crc kubenswrapper[5123]: healthz check failed Dec 12 15:21:50 crc kubenswrapper[5123]: I1212 15:21:50.023563 5123 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-cnq9c" podUID="dd669a9c-af5d-4084-bda4-81a455d4c281" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 12 15:21:50 crc kubenswrapper[5123]: I1212 15:21:50.161078 5123 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-c6l4m" event={"ID":"6bf5e136-4d51-49ba-bb1f-3e4fd5c82154","Type":"ContainerStarted","Data":"467221fac7d8fad0b6cb2434f28497c340a8f01154e2eee0f6e0d9cb8efe4613"} Dec 12 15:21:50 crc kubenswrapper[5123]: I1212 15:21:50.161156 5123 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-c6l4m" Dec 12 15:21:50 crc kubenswrapper[5123]: I1212 15:21:50.161173 5123 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" 
pod="openshift-controller-manager/controller-manager-65b6cccf98-t4m4d" Dec 12 15:21:50 crc kubenswrapper[5123]: I1212 15:21:50.170760 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-ts2mt\" (UID: \"40d826be-27f2-49a2-afa5-a6a9cc8f9bf6\") " pod="openshift-image-registry/image-registry-66587d64c8-ts2mt" Dec 12 15:21:50 crc kubenswrapper[5123]: I1212 15:21:50.193878 5123 patch_prober.go:28] interesting pod/catalog-operator-75ff9f647d-c6l4m container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.40:8443/healthz\": dial tcp 10.217.0.40:8443: connect: connection refused" start-of-body= Dec 12 15:21:50 crc kubenswrapper[5123]: I1212 15:21:50.193970 5123 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-c6l4m" podUID="6bf5e136-4d51-49ba-bb1f-3e4fd5c82154" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.40:8443/healthz\": dial tcp 10.217.0.40:8443: connect: connection refused" Dec 12 15:21:50 crc kubenswrapper[5123]: E1212 15:21:50.271402 5123 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-12 15:21:50.771365071 +0000 UTC m=+139.581317592 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-ts2mt" (UID: "40d826be-27f2-49a2-afa5-a6a9cc8f9bf6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 15:21:50 crc kubenswrapper[5123]: I1212 15:21:50.278299 5123 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 12 15:21:50 crc kubenswrapper[5123]: E1212 15:21:50.278947 5123 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-12 15:21:50.778923788 +0000 UTC m=+139.588876309 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 15:21:50 crc kubenswrapper[5123]: I1212 15:21:50.282559 5123 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-64d44f6ddf-96rdx" Dec 12 15:21:50 crc kubenswrapper[5123]: I1212 15:21:50.282599 5123 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-console/console-64d44f6ddf-96rdx" Dec 12 15:21:50 crc kubenswrapper[5123]: I1212 15:21:50.282691 5123 patch_prober.go:28] interesting pod/console-operator-67c89758df-vqqzf container/console-operator namespace/openshift-console-operator: Liveness probe status=failure output="Get \"https://10.217.0.11:8443/healthz\": dial tcp 10.217.0.11:8443: connect: connection refused" start-of-body= Dec 12 15:21:50 crc kubenswrapper[5123]: I1212 15:21:50.282733 5123 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-console-operator/console-operator-67c89758df-vqqzf" podUID="9da0a55f-2526-45cc-b820-1b31ce63745c" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.11:8443/healthz\": dial tcp 10.217.0.11:8443: connect: connection refused" Dec 12 15:21:50 crc kubenswrapper[5123]: I1212 15:21:50.282867 5123 patch_prober.go:28] interesting pod/console-64d44f6ddf-96rdx container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.8:8443/health\": dial tcp 10.217.0.8:8443: connect: connection refused" start-of-body= Dec 12 15:21:50 crc kubenswrapper[5123]: I1212 15:21:50.282904 5123 prober.go:120] "Probe failed" 
probeType="Startup" pod="openshift-console/console-64d44f6ddf-96rdx" podUID="7ff811e4-3864-456b-8e00-b9e2d1c49ed8" containerName="console" probeResult="failure" output="Get \"https://10.217.0.8:8443/health\": dial tcp 10.217.0.8:8443: connect: connection refused" Dec 12 15:21:50 crc kubenswrapper[5123]: I1212 15:21:50.537595 5123 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-66458b6674-cqp44" Dec 12 15:21:50 crc kubenswrapper[5123]: I1212 15:21:50.575084 5123 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-c6l4m" podStartSLOduration=106.575046634 podStartE2EDuration="1m46.575046634s" podCreationTimestamp="2025-12-12 15:20:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 15:21:50.263262586 +0000 UTC m=+139.073215097" watchObservedRunningTime="2025-12-12 15:21:50.575046634 +0000 UTC m=+139.384999145" Dec 12 15:21:50 crc kubenswrapper[5123]: I1212 15:21:50.575626 5123 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-65b6cccf98-t4m4d" Dec 12 15:21:50 crc kubenswrapper[5123]: I1212 15:21:50.575401 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-ts2mt\" (UID: \"40d826be-27f2-49a2-afa5-a6a9cc8f9bf6\") " pod="openshift-image-registry/image-registry-66587d64c8-ts2mt" Dec 12 15:21:50 crc kubenswrapper[5123]: E1212 15:21:50.575979 5123 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. 
No retries permitted until 2025-12-12 15:21:51.075955173 +0000 UTC m=+139.885907674 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-ts2mt" (UID: "40d826be-27f2-49a2-afa5-a6a9cc8f9bf6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 15:21:50 crc kubenswrapper[5123]: I1212 15:21:50.690836 5123 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 12 15:21:50 crc kubenswrapper[5123]: E1212 15:21:50.691328 5123 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-12 15:21:51.191299528 +0000 UTC m=+140.001252049 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 15:21:50 crc kubenswrapper[5123]: I1212 15:21:50.715704 5123 ???:1] "http: TLS handshake error from 192.168.126.11:55154: no serving certificate available for the kubelet" Dec 12 15:21:50 crc kubenswrapper[5123]: I1212 15:21:50.722277 5123 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-cnq9c container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 12 15:21:50 crc kubenswrapper[5123]: [-]has-synced failed: reason withheld Dec 12 15:21:50 crc kubenswrapper[5123]: [+]process-running ok Dec 12 15:21:50 crc kubenswrapper[5123]: healthz check failed Dec 12 15:21:50 crc kubenswrapper[5123]: I1212 15:21:50.722367 5123 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-cnq9c" podUID="dd669a9c-af5d-4084-bda4-81a455d4c281" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 12 15:21:51 crc kubenswrapper[5123]: I1212 15:21:51.718978 5123 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 12 15:21:51 crc kubenswrapper[5123]: E1212 15:21:51.719451 5123 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-12 15:21:52.719416388 +0000 UTC m=+141.529368909 (durationBeforeRetry 1s). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 15:21:51 crc kubenswrapper[5123]: I1212 15:21:51.731553 5123 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-cnq9c container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 12 15:21:51 crc kubenswrapper[5123]: [-]has-synced failed: reason withheld Dec 12 15:21:51 crc kubenswrapper[5123]: [+]process-running ok Dec 12 15:21:51 crc kubenswrapper[5123]: healthz check failed Dec 12 15:21:51 crc kubenswrapper[5123]: I1212 15:21:51.731657 5123 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-cnq9c" podUID="dd669a9c-af5d-4084-bda4-81a455d4c281" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 12 15:21:51 crc kubenswrapper[5123]: I1212 15:21:51.802467 5123 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-547dbd544d-rkcvb" event={"ID":"17ce8feb-99e5-42f3-a808-2dd39bc57377","Type":"ContainerStarted","Data":"e5594dc84936b908316bb447fdf41d5c1397abc4a9af95fdf3265d1a27d5fe15"} Dec 12 15:21:51 crc kubenswrapper[5123]: I1212 15:21:51.807069 5123 kubelet.go:2569] "SyncLoop (PLEG): 
event for pod" pod="openshift-ingress-canary/ingress-canary-hkjk6" event={"ID":"439fab76-d95a-43fc-b800-b540d053001d","Type":"ContainerStarted","Data":"1774841f61a972d2f80f11a076970acce7798532cf793dfaef0c0f3beb625b9e"}
Dec 12 15:21:51 crc kubenswrapper[5123]: I1212 15:21:51.811195 5123 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-866fcbc849-t68lp" event={"ID":"ae911826-fe03-4967-bdf1-f1eb5fc10ea4","Type":"ContainerStarted","Data":"0925c7922fa57299770d3ce08731b7fc689104a909dcc05c64eb34eed581ecf3"}
Dec 12 15:21:51 crc kubenswrapper[5123]: I1212 15:21:51.820875 5123 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-75ffdb6fcd-tmds4" event={"ID":"d9b2cf1e-7b13-44dc-8819-74f4bd24c609","Type":"ContainerStarted","Data":"3a0861ccc296a09385fe44776ecfdb0a648d3f07e70d2b8aab4b139cdf192200"}
Dec 12 15:21:51 crc kubenswrapper[5123]: I1212 15:21:51.826616 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-ts2mt\" (UID: \"40d826be-27f2-49a2-afa5-a6a9cc8f9bf6\") " pod="openshift-image-registry/image-registry-66587d64c8-ts2mt"
Dec 12 15:21:51 crc kubenswrapper[5123]: E1212 15:21:51.832719 5123 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-12 15:21:52.33269746 +0000 UTC m=+141.142649971 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-ts2mt" (UID: "40d826be-27f2-49a2-afa5-a6a9cc8f9bf6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 12 15:21:51 crc kubenswrapper[5123]: I1212 15:21:51.839890 5123 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-69db94689b-gg4kh" event={"ID":"b5bd3e23-721c-45a0-be10-620b5a281623","Type":"ContainerStarted","Data":"a41f74ae6f8783477d1348ed6fefef1bb35aacc6b149115cd93af54c60b6a005"}
Dec 12 15:21:51 crc kubenswrapper[5123]: I1212 15:21:51.851056 5123 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-pj4ts" event={"ID":"7b7460e4-e37e-4643-9956-8097d8258066","Type":"ContainerStarted","Data":"fcafd3939d9563ab5f9a4269f39dc96e6d7dcb604291568e195663372f135519"}
Dec 12 15:21:51 crc kubenswrapper[5123]: I1212 15:21:51.854914 5123 patch_prober.go:28] interesting pod/packageserver-7d4fc7d867-hznms container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.32:5443/healthz\": dial tcp 10.217.0.32:5443: connect: connection refused" start-of-body=
Dec 12 15:21:51 crc kubenswrapper[5123]: I1212 15:21:51.855027 5123 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-hznms" podUID="f4afdf33-53ee-4eeb-83a3-a5a0dc656922" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.32:5443/healthz\": dial tcp 10.217.0.32:5443: connect: connection refused"
Dec 12 15:21:51 crc kubenswrapper[5123]: I1212 15:21:51.856955 5123 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-pj4ts"
Dec 12 15:21:51 crc kubenswrapper[5123]: I1212 15:21:51.860991 5123 patch_prober.go:28] interesting pod/console-operator-67c89758df-vqqzf container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.11:8443/readyz\": dial tcp 10.217.0.11:8443: connect: connection refused" start-of-body=
Dec 12 15:21:51 crc kubenswrapper[5123]: I1212 15:21:51.861124 5123 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-67c89758df-vqqzf" podUID="9da0a55f-2526-45cc-b820-1b31ce63745c" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.11:8443/readyz\": dial tcp 10.217.0.11:8443: connect: connection refused"
Dec 12 15:21:51 crc kubenswrapper[5123]: I1212 15:21:51.865522 5123 patch_prober.go:28] interesting pod/olm-operator-5cdf44d969-pj4ts container/olm-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.28:8443/healthz\": dial tcp 10.217.0.28:8443: connect: connection refused" start-of-body=
Dec 12 15:21:51 crc kubenswrapper[5123]: I1212 15:21:51.865631 5123 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-pj4ts" podUID="7b7460e4-e37e-4643-9956-8097d8258066" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.28:8443/healthz\": dial tcp 10.217.0.28:8443: connect: connection refused"
Dec 12 15:21:51 crc kubenswrapper[5123]: I1212 15:21:51.869838 5123 patch_prober.go:28] interesting pod/packageserver-7d4fc7d867-hznms container/packageserver namespace/openshift-operator-lifecycle-manager: Liveness probe status=failure output="Get \"https://10.217.0.32:5443/healthz\": dial tcp 10.217.0.32:5443: connect: connection refused" start-of-body=
Dec 12 15:21:51 crc kubenswrapper[5123]: I1212 15:21:51.869928 5123 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-hznms" podUID="f4afdf33-53ee-4eeb-83a3-a5a0dc656922" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.32:5443/healthz\": dial tcp 10.217.0.32:5443: connect: connection refused"
Dec 12 15:21:52 crc kubenswrapper[5123]: I1212 15:21:52.454359 5123 patch_prober.go:28] interesting pod/olm-operator-5cdf44d969-pj4ts container/olm-operator namespace/openshift-operator-lifecycle-manager: Liveness probe status=failure output="Get \"https://10.217.0.28:8443/healthz\": dial tcp 10.217.0.28:8443: connect: connection refused" start-of-body=
Dec 12 15:21:52 crc kubenswrapper[5123]: I1212 15:21:52.454664 5123 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-pj4ts" podUID="7b7460e4-e37e-4643-9956-8097d8258066" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.28:8443/healthz\": dial tcp 10.217.0.28:8443: connect: connection refused"
Dec 12 15:21:52 crc kubenswrapper[5123]: I1212 15:21:52.454807 5123 patch_prober.go:28] interesting pod/catalog-operator-75ff9f647d-c6l4m container/catalog-operator namespace/openshift-operator-lifecycle-manager: Liveness probe status=failure output="Get \"https://10.217.0.40:8443/healthz\": dial tcp 10.217.0.40:8443: connect: connection refused" start-of-body=
Dec 12 15:21:52 crc kubenswrapper[5123]: I1212 15:21:52.454826 5123 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-c6l4m" podUID="6bf5e136-4d51-49ba-bb1f-3e4fd5c82154" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.40:8443/healthz\": dial tcp 10.217.0.40:8443: connect: connection refused"
Dec 12 15:21:52 crc kubenswrapper[5123]: I1212 15:21:52.454932 5123 patch_prober.go:28] interesting pod/catalog-operator-75ff9f647d-c6l4m container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.40:8443/healthz\": dial tcp 10.217.0.40:8443: connect: connection refused" start-of-body=
Dec 12 15:21:52 crc kubenswrapper[5123]: I1212 15:21:52.455068 5123 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-c6l4m" podUID="6bf5e136-4d51-49ba-bb1f-3e4fd5c82154" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.40:8443/healthz\": dial tcp 10.217.0.40:8443: connect: connection refused"
Dec 12 15:21:52 crc kubenswrapper[5123]: I1212 15:21:52.456814 5123 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Dec 12 15:21:52 crc kubenswrapper[5123]: E1212 15:21:52.457306 5123 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-12 15:21:53.457254926 +0000 UTC m=+142.267207467 (durationBeforeRetry 1s). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 12 15:21:52 crc kubenswrapper[5123]: I1212 15:21:52.462015 5123 patch_prober.go:28] interesting pod/downloads-747b44746d-xhd9t container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.21:8080/\": dial tcp 10.217.0.21:8080: connect: connection refused" start-of-body=
Dec 12 15:21:52 crc kubenswrapper[5123]: I1212 15:21:52.462409 5123 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-747b44746d-xhd9t" podUID="09107a60-87da-4e17-9cc0-6dce06396ab6" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.21:8080/\": dial tcp 10.217.0.21:8080: connect: connection refused"
Dec 12 15:21:52 crc kubenswrapper[5123]: E1212 15:21:52.491389 5123 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="98b4e7c2a0fa09f214b8cbc48f52ccee3a3c7806bdd076dfd394403ea4fe6e1f" cmd=["/bin/bash","-c","test -f /ready/ready"]
Dec 12 15:21:52 crc kubenswrapper[5123]: E1212 15:21:52.509635 5123 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="98b4e7c2a0fa09f214b8cbc48f52ccee3a3c7806bdd076dfd394403ea4fe6e1f" cmd=["/bin/bash","-c","test -f /ready/ready"]
Dec 12 15:21:52 crc kubenswrapper[5123]: E1212 15:21:52.518544 5123 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="98b4e7c2a0fa09f214b8cbc48f52ccee3a3c7806bdd076dfd394403ea4fe6e1f" cmd=["/bin/bash","-c","test -f /ready/ready"]
Dec 12 15:21:52 crc kubenswrapper[5123]: E1212 15:21:52.518651 5123 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openshift-multus/cni-sysctl-allowlist-ds-mdpg8" podUID="6eb483de-06e5-4975-b29a-7fd9bc7674a9" containerName="kube-multus-additional-cni-plugins" probeResult="unknown"
Dec 12 15:21:52 crc kubenswrapper[5123]: I1212 15:21:52.560810 5123 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Dec 12 15:21:52 crc kubenswrapper[5123]: E1212 15:21:52.563330 5123 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-12 15:21:53.063275427 +0000 UTC m=+141.873227948 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 12 15:21:52 crc kubenswrapper[5123]: I1212 15:21:52.662996 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-ts2mt\" (UID: \"40d826be-27f2-49a2-afa5-a6a9cc8f9bf6\") " pod="openshift-image-registry/image-registry-66587d64c8-ts2mt"
Dec 12 15:21:52 crc kubenswrapper[5123]: E1212 15:21:52.663473 5123 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-12 15:21:53.163455025 +0000 UTC m=+141.973407536 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-ts2mt" (UID: "40d826be-27f2-49a2-afa5-a6a9cc8f9bf6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 12 15:21:52 crc kubenswrapper[5123]: I1212 15:21:52.837465 5123 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Dec 12 15:21:52 crc kubenswrapper[5123]: E1212 15:21:52.837761 5123 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-12 15:21:53.337732592 +0000 UTC m=+142.147685103 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 12 15:21:52 crc kubenswrapper[5123]: I1212 15:21:52.944639 5123 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-cnq9c container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Dec 12 15:21:52 crc kubenswrapper[5123]: [-]has-synced failed: reason withheld
Dec 12 15:21:52 crc kubenswrapper[5123]: [+]process-running ok
Dec 12 15:21:52 crc kubenswrapper[5123]: healthz check failed
Dec 12 15:21:52 crc kubenswrapper[5123]: I1212 15:21:52.944740 5123 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-cnq9c" podUID="dd669a9c-af5d-4084-bda4-81a455d4c281" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Dec 12 15:21:52 crc kubenswrapper[5123]: I1212 15:21:52.946554 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-ts2mt\" (UID: \"40d826be-27f2-49a2-afa5-a6a9cc8f9bf6\") " pod="openshift-image-registry/image-registry-66587d64c8-ts2mt"
Dec 12 15:21:52 crc kubenswrapper[5123]: E1212 15:21:52.947543 5123 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-12 15:21:53.447516733 +0000 UTC m=+142.257469244 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-ts2mt" (UID: "40d826be-27f2-49a2-afa5-a6a9cc8f9bf6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 12 15:21:52 crc kubenswrapper[5123]: I1212 15:21:52.978787 5123 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-dtknn" event={"ID":"77c05f1e-26be-4120-9eb2-0637d83f86af","Type":"ContainerStarted","Data":"ea9cde2026aaafe0b7d4501f55613ebd7cd93b00912c33b05783a0cc611fc0ef"}
Dec 12 15:21:52 crc kubenswrapper[5123]: I1212 15:21:52.992423 5123 patch_prober.go:28] interesting pod/console-operator-67c89758df-vqqzf container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.11:8443/readyz\": dial tcp 10.217.0.11:8443: connect: connection refused" start-of-body=
Dec 12 15:21:52 crc kubenswrapper[5123]: I1212 15:21:52.992579 5123 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-67c89758df-vqqzf" podUID="9da0a55f-2526-45cc-b820-1b31ce63745c" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.11:8443/readyz\": dial tcp 10.217.0.11:8443: connect: connection refused"
Dec 12 15:21:52 crc kubenswrapper[5123]: I1212 15:21:52.993421 5123 patch_prober.go:28] interesting pod/packageserver-7d4fc7d867-hznms container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.32:5443/healthz\": dial tcp 10.217.0.32:5443: connect: connection refused" start-of-body=
Dec 12 15:21:52 crc kubenswrapper[5123]: I1212 15:21:52.993477 5123 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-hznms" podUID="f4afdf33-53ee-4eeb-83a3-a5a0dc656922" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.32:5443/healthz\": dial tcp 10.217.0.32:5443: connect: connection refused"
Dec 12 15:21:52 crc kubenswrapper[5123]: I1212 15:21:52.993558 5123 patch_prober.go:28] interesting pod/marketplace-operator-547dbd544d-rkcvb container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.36:8080/healthz\": dial tcp 10.217.0.36:8080: connect: connection refused" start-of-body=
Dec 12 15:21:52 crc kubenswrapper[5123]: I1212 15:21:52.993590 5123 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-547dbd544d-rkcvb" podUID="17ce8feb-99e5-42f3-a808-2dd39bc57377" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.36:8080/healthz\": dial tcp 10.217.0.36:8080: connect: connection refused"
Dec 12 15:21:52 crc kubenswrapper[5123]: I1212 15:21:52.994901 5123 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/marketplace-operator-547dbd544d-rkcvb"
Dec 12 15:21:52 crc kubenswrapper[5123]: I1212 15:21:52.998767 5123 patch_prober.go:28] interesting pod/olm-operator-5cdf44d969-pj4ts container/olm-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.28:8443/healthz\": dial tcp 10.217.0.28:8443: connect: connection refused" start-of-body=
Dec 12 15:21:52 crc kubenswrapper[5123]: I1212 15:21:52.998848 5123 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-pj4ts" podUID="7b7460e4-e37e-4643-9956-8097d8258066" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.28:8443/healthz\": dial tcp 10.217.0.28:8443: connect: connection refused"
Dec 12 15:21:53 crc kubenswrapper[5123]: I1212 15:21:53.048205 5123 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Dec 12 15:21:53 crc kubenswrapper[5123]: E1212 15:21:53.048603 5123 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-12 15:21:53.548573639 +0000 UTC m=+142.358526170 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 12 15:21:53 crc kubenswrapper[5123]: I1212 15:21:53.151189 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-ts2mt\" (UID: \"40d826be-27f2-49a2-afa5-a6a9cc8f9bf6\") " pod="openshift-image-registry/image-registry-66587d64c8-ts2mt"
Dec 12 15:21:53 crc kubenswrapper[5123]: E1212 15:21:53.160463 5123 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-12 15:21:53.660426804 +0000 UTC m=+142.470379315 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-ts2mt" (UID: "40d826be-27f2-49a2-afa5-a6a9cc8f9bf6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 12 15:21:53 crc kubenswrapper[5123]: I1212 15:21:53.794178 5123 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Dec 12 15:21:53 crc kubenswrapper[5123]: E1212 15:21:53.798570 5123 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-12 15:21:54.798520088 +0000 UTC m=+143.608472609 (durationBeforeRetry 1s). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 12 15:21:54 crc kubenswrapper[5123]: I1212 15:21:53.978331 5123 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Dec 12 15:21:54 crc kubenswrapper[5123]: E1212 15:21:53.978886 5123 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-12 15:21:54.478859864 +0000 UTC m=+143.288812375 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 12 15:21:54 crc kubenswrapper[5123]: I1212 15:21:53.993816 5123 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-cnq9c container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Dec 12 15:21:54 crc kubenswrapper[5123]: [-]has-synced failed: reason withheld
Dec 12 15:21:54 crc kubenswrapper[5123]: [+]process-running ok
Dec 12 15:21:54 crc kubenswrapper[5123]: healthz check failed
Dec 12 15:21:54 crc kubenswrapper[5123]: I1212 15:21:53.993905 5123 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-cnq9c" podUID="dd669a9c-af5d-4084-bda4-81a455d4c281" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Dec 12 15:21:54 crc kubenswrapper[5123]: I1212 15:21:54.370607 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-ts2mt\" (UID: \"40d826be-27f2-49a2-afa5-a6a9cc8f9bf6\") " pod="openshift-image-registry/image-registry-66587d64c8-ts2mt"
Dec 12 15:21:54 crc kubenswrapper[5123]: E1212 15:21:54.371133 5123 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-12 15:21:54.871110391 +0000 UTC m=+143.681062902 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-ts2mt" (UID: "40d826be-27f2-49a2-afa5-a6a9cc8f9bf6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 12 15:21:54 crc kubenswrapper[5123]: I1212 15:21:54.574343 5123 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Dec 12 15:21:54 crc kubenswrapper[5123]: E1212 15:21:54.574901 5123 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-12 15:21:55.074871144 +0000 UTC m=+143.884823655 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 12 15:21:54 crc kubenswrapper[5123]: I1212 15:21:54.676295 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-ts2mt\" (UID: \"40d826be-27f2-49a2-afa5-a6a9cc8f9bf6\") " pod="openshift-image-registry/image-registry-66587d64c8-ts2mt"
Dec 12 15:21:54 crc kubenswrapper[5123]: E1212 15:21:54.676790 5123 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-12 15:21:55.176769607 +0000 UTC m=+143.986722118 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-ts2mt" (UID: "40d826be-27f2-49a2-afa5-a6a9cc8f9bf6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 12 15:21:54 crc kubenswrapper[5123]: I1212 15:21:54.702579 5123 patch_prober.go:28] interesting pod/marketplace-operator-547dbd544d-rkcvb container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.36:8080/healthz\": dial tcp 10.217.0.36:8080: connect: connection refused" start-of-body=
Dec 12 15:21:54 crc kubenswrapper[5123]: I1212 15:21:54.703151 5123 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-547dbd544d-rkcvb" podUID="17ce8feb-99e5-42f3-a808-2dd39bc57377" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.36:8080/healthz\": dial tcp 10.217.0.36:8080: connect: connection refused"
Dec 12 15:21:54 crc kubenswrapper[5123]: I1212 15:21:54.702740 5123 patch_prober.go:28] interesting pod/olm-operator-5cdf44d969-pj4ts container/olm-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.28:8443/healthz\": dial tcp 10.217.0.28:8443: connect: connection refused" start-of-body=
Dec 12 15:21:54 crc kubenswrapper[5123]: I1212 15:21:54.703457 5123 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-pj4ts" podUID="7b7460e4-e37e-4643-9956-8097d8258066" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.28:8443/healthz\": dial tcp 10.217.0.28:8443: connect: connection refused"
Dec 12 15:21:54 crc kubenswrapper[5123]: I1212 15:21:54.756003 5123 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-cnq9c container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Dec 12 15:21:54 crc kubenswrapper[5123]: [-]has-synced failed: reason withheld
Dec 12 15:21:54 crc kubenswrapper[5123]: [+]process-running ok
Dec 12 15:21:54 crc kubenswrapper[5123]: healthz check failed
Dec 12 15:21:54 crc kubenswrapper[5123]: I1212 15:21:54.756087 5123 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-cnq9c" podUID="dd669a9c-af5d-4084-bda4-81a455d4c281" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Dec 12 15:21:54 crc kubenswrapper[5123]: E1212 15:21:54.784244 5123 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-12 15:21:55.284202774 +0000 UTC m=+144.094155285 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 12 15:21:54 crc kubenswrapper[5123]: I1212 15:21:54.784093 5123 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Dec 12 15:21:54 crc kubenswrapper[5123]: I1212 15:21:54.784588 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-ts2mt\" (UID: \"40d826be-27f2-49a2-afa5-a6a9cc8f9bf6\") " pod="openshift-image-registry/image-registry-66587d64c8-ts2mt"
Dec 12 15:21:54 crc kubenswrapper[5123]: E1212 15:21:54.785986 5123 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-12 15:21:55.285973139 +0000 UTC m=+144.095925650 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-ts2mt" (UID: "40d826be-27f2-49a2-afa5-a6a9cc8f9bf6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 12 15:21:55 crc kubenswrapper[5123]: I1212 15:21:55.248845 5123 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Dec 12 15:21:55 crc kubenswrapper[5123]: E1212 15:21:55.249276 5123 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-12 15:21:55.749247889 +0000 UTC m=+144.559200400 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 12 15:21:55 crc kubenswrapper[5123]: I1212 15:21:55.411308 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-ts2mt\" (UID: \"40d826be-27f2-49a2-afa5-a6a9cc8f9bf6\") " pod="openshift-image-registry/image-registry-66587d64c8-ts2mt"
Dec 12 15:21:55 crc kubenswrapper[5123]: E1212 15:21:55.411703 5123 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-12 15:21:55.911683884 +0000 UTC m=+144.721636395 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-ts2mt" (UID: "40d826be-27f2-49a2-afa5-a6a9cc8f9bf6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 12 15:21:55 crc kubenswrapper[5123]: I1212 15:21:55.908442 5123 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Dec 12 15:21:55 crc kubenswrapper[5123]: E1212 15:21:55.909159 5123 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-12 15:21:56.409133786 +0000 UTC m=+145.219086297 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 15:21:55 crc kubenswrapper[5123]: I1212 15:21:55.933298 5123 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-cnq9c container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 12 15:21:55 crc kubenswrapper[5123]: [-]has-synced failed: reason withheld Dec 12 15:21:55 crc kubenswrapper[5123]: [+]process-running ok Dec 12 15:21:55 crc kubenswrapper[5123]: healthz check failed Dec 12 15:21:55 crc kubenswrapper[5123]: I1212 15:21:55.933403 5123 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-cnq9c" podUID="dd669a9c-af5d-4084-bda4-81a455d4c281" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 12 15:21:55 crc kubenswrapper[5123]: I1212 15:21:55.959019 5123 ???:1] "http: TLS handshake error from 192.168.126.11:55168: no serving certificate available for the kubelet" Dec 12 15:21:55 crc kubenswrapper[5123]: I1212 15:21:55.999082 5123 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 12 15:21:56 crc kubenswrapper[5123]: I1212 15:21:56.011933 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-ts2mt\" (UID: 
\"40d826be-27f2-49a2-afa5-a6a9cc8f9bf6\") " pod="openshift-image-registry/image-registry-66587d64c8-ts2mt" Dec 12 15:21:56 crc kubenswrapper[5123]: E1212 15:21:56.012512 5123 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-12 15:21:56.512492174 +0000 UTC m=+145.322444685 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-ts2mt" (UID: "40d826be-27f2-49a2-afa5-a6a9cc8f9bf6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 15:21:56 crc kubenswrapper[5123]: I1212 15:21:56.111390 5123 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-69b85846b6-7pgks" event={"ID":"19fca7bf-f8d6-4e7c-b54d-e98292eb7efd","Type":"ContainerStarted","Data":"6f1b121268d1998eb6a354c8597f26570797558658022a0ab08ce66de5254747"} Dec 12 15:21:56 crc kubenswrapper[5123]: I1212 15:21:56.112736 5123 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 12 15:21:56 crc kubenswrapper[5123]: E1212 15:21:56.113074 5123 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. 
No retries permitted until 2025-12-12 15:21:56.613050225 +0000 UTC m=+145.423002746 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 15:21:56 crc kubenswrapper[5123]: I1212 15:21:56.130835 5123 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-dtw8v" event={"ID":"a920b381-c5d3-4a28-92dc-c092a8ffeb69","Type":"ContainerStarted","Data":"ad8f9dfaf535fdb12b1da84374536d40f5930460ac8919c8100264792f45252e"} Dec 12 15:21:56 crc kubenswrapper[5123]: I1212 15:21:56.215650 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-ts2mt\" (UID: \"40d826be-27f2-49a2-afa5-a6a9cc8f9bf6\") " pod="openshift-image-registry/image-registry-66587d64c8-ts2mt" Dec 12 15:21:56 crc kubenswrapper[5123]: E1212 15:21:56.218961 5123 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-12 15:21:56.718825278 +0000 UTC m=+145.528777789 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-ts2mt" (UID: "40d826be-27f2-49a2-afa5-a6a9cc8f9bf6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 15:21:56 crc kubenswrapper[5123]: I1212 15:21:56.337705 5123 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 12 15:21:56 crc kubenswrapper[5123]: E1212 15:21:56.338157 5123 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-12 15:21:56.838128788 +0000 UTC m=+145.648081299 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 15:21:56 crc kubenswrapper[5123]: I1212 15:21:56.340047 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-ts2mt\" (UID: \"40d826be-27f2-49a2-afa5-a6a9cc8f9bf6\") " pod="openshift-image-registry/image-registry-66587d64c8-ts2mt" Dec 12 15:21:56 crc kubenswrapper[5123]: E1212 15:21:56.342273 5123 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-12 15:21:56.842248147 +0000 UTC m=+145.652200658 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-ts2mt" (UID: "40d826be-27f2-49a2-afa5-a6a9cc8f9bf6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 15:21:56 crc kubenswrapper[5123]: I1212 15:21:56.441926 5123 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 12 15:21:56 crc kubenswrapper[5123]: E1212 15:21:56.442921 5123 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-12 15:21:56.94289086 +0000 UTC m=+145.752843371 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 15:21:56 crc kubenswrapper[5123]: I1212 15:21:56.546101 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-ts2mt\" (UID: \"40d826be-27f2-49a2-afa5-a6a9cc8f9bf6\") " pod="openshift-image-registry/image-registry-66587d64c8-ts2mt" Dec 12 15:21:56 crc kubenswrapper[5123]: E1212 15:21:56.546569 5123 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-12 15:21:57.046554208 +0000 UTC m=+145.856506719 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-ts2mt" (UID: "40d826be-27f2-49a2-afa5-a6a9cc8f9bf6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 15:21:56 crc kubenswrapper[5123]: I1212 15:21:56.665885 5123 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 12 15:21:56 crc kubenswrapper[5123]: E1212 15:21:56.666331 5123 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-12 15:21:57.16627822 +0000 UTC m=+145.976230731 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 15:21:56 crc kubenswrapper[5123]: I1212 15:21:56.698950 5123 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-cnq9c container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 12 15:21:56 crc kubenswrapper[5123]: [-]has-synced failed: reason withheld Dec 12 15:21:56 crc kubenswrapper[5123]: [+]process-running ok Dec 12 15:21:56 crc kubenswrapper[5123]: healthz check failed Dec 12 15:21:56 crc kubenswrapper[5123]: I1212 15:21:56.699115 5123 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-cnq9c" podUID="dd669a9c-af5d-4084-bda4-81a455d4c281" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 12 15:21:56 crc kubenswrapper[5123]: I1212 15:21:56.768861 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-ts2mt\" (UID: \"40d826be-27f2-49a2-afa5-a6a9cc8f9bf6\") " pod="openshift-image-registry/image-registry-66587d64c8-ts2mt" Dec 12 15:21:56 crc kubenswrapper[5123]: E1212 15:21:56.769353 5123 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. 
No retries permitted until 2025-12-12 15:21:57.26933343 +0000 UTC m=+146.079285941 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-ts2mt" (UID: "40d826be-27f2-49a2-afa5-a6a9cc8f9bf6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 15:21:56 crc kubenswrapper[5123]: I1212 15:21:56.798989 5123 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29425875-lxsft" Dec 12 15:21:56 crc kubenswrapper[5123]: I1212 15:21:56.836159 5123 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd-operator/etcd-operator-69b85846b6-7pgks" podStartSLOduration=112.836127699 podStartE2EDuration="1m52.836127699s" podCreationTimestamp="2025-12-12 15:20:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 15:21:56.826571748 +0000 UTC m=+145.636524259" watchObservedRunningTime="2025-12-12 15:21:56.836127699 +0000 UTC m=+145.646080210" Dec 12 15:21:57 crc kubenswrapper[5123]: I1212 15:21:57.266323 5123 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c1681a2f-153f-44c0-901e-e85b401d30ee-config-volume\") pod \"c1681a2f-153f-44c0-901e-e85b401d30ee\" (UID: \"c1681a2f-153f-44c0-901e-e85b401d30ee\") " Dec 12 15:21:57 crc kubenswrapper[5123]: I1212 15:21:57.266457 5123 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rrnwb\" (UniqueName: 
\"kubernetes.io/projected/c1681a2f-153f-44c0-901e-e85b401d30ee-kube-api-access-rrnwb\") pod \"c1681a2f-153f-44c0-901e-e85b401d30ee\" (UID: \"c1681a2f-153f-44c0-901e-e85b401d30ee\") " Dec 12 15:21:57 crc kubenswrapper[5123]: I1212 15:21:57.266614 5123 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/c1681a2f-153f-44c0-901e-e85b401d30ee-secret-volume\") pod \"c1681a2f-153f-44c0-901e-e85b401d30ee\" (UID: \"c1681a2f-153f-44c0-901e-e85b401d30ee\") " Dec 12 15:21:57 crc kubenswrapper[5123]: I1212 15:21:57.266849 5123 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 12 15:21:57 crc kubenswrapper[5123]: E1212 15:21:57.267188 5123 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-12 15:21:57.767158594 +0000 UTC m=+146.577111105 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 15:21:57 crc kubenswrapper[5123]: I1212 15:21:57.277929 5123 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c1681a2f-153f-44c0-901e-e85b401d30ee-config-volume" (OuterVolumeSpecName: "config-volume") pod "c1681a2f-153f-44c0-901e-e85b401d30ee" (UID: "c1681a2f-153f-44c0-901e-e85b401d30ee"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 12 15:21:57 crc kubenswrapper[5123]: I1212 15:21:57.374584 5123 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c1681a2f-153f-44c0-901e-e85b401d30ee-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "c1681a2f-153f-44c0-901e-e85b401d30ee" (UID: "c1681a2f-153f-44c0-901e-e85b401d30ee"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 12 15:21:57 crc kubenswrapper[5123]: I1212 15:21:57.375420 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-ts2mt\" (UID: \"40d826be-27f2-49a2-afa5-a6a9cc8f9bf6\") " pod="openshift-image-registry/image-registry-66587d64c8-ts2mt" Dec 12 15:21:57 crc kubenswrapper[5123]: I1212 15:21:57.375512 5123 reconciler_common.go:299] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c1681a2f-153f-44c0-901e-e85b401d30ee-config-volume\") on node \"crc\" DevicePath \"\"" Dec 12 15:21:57 crc kubenswrapper[5123]: I1212 15:21:57.375528 5123 reconciler_common.go:299] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/c1681a2f-153f-44c0-901e-e85b401d30ee-secret-volume\") on node \"crc\" DevicePath \"\"" Dec 12 15:21:57 crc kubenswrapper[5123]: E1212 15:21:57.375857 5123 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-12 15:21:57.87584228 +0000 UTC m=+146.685794791 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-ts2mt" (UID: "40d826be-27f2-49a2-afa5-a6a9cc8f9bf6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 15:21:57 crc kubenswrapper[5123]: I1212 15:21:57.376534 5123 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c1681a2f-153f-44c0-901e-e85b401d30ee-kube-api-access-rrnwb" (OuterVolumeSpecName: "kube-api-access-rrnwb") pod "c1681a2f-153f-44c0-901e-e85b401d30ee" (UID: "c1681a2f-153f-44c0-901e-e85b401d30ee"). InnerVolumeSpecName "kube-api-access-rrnwb". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 12 15:21:57 crc kubenswrapper[5123]: I1212 15:21:57.391497 5123 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-pj4ts" podStartSLOduration=113.391471231 podStartE2EDuration="1m53.391471231s" podCreationTimestamp="2025-12-12 15:20:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 15:21:57.375424657 +0000 UTC m=+146.185377368" watchObservedRunningTime="2025-12-12 15:21:57.391471231 +0000 UTC m=+146.201423742" Dec 12 15:21:57 crc kubenswrapper[5123]: I1212 15:21:57.477932 5123 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 12 15:21:57 crc kubenswrapper[5123]: I1212 15:21:57.478876 5123 
reconciler_common.go:299] "Volume detached for volume \"kube-api-access-rrnwb\" (UniqueName: \"kubernetes.io/projected/c1681a2f-153f-44c0-901e-e85b401d30ee-kube-api-access-rrnwb\") on node \"crc\" DevicePath \"\"" Dec 12 15:21:57 crc kubenswrapper[5123]: E1212 15:21:57.479070 5123 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-12 15:21:57.979027803 +0000 UTC m=+146.788980314 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 15:21:57 crc kubenswrapper[5123]: I1212 15:21:57.602051 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-ts2mt\" (UID: \"40d826be-27f2-49a2-afa5-a6a9cc8f9bf6\") " pod="openshift-image-registry/image-registry-66587d64c8-ts2mt" Dec 12 15:21:57 crc kubenswrapper[5123]: E1212 15:21:57.602774 5123 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-12 15:21:58.10272674 +0000 UTC m=+146.912679251 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-ts2mt" (UID: "40d826be-27f2-49a2-afa5-a6a9cc8f9bf6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 15:21:57 crc kubenswrapper[5123]: I1212 15:21:57.763568 5123 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 12 15:21:57 crc kubenswrapper[5123]: E1212 15:21:57.764015 5123 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-12 15:21:58.263980737 +0000 UTC m=+147.073933248 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 15:21:57 crc kubenswrapper[5123]: I1212 15:21:57.839514 5123 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-cnq9c container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 12 15:21:57 crc kubenswrapper[5123]: [-]has-synced failed: reason withheld Dec 12 15:21:57 crc kubenswrapper[5123]: [+]process-running ok Dec 12 15:21:57 crc kubenswrapper[5123]: healthz check failed Dec 12 15:21:57 crc kubenswrapper[5123]: I1212 15:21:57.839600 5123 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-cnq9c" podUID="dd669a9c-af5d-4084-bda4-81a455d4c281" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 12 15:21:57 crc kubenswrapper[5123]: I1212 15:21:57.864994 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-ts2mt\" (UID: \"40d826be-27f2-49a2-afa5-a6a9cc8f9bf6\") " pod="openshift-image-registry/image-registry-66587d64c8-ts2mt" Dec 12 15:21:57 crc kubenswrapper[5123]: E1212 15:21:57.865579 5123 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. 
No retries permitted until 2025-12-12 15:21:58.36555153 +0000 UTC m=+147.175504041 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-ts2mt" (UID: "40d826be-27f2-49a2-afa5-a6a9cc8f9bf6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 15:21:57 crc kubenswrapper[5123]: I1212 15:21:57.868474 5123 generic.go:358] "Generic (PLEG): container finished" podID="735555bc-661a-4a48-a615-c88944194992" containerID="d5a9349859054f9dc9d1d5cb2cec97bdb4f34ca46bb54093b81acc4744e28a39" exitCode=0 Dec 12 15:21:57 crc kubenswrapper[5123]: I1212 15:21:57.868604 5123 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-8596bd845d-qvmj6" event={"ID":"735555bc-661a-4a48-a615-c88944194992","Type":"ContainerDied","Data":"d5a9349859054f9dc9d1d5cb2cec97bdb4f34ca46bb54093b81acc4744e28a39"} Dec 12 15:21:57 crc kubenswrapper[5123]: I1212 15:21:57.968579 5123 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 12 15:21:57 crc kubenswrapper[5123]: E1212 15:21:57.970450 5123 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-12 15:21:58.470416265 +0000 UTC m=+147.280368776 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 15:21:58 crc kubenswrapper[5123]: I1212 15:21:57.997806 5123 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-canary/ingress-canary-hkjk6" podStartSLOduration=31.997772085 podStartE2EDuration="31.997772085s" podCreationTimestamp="2025-12-12 15:21:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 15:21:57.880562592 +0000 UTC m=+146.690515123" watchObservedRunningTime="2025-12-12 15:21:57.997772085 +0000 UTC m=+146.807724596" Dec 12 15:21:58 crc kubenswrapper[5123]: I1212 15:21:58.022454 5123 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-dtknn" podStartSLOduration=114.02243026 podStartE2EDuration="1m54.02243026s" podCreationTimestamp="2025-12-12 15:20:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 15:21:58.021272973 +0000 UTC m=+146.831225504" watchObservedRunningTime="2025-12-12 15:21:58.02243026 +0000 UTC m=+146.832382771" Dec 12 15:21:58 crc kubenswrapper[5123]: I1212 15:21:58.065571 5123 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-547dbd544d-rkcvb" podStartSLOduration=114.065541495 podStartE2EDuration="1m54.065541495s" podCreationTimestamp="2025-12-12 15:20:04 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 15:21:58.057942625 +0000 UTC m=+146.867895156" watchObservedRunningTime="2025-12-12 15:21:58.065541495 +0000 UTC m=+146.875494006" Dec 12 15:21:58 crc kubenswrapper[5123]: I1212 15:21:58.070998 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-ts2mt\" (UID: \"40d826be-27f2-49a2-afa5-a6a9cc8f9bf6\") " pod="openshift-image-registry/image-registry-66587d64c8-ts2mt" Dec 12 15:21:58 crc kubenswrapper[5123]: E1212 15:21:58.071766 5123 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-12 15:21:58.571745569 +0000 UTC m=+147.381698090 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-ts2mt" (UID: "40d826be-27f2-49a2-afa5-a6a9cc8f9bf6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 15:21:58 crc kubenswrapper[5123]: I1212 15:21:58.073955 5123 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29425875-lxsft" event={"ID":"c1681a2f-153f-44c0-901e-e85b401d30ee","Type":"ContainerDied","Data":"b5bc3f05fcb1923b781ed66d910c11a30a8f551e09130d4b1a72be67fcfd51e5"} Dec 12 15:21:58 crc kubenswrapper[5123]: I1212 15:21:58.074145 5123 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b5bc3f05fcb1923b781ed66d910c11a30a8f551e09130d4b1a72be67fcfd51e5" Dec 12 15:21:58 crc kubenswrapper[5123]: I1212 15:21:58.082462 5123 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29425875-lxsft" Dec 12 15:21:58 crc kubenswrapper[5123]: I1212 15:21:58.109996 5123 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-lx5l5" event={"ID":"286dff49-96d3-4c06-aa40-a4168098880e","Type":"ContainerStarted","Data":"703f5b177a946bc192ce9daf32b982400f4905156d2994c5a7858415bb6678cc"} Dec 12 15:21:58 crc kubenswrapper[5123]: I1212 15:21:58.110106 5123 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/control-plane-machine-set-operator-75ffdb6fcd-tmds4" podStartSLOduration=114.110015832 podStartE2EDuration="1m54.110015832s" podCreationTimestamp="2025-12-12 15:20:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 15:21:58.108395271 +0000 UTC m=+146.918347802" watchObservedRunningTime="2025-12-12 15:21:58.110015832 +0000 UTC m=+146.919968373" Dec 12 15:21:58 crc kubenswrapper[5123]: I1212 15:21:58.194877 5123 generic.go:358] "Generic (PLEG): container finished" podID="e077c741-1ed0-4ffa-80a7-6ce54aab5fe0" containerID="4308cec8f444f7ea1b805f58237fe97c4130aa351a408e2c0a2dff8ffe5857fb" exitCode=0 Dec 12 15:21:58 crc kubenswrapper[5123]: I1212 15:21:58.195328 5123 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-9ddfb9f55-bmckw" event={"ID":"e077c741-1ed0-4ffa-80a7-6ce54aab5fe0","Type":"ContainerDied","Data":"4308cec8f444f7ea1b805f58237fe97c4130aa351a408e2c0a2dff8ffe5857fb"} Dec 12 15:21:58 crc kubenswrapper[5123]: I1212 15:21:58.196250 5123 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: 
\"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 12 15:21:58 crc kubenswrapper[5123]: I1212 15:21:58.196276 5123 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-dtw8v" podStartSLOduration=114.196245662 podStartE2EDuration="1m54.196245662s" podCreationTimestamp="2025-12-12 15:20:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 15:21:58.144766284 +0000 UTC m=+146.954718795" watchObservedRunningTime="2025-12-12 15:21:58.196245662 +0000 UTC m=+147.006198173" Dec 12 15:21:58 crc kubenswrapper[5123]: E1212 15:21:58.196747 5123 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-12 15:21:58.696722077 +0000 UTC m=+147.506674588 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 15:21:58 crc kubenswrapper[5123]: I1212 15:21:58.210871 5123 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-74545575db-qpxdh" event={"ID":"9d4713bf-88da-43eb-8dd8-2808e76b53c4","Type":"ContainerStarted","Data":"83ff81de63ee51b376c82e048226a4df79bb47929506256b460e20811f478307"} Dec 12 15:21:58 crc kubenswrapper[5123]: I1212 15:21:58.402607 5123 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-9qnbt" event={"ID":"12e31d4b-fe5c-4f42-82f2-75389d8a34d6","Type":"ContainerStarted","Data":"ce6123ede18e2236d022469a70f35a1d2a51777586449d53ce6c8d0fc908f608"} Dec 12 15:21:58 crc kubenswrapper[5123]: I1212 15:21:58.425774 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-ts2mt\" (UID: \"40d826be-27f2-49a2-afa5-a6a9cc8f9bf6\") " pod="openshift-image-registry/image-registry-66587d64c8-ts2mt" Dec 12 15:21:58 crc kubenswrapper[5123]: E1212 15:21:58.428822 5123 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-12 15:21:58.92879373 +0000 UTC m=+147.738746251 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-ts2mt" (UID: "40d826be-27f2-49a2-afa5-a6a9cc8f9bf6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 15:21:58 crc kubenswrapper[5123]: I1212 15:21:58.466879 5123 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-76wqd" event={"ID":"ea13a1f7-48ed-40f9-b5d0-040f13d8f90e","Type":"ContainerStarted","Data":"f51300007403d19caaa888a54ecc0aed8896e56c9aa33a28e4ef2d29330d99c8"} Dec 12 15:21:58 crc kubenswrapper[5123]: I1212 15:21:58.527960 5123 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 12 15:21:58 crc kubenswrapper[5123]: E1212 15:21:58.528463 5123 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-12 15:21:59.028432771 +0000 UTC m=+147.838385282 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 15:21:58 crc kubenswrapper[5123]: I1212 15:21:58.743396 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-ts2mt\" (UID: \"40d826be-27f2-49a2-afa5-a6a9cc8f9bf6\") " pod="openshift-image-registry/image-registry-66587d64c8-ts2mt" Dec 12 15:21:58 crc kubenswrapper[5123]: E1212 15:21:58.745265 5123 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-12 15:21:59.245245116 +0000 UTC m=+148.055197627 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-ts2mt" (UID: "40d826be-27f2-49a2-afa5-a6a9cc8f9bf6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 15:21:58 crc kubenswrapper[5123]: I1212 15:21:58.754368 5123 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-cnq9c container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 12 15:21:58 crc kubenswrapper[5123]: [-]has-synced failed: reason withheld Dec 12 15:21:58 crc kubenswrapper[5123]: [+]process-running ok Dec 12 15:21:58 crc kubenswrapper[5123]: healthz check failed Dec 12 15:21:58 crc kubenswrapper[5123]: I1212 15:21:58.754461 5123 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-cnq9c" podUID="dd669a9c-af5d-4084-bda4-81a455d4c281" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 12 15:21:58 crc kubenswrapper[5123]: I1212 15:21:58.833337 5123 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-lx5l5" podStartSLOduration=114.833311202 podStartE2EDuration="1m54.833311202s" podCreationTimestamp="2025-12-12 15:20:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 15:21:58.818068764 +0000 UTC m=+147.628021275" watchObservedRunningTime="2025-12-12 15:21:58.833311202 +0000 UTC m=+147.643263713" Dec 12 15:21:58 crc kubenswrapper[5123]: I1212 15:21:58.834413 5123 pod_startup_latency_tracker.go:104] 
"Observed pod startup duration" pod="openshift-service-ca/service-ca-74545575db-qpxdh" podStartSLOduration=114.834398837 podStartE2EDuration="1m54.834398837s" podCreationTimestamp="2025-12-12 15:20:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 15:21:58.741294681 +0000 UTC m=+147.551247212" watchObservedRunningTime="2025-12-12 15:21:58.834398837 +0000 UTC m=+147.644351358" Dec 12 15:21:58 crc kubenswrapper[5123]: I1212 15:21:58.856426 5123 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 12 15:21:58 crc kubenswrapper[5123]: E1212 15:21:58.856848 5123 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-12 15:21:59.356821232 +0000 UTC m=+148.166773743 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 15:21:58 crc kubenswrapper[5123]: I1212 15:21:58.958276 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-ts2mt\" (UID: \"40d826be-27f2-49a2-afa5-a6a9cc8f9bf6\") " pod="openshift-image-registry/image-registry-66587d64c8-ts2mt" Dec 12 15:21:58 crc kubenswrapper[5123]: E1212 15:21:58.959610 5123 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-12 15:21:59.459589951 +0000 UTC m=+148.269542462 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-ts2mt" (UID: "40d826be-27f2-49a2-afa5-a6a9cc8f9bf6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 15:21:59 crc kubenswrapper[5123]: I1212 15:21:59.060053 5123 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 12 15:21:59 crc kubenswrapper[5123]: E1212 15:21:59.060382 5123 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-12 15:21:59.560332387 +0000 UTC m=+148.370284898 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 15:21:59 crc kubenswrapper[5123]: I1212 15:21:59.060831 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-ts2mt\" (UID: \"40d826be-27f2-49a2-afa5-a6a9cc8f9bf6\") " pod="openshift-image-registry/image-registry-66587d64c8-ts2mt" Dec 12 15:21:59 crc kubenswrapper[5123]: E1212 15:21:59.061443 5123 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-12 15:21:59.561432932 +0000 UTC m=+148.371385443 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-ts2mt" (UID: "40d826be-27f2-49a2-afa5-a6a9cc8f9bf6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 15:21:59 crc kubenswrapper[5123]: I1212 15:21:59.209638 5123 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 12 15:21:59 crc kubenswrapper[5123]: E1212 15:21:59.209915 5123 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-12 15:21:59.709891198 +0000 UTC m=+148.519843709 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 15:21:59 crc kubenswrapper[5123]: I1212 15:21:59.243491 5123 patch_prober.go:28] interesting pod/downloads-747b44746d-xhd9t container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.21:8080/\": dial tcp 10.217.0.21:8080: connect: connection refused" start-of-body= Dec 12 15:21:59 crc kubenswrapper[5123]: I1212 15:21:59.243570 5123 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-747b44746d-xhd9t" podUID="09107a60-87da-4e17-9cc0-6dce06396ab6" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.21:8080/\": dial tcp 10.217.0.21:8080: connect: connection refused" Dec 12 15:21:59 crc kubenswrapper[5123]: I1212 15:21:59.393891 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-ts2mt\" (UID: \"40d826be-27f2-49a2-afa5-a6a9cc8f9bf6\") " pod="openshift-image-registry/image-registry-66587d64c8-ts2mt" Dec 12 15:21:59 crc kubenswrapper[5123]: I1212 15:21:59.394160 5123 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-76wqd" podStartSLOduration=115.394130557 podStartE2EDuration="1m55.394130557s" podCreationTimestamp="2025-12-12 15:20:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" 
lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 15:21:58.948760811 +0000 UTC m=+147.758713322" watchObservedRunningTime="2025-12-12 15:21:59.394130557 +0000 UTC m=+148.204083068" Dec 12 15:21:59 crc kubenswrapper[5123]: E1212 15:21:59.394433 5123 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-12 15:21:59.894409756 +0000 UTC m=+148.704362267 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-ts2mt" (UID: "40d826be-27f2-49a2-afa5-a6a9cc8f9bf6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 15:21:59 crc kubenswrapper[5123]: I1212 15:21:59.495327 5123 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 12 15:21:59 crc kubenswrapper[5123]: E1212 15:21:59.495627 5123 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-12 15:21:59.995600057 +0000 UTC m=+148.805552568 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 15:21:59 crc kubenswrapper[5123]: I1212 15:21:59.592975 5123 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" event={"ID":"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141","Type":"ContainerStarted","Data":"329d62f6c7887e9dd6bef53c4b2648f540b08b158e5d9f1b243b304e320d0052"} Dec 12 15:21:59 crc kubenswrapper[5123]: I1212 15:21:59.600198 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-ts2mt\" (UID: \"40d826be-27f2-49a2-afa5-a6a9cc8f9bf6\") " pod="openshift-image-registry/image-registry-66587d64c8-ts2mt" Dec 12 15:21:59 crc kubenswrapper[5123]: E1212 15:21:59.600638 5123 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-12 15:22:00.100621287 +0000 UTC m=+148.910573798 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-ts2mt" (UID: "40d826be-27f2-49a2-afa5-a6a9cc8f9bf6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 15:21:59 crc kubenswrapper[5123]: I1212 15:21:59.750401 5123 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 12 15:21:59 crc kubenswrapper[5123]: E1212 15:21:59.751651 5123 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-12 15:22:00.251624692 +0000 UTC m=+149.061577203 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 15:21:59 crc kubenswrapper[5123]: I1212 15:21:59.768356 5123 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-cnq9c container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 12 15:21:59 crc kubenswrapper[5123]: [-]has-synced failed: reason withheld Dec 12 15:21:59 crc kubenswrapper[5123]: [+]process-running ok Dec 12 15:21:59 crc kubenswrapper[5123]: healthz check failed Dec 12 15:21:59 crc kubenswrapper[5123]: I1212 15:21:59.768454 5123 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-cnq9c" podUID="dd669a9c-af5d-4084-bda4-81a455d4c281" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 12 15:22:00 crc kubenswrapper[5123]: I1212 15:22:00.557671 5123 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 12 15:22:00 crc kubenswrapper[5123]: E1212 15:22:00.558083 5123 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. 
No retries permitted until 2025-12-12 15:22:01.558054655 +0000 UTC m=+150.368007166 (durationBeforeRetry 1s). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 15:22:00 crc kubenswrapper[5123]: I1212 15:22:00.584612 5123 patch_prober.go:28] interesting pod/console-operator-67c89758df-vqqzf container/console-operator namespace/openshift-console-operator: Liveness probe status=failure output="Get \"https://10.217.0.11:8443/healthz\": dial tcp 10.217.0.11:8443: connect: connection refused" start-of-body= Dec 12 15:22:00 crc kubenswrapper[5123]: I1212 15:22:00.584802 5123 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-console-operator/console-operator-67c89758df-vqqzf" podUID="9da0a55f-2526-45cc-b820-1b31ce63745c" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.11:8443/healthz\": dial tcp 10.217.0.11:8443: connect: connection refused" Dec 12 15:22:00 crc kubenswrapper[5123]: I1212 15:22:00.600456 5123 patch_prober.go:28] interesting pod/console-64d44f6ddf-96rdx container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.8:8443/health\": dial tcp 10.217.0.8:8443: connect: connection refused" start-of-body= Dec 12 15:22:00 crc kubenswrapper[5123]: I1212 15:22:00.600543 5123 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-console/console-64d44f6ddf-96rdx" podUID="7ff811e4-3864-456b-8e00-b9e2d1c49ed8" containerName="console" probeResult="failure" output="Get \"https://10.217.0.8:8443/health\": dial tcp 10.217.0.8:8443: connect: connection refused" Dec 12 
15:22:00 crc kubenswrapper[5123]: I1212 15:22:00.660694 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-ts2mt\" (UID: \"40d826be-27f2-49a2-afa5-a6a9cc8f9bf6\") " pod="openshift-image-registry/image-registry-66587d64c8-ts2mt"
Dec 12 15:22:00 crc kubenswrapper[5123]: E1212 15:22:00.661158 5123 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-12 15:22:01.161141205 +0000 UTC m=+149.971093716 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-ts2mt" (UID: "40d826be-27f2-49a2-afa5-a6a9cc8f9bf6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 12 15:22:00 crc kubenswrapper[5123]: I1212 15:22:00.695782 5123 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-cnq9c container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Dec 12 15:22:00 crc kubenswrapper[5123]: [-]has-synced failed: reason withheld
Dec 12 15:22:00 crc kubenswrapper[5123]: [+]process-running ok
Dec 12 15:22:00 crc kubenswrapper[5123]: healthz check failed
Dec 12 15:22:00 crc kubenswrapper[5123]: I1212 15:22:00.695947 5123 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-cnq9c" podUID="dd669a9c-af5d-4084-bda4-81a455d4c281" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Dec 12 15:22:00 crc kubenswrapper[5123]: I1212 15:22:00.796649 5123 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Dec 12 15:22:00 crc kubenswrapper[5123]: E1212 15:22:00.797387 5123 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-12 15:22:01.297357736 +0000 UTC m=+150.107310247 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 12 15:22:01 crc kubenswrapper[5123]: E1212 15:22:00.933412 5123 kubelet.go:2642] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.249s"
Dec 12 15:22:01 crc kubenswrapper[5123]: I1212 15:22:01.059986 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-ts2mt\" (UID: \"40d826be-27f2-49a2-afa5-a6a9cc8f9bf6\") " pod="openshift-image-registry/image-registry-66587d64c8-ts2mt"
Dec 12 15:22:01 crc kubenswrapper[5123]: E1212 15:22:01.060512 5123 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-12 15:22:01.560496226 +0000 UTC m=+150.370448737 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-ts2mt" (UID: "40d826be-27f2-49a2-afa5-a6a9cc8f9bf6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 12 15:22:01 crc kubenswrapper[5123]: I1212 15:22:01.123172 5123 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-bbdv4" event={"ID":"788dd005-94a6-4a05-a0ce-c4dabe8dc04e","Type":"ContainerStarted","Data":"194b28941ac2624793649a4f5cb3a324b244d3872152273f1dc8fc4b0828cac5"}
Dec 12 15:22:01 crc kubenswrapper[5123]: I1212 15:22:01.291397 5123 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Dec 12 15:22:01 crc kubenswrapper[5123]: E1212 15:22:01.292674 5123 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-12 15:22:01.792629271 +0000 UTC m=+150.602581792 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 12 15:22:01 crc kubenswrapper[5123]: I1212 15:22:01.522187 5123 ???:1] "http: TLS handshake error from 192.168.126.11:52854: no serving certificate available for the kubelet"
Dec 12 15:22:01 crc kubenswrapper[5123]: I1212 15:22:01.524019 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-ts2mt\" (UID: \"40d826be-27f2-49a2-afa5-a6a9cc8f9bf6\") " pod="openshift-image-registry/image-registry-66587d64c8-ts2mt"
Dec 12 15:22:01 crc kubenswrapper[5123]: E1212 15:22:01.524685 5123 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-12 15:22:02.024646823 +0000 UTC m=+150.834599334 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-ts2mt" (UID: "40d826be-27f2-49a2-afa5-a6a9cc8f9bf6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 12 15:22:01 crc kubenswrapper[5123]: I1212 15:22:01.726603 5123 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Dec 12 15:22:01 crc kubenswrapper[5123]: E1212 15:22:01.727040 5123 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-12 15:22:02.227009363 +0000 UTC m=+151.036961894 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 12 15:22:01 crc kubenswrapper[5123]: I1212 15:22:01.763115 5123 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-cnq9c container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Dec 12 15:22:01 crc kubenswrapper[5123]: [-]has-synced failed: reason withheld
Dec 12 15:22:01 crc kubenswrapper[5123]: [+]process-running ok
Dec 12 15:22:01 crc kubenswrapper[5123]: healthz check failed
Dec 12 15:22:01 crc kubenswrapper[5123]: I1212 15:22:01.839391 5123 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-cnq9c" podUID="dd669a9c-af5d-4084-bda4-81a455d4c281" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Dec 12 15:22:01 crc kubenswrapper[5123]: I1212 15:22:01.835491 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-ts2mt\" (UID: \"40d826be-27f2-49a2-afa5-a6a9cc8f9bf6\") " pod="openshift-image-registry/image-registry-66587d64c8-ts2mt"
Dec 12 15:22:01 crc kubenswrapper[5123]: E1212 15:22:01.836366 5123 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-12 15:22:02.336345499 +0000 UTC m=+151.146298010 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-ts2mt" (UID: "40d826be-27f2-49a2-afa5-a6a9cc8f9bf6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 12 15:22:01 crc kubenswrapper[5123]: I1212 15:22:01.868703 5123 patch_prober.go:28] interesting pod/packageserver-7d4fc7d867-hznms container/packageserver namespace/openshift-operator-lifecycle-manager: Liveness probe status=failure output="Get \"https://10.217.0.32:5443/healthz\": dial tcp 10.217.0.32:5443: connect: connection refused" start-of-body=
Dec 12 15:22:01 crc kubenswrapper[5123]: I1212 15:22:01.869649 5123 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-hznms" podUID="f4afdf33-53ee-4eeb-83a3-a5a0dc656922" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.32:5443/healthz\": dial tcp 10.217.0.32:5443: connect: connection refused"
Dec 12 15:22:02 crc kubenswrapper[5123]: I1212 15:22:02.001464 5123 patch_prober.go:28] interesting pod/marketplace-operator-547dbd544d-rkcvb container/marketplace-operator namespace/openshift-marketplace: Liveness probe status=failure output="Get \"http://10.217.0.36:8080/healthz\": dial tcp 10.217.0.36:8080: connect: connection refused" start-of-body=
Dec 12 15:22:02 crc kubenswrapper[5123]: I1212 15:22:02.001589 5123 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-marketplace/marketplace-operator-547dbd544d-rkcvb" podUID="17ce8feb-99e5-42f3-a808-2dd39bc57377" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.36:8080/healthz\": dial tcp 10.217.0.36:8080: connect: connection refused"
Dec 12 15:22:02 crc kubenswrapper[5123]: I1212 15:22:02.008454 5123 patch_prober.go:28] interesting pod/olm-operator-5cdf44d969-pj4ts container/olm-operator namespace/openshift-operator-lifecycle-manager: Liveness probe status=failure output="Get \"https://10.217.0.28:8443/healthz\": dial tcp 10.217.0.28:8443: connect: connection refused" start-of-body=
Dec 12 15:22:02 crc kubenswrapper[5123]: I1212 15:22:02.008604 5123 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-pj4ts" podUID="7b7460e4-e37e-4643-9956-8097d8258066" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.28:8443/healthz\": dial tcp 10.217.0.28:8443: connect: connection refused"
Dec 12 15:22:02 crc kubenswrapper[5123]: I1212 15:22:02.009956 5123 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Dec 12 15:22:02 crc kubenswrapper[5123]: E1212 15:22:02.011299 5123 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-12 15:22:02.511264915 +0000 UTC m=+151.321217426 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 12 15:22:02 crc kubenswrapper[5123]: I1212 15:22:02.022410 5123 patch_prober.go:28] interesting pod/downloads-747b44746d-xhd9t container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.21:8080/\": dial tcp 10.217.0.21:8080: connect: connection refused" start-of-body=
Dec 12 15:22:02 crc kubenswrapper[5123]: I1212 15:22:02.022512 5123 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-747b44746d-xhd9t" podUID="09107a60-87da-4e17-9cc0-6dce06396ab6" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.21:8080/\": dial tcp 10.217.0.21:8080: connect: connection refused"
Dec 12 15:22:02 crc kubenswrapper[5123]: I1212 15:22:02.120999 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-ts2mt\" (UID: \"40d826be-27f2-49a2-afa5-a6a9cc8f9bf6\") " pod="openshift-image-registry/image-registry-66587d64c8-ts2mt"
Dec 12 15:22:02 crc kubenswrapper[5123]: I1212 15:22:02.175648 5123 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-c6l4m"
Dec 12 15:22:02 crc kubenswrapper[5123]: I1212 15:22:02.288581 5123 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-fhkjl" event={"ID":"17b87002-b798-480a-8e17-83053d698239","Type":"ContainerStarted","Data":"bde49a6b6c56bd3ddaaba8632eb9c042c12096bae92b5664fca379fc9e4882c2"}
Dec 12 15:22:02 crc kubenswrapper[5123]: I1212 15:22:02.319339 5123 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-mvm2v" event={"ID":"632abe1b-1a43-457c-86db-62fdb0572a0e","Type":"ContainerStarted","Data":"f4f07514ec7ed906a6dd0cc155ea6efff3990f863022eb999af77197042ed93a"}
Dec 12 15:22:02 crc kubenswrapper[5123]: E1212 15:22:02.438654 5123 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-12 15:22:02.938594385 +0000 UTC m=+151.748546906 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-ts2mt" (UID: "40d826be-27f2-49a2-afa5-a6a9cc8f9bf6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 12 15:22:02 crc kubenswrapper[5123]: I1212 15:22:02.440135 5123 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Dec 12 15:22:02 crc kubenswrapper[5123]: E1212 15:22:02.440766 5123 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-12 15:22:02.940731562 +0000 UTC m=+151.750684083 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 12 15:22:02 crc kubenswrapper[5123]: E1212 15:22:02.461688 5123 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="98b4e7c2a0fa09f214b8cbc48f52ccee3a3c7806bdd076dfd394403ea4fe6e1f" cmd=["/bin/bash","-c","test -f /ready/ready"]
Dec 12 15:22:02 crc kubenswrapper[5123]: I1212 15:22:02.465693 5123 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-sbt5r"]
Dec 12 15:22:02 crc kubenswrapper[5123]: I1212 15:22:02.466407 5123 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="c1681a2f-153f-44c0-901e-e85b401d30ee" containerName="collect-profiles"
Dec 12 15:22:02 crc kubenswrapper[5123]: I1212 15:22:02.466440 5123 state_mem.go:107] "Deleted CPUSet assignment" podUID="c1681a2f-153f-44c0-901e-e85b401d30ee" containerName="collect-profiles"
Dec 12 15:22:02 crc kubenswrapper[5123]: I1212 15:22:02.466631 5123 memory_manager.go:356] "RemoveStaleState removing state" podUID="c1681a2f-153f-44c0-901e-e85b401d30ee" containerName="collect-profiles"
Dec 12 15:22:02 crc kubenswrapper[5123]: E1212 15:22:02.468595 5123 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="98b4e7c2a0fa09f214b8cbc48f52ccee3a3c7806bdd076dfd394403ea4fe6e1f" cmd=["/bin/bash","-c","test -f /ready/ready"]
Dec 12 15:22:02 crc kubenswrapper[5123]: E1212 15:22:02.470549 5123 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="98b4e7c2a0fa09f214b8cbc48f52ccee3a3c7806bdd076dfd394403ea4fe6e1f" cmd=["/bin/bash","-c","test -f /ready/ready"]
Dec 12 15:22:02 crc kubenswrapper[5123]: E1212 15:22:02.470612 5123 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openshift-multus/cni-sysctl-allowlist-ds-mdpg8" podUID="6eb483de-06e5-4975-b29a-7fd9bc7674a9" containerName="kube-multus-additional-cni-plugins" probeResult="unknown"
Dec 12 15:22:02 crc kubenswrapper[5123]: I1212 15:22:02.545388 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-ts2mt\" (UID: \"40d826be-27f2-49a2-afa5-a6a9cc8f9bf6\") " pod="openshift-image-registry/image-registry-66587d64c8-ts2mt"
Dec 12 15:22:02 crc kubenswrapper[5123]: E1212 15:22:02.545747 5123 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-12 15:22:03.045729972 +0000 UTC m=+151.855682483 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-ts2mt" (UID: "40d826be-27f2-49a2-afa5-a6a9cc8f9bf6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 12 15:22:02 crc kubenswrapper[5123]: I1212 15:22:02.737116 5123 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Dec 12 15:22:02 crc kubenswrapper[5123]: E1212 15:22:02.737523 5123 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-12 15:22:03.237496099 +0000 UTC m=+152.047448610 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 12 15:22:02 crc kubenswrapper[5123]: I1212 15:22:02.753090 5123 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-cnq9c container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Dec 12 15:22:02 crc kubenswrapper[5123]: [-]has-synced failed: reason withheld
Dec 12 15:22:02 crc kubenswrapper[5123]: [+]process-running ok
Dec 12 15:22:02 crc kubenswrapper[5123]: healthz check failed
Dec 12 15:22:02 crc kubenswrapper[5123]: I1212 15:22:02.753433 5123 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-cnq9c" podUID="dd669a9c-af5d-4084-bda4-81a455d4c281" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Dec 12 15:22:02 crc kubenswrapper[5123]: I1212 15:22:02.868772 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-ts2mt\" (UID: \"40d826be-27f2-49a2-afa5-a6a9cc8f9bf6\") " pod="openshift-image-registry/image-registry-66587d64c8-ts2mt"
Dec 12 15:22:02 crc kubenswrapper[5123]: E1212 15:22:02.869313 5123 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-12 15:22:03.36928625 +0000 UTC m=+152.179238821 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-ts2mt" (UID: "40d826be-27f2-49a2-afa5-a6a9cc8f9bf6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 12 15:22:03 crc kubenswrapper[5123]: I1212 15:22:03.048246 5123 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Dec 12 15:22:03 crc kubenswrapper[5123]: E1212 15:22:03.048631 5123 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-12 15:22:03.548597495 +0000 UTC m=+152.358550006 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 12 15:22:03 crc kubenswrapper[5123]: I1212 15:22:03.099149 5123 patch_prober.go:28] interesting pod/packageserver-7d4fc7d867-hznms container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.32:5443/healthz\": dial tcp 10.217.0.32:5443: connect: connection refused" start-of-body=
Dec 12 15:22:03 crc kubenswrapper[5123]: I1212 15:22:03.099291 5123 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-hznms" podUID="f4afdf33-53ee-4eeb-83a3-a5a0dc656922" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.32:5443/healthz\": dial tcp 10.217.0.32:5443: connect: connection refused"
Dec 12 15:22:03 crc kubenswrapper[5123]: I1212 15:22:03.116369 5123 patch_prober.go:28] interesting pod/console-operator-67c89758df-vqqzf container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.11:8443/readyz\": dial tcp 10.217.0.11:8443: connect: connection refused" start-of-body=
Dec 12 15:22:03 crc kubenswrapper[5123]: I1212 15:22:03.116465 5123 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-67c89758df-vqqzf" podUID="9da0a55f-2526-45cc-b820-1b31ce63745c" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.11:8443/readyz\": dial tcp 10.217.0.11:8443: connect: connection refused"
Dec 12 15:22:03 crc kubenswrapper[5123]: I1212 15:22:03.222028 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-ts2mt\" (UID: \"40d826be-27f2-49a2-afa5-a6a9cc8f9bf6\") " pod="openshift-image-registry/image-registry-66587d64c8-ts2mt"
Dec 12 15:22:03 crc kubenswrapper[5123]: E1212 15:22:03.222393 5123 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-12 15:22:03.722378147 +0000 UTC m=+152.532330668 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-ts2mt" (UID: "40d826be-27f2-49a2-afa5-a6a9cc8f9bf6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 12 15:22:03 crc kubenswrapper[5123]: I1212 15:22:03.345860 5123 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Dec 12 15:22:03 crc kubenswrapper[5123]: E1212 15:22:03.346271 5123 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-12 15:22:03.846240539 +0000 UTC m=+152.656193050 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 12 15:22:03 crc kubenswrapper[5123]: I1212 15:22:03.447628 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-ts2mt\" (UID: \"40d826be-27f2-49a2-afa5-a6a9cc8f9bf6\") " pod="openshift-image-registry/image-registry-66587d64c8-ts2mt"
Dec 12 15:22:03 crc kubenswrapper[5123]: E1212 15:22:03.448269 5123 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-12 15:22:03.948245355 +0000 UTC m=+152.758197866 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-ts2mt" (UID: "40d826be-27f2-49a2-afa5-a6a9cc8f9bf6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 12 15:22:03 crc kubenswrapper[5123]: I1212 15:22:03.494022 5123 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/network-metrics-daemon-hmprz" podStartSLOduration=119.493989133 podStartE2EDuration="1m59.493989133s" podCreationTimestamp="2025-12-12 15:20:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 15:22:03.481113217 +0000 UTC m=+152.291065748" watchObservedRunningTime="2025-12-12 15:22:03.493989133 +0000 UTC m=+152.303941644"
Dec 12 15:22:03 crc kubenswrapper[5123]: I1212 15:22:03.601684 5123 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Dec 12 15:22:03 crc kubenswrapper[5123]: E1212 15:22:03.602107 5123 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-12 15:22:04.102082159 +0000 UTC m=+152.912034670 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 12 15:22:03 crc kubenswrapper[5123]: I1212 15:22:03.744542 5123 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-sbt5r"
Dec 12 15:22:03 crc kubenswrapper[5123]: I1212 15:22:03.749133 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-ts2mt\" (UID: \"40d826be-27f2-49a2-afa5-a6a9cc8f9bf6\") " pod="openshift-image-registry/image-registry-66587d64c8-ts2mt"
Dec 12 15:22:03 crc kubenswrapper[5123]: E1212 15:22:03.756436 5123 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-12 15:22:04.256386609 +0000 UTC m=+153.066339120 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-ts2mt" (UID: "40d826be-27f2-49a2-afa5-a6a9cc8f9bf6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 12 15:22:03 crc kubenswrapper[5123]: I1212 15:22:03.813923 5123 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"community-operators-dockercfg-vrd5f\""
Dec 12 15:22:03 crc kubenswrapper[5123]: I1212 15:22:03.823919 5123 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-cnq9c container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Dec 12 15:22:03 crc kubenswrapper[5123]: [-]has-synced failed: reason withheld
Dec 12 15:22:03 crc kubenswrapper[5123]: [+]process-running ok
Dec 12 15:22:03 crc kubenswrapper[5123]: healthz check failed
Dec 12 15:22:03 crc kubenswrapper[5123]: I1212 15:22:03.824012 5123 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-cnq9c" podUID="dd669a9c-af5d-4084-bda4-81a455d4c281" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Dec 12 15:22:03 crc kubenswrapper[5123]: I1212 15:22:03.832334 5123 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-hmprz" event={"ID":"e6c3a697-51e4-44dd-a38c-3287db85ce50","Type":"ContainerStarted","Data":"0cc20090d67e0f5d543290fb421a58e1f8e9908a67b405f87e2b12d11e143152"}
Dec 12 15:22:03 crc kubenswrapper[5123]: I1212 15:22:03.837369 5123 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler/revision-pruner-6-crc"]
Dec 12 15:22:03 crc kubenswrapper[5123]: I1212 15:22:03.856755 5123 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Dec 12 15:22:03 crc kubenswrapper[5123]: I1212 15:22:03.856957 5123 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/402bc75d-15b2-46d8-9455-d2d8c8c7c47a-utilities\") pod \"community-operators-sbt5r\" (UID: \"402bc75d-15b2-46d8-9455-d2d8c8c7c47a\") " pod="openshift-marketplace/community-operators-sbt5r"
Dec 12 15:22:03 crc kubenswrapper[5123]: I1212 15:22:03.856993 5123 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/402bc75d-15b2-46d8-9455-d2d8c8c7c47a-catalog-content\") pod \"community-operators-sbt5r\" (UID: \"402bc75d-15b2-46d8-9455-d2d8c8c7c47a\") " pod="openshift-marketplace/community-operators-sbt5r"
Dec 12 15:22:03 crc kubenswrapper[5123]: I1212 15:22:03.857108 5123 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-db5kh\" (UniqueName: \"kubernetes.io/projected/402bc75d-15b2-46d8-9455-d2d8c8c7c47a-kube-api-access-db5kh\") pod \"community-operators-sbt5r\" (UID: \"402bc75d-15b2-46d8-9455-d2d8c8c7c47a\") " pod="openshift-marketplace/community-operators-sbt5r"
Dec 12 15:22:03 crc kubenswrapper[5123]: E1212 15:22:03.857608 5123 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-12 15:22:04.357564329 +0000 UTC m=+153.167516840 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 12 15:22:03 crc kubenswrapper[5123]: I1212 15:22:03.961974 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-db5kh\" (UniqueName: \"kubernetes.io/projected/402bc75d-15b2-46d8-9455-d2d8c8c7c47a-kube-api-access-db5kh\") pod \"community-operators-sbt5r\" (UID: \"402bc75d-15b2-46d8-9455-d2d8c8c7c47a\") " pod="openshift-marketplace/community-operators-sbt5r"
Dec 12 15:22:03 crc kubenswrapper[5123]: I1212 15:22:03.962074 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-ts2mt\" (UID: \"40d826be-27f2-49a2-afa5-a6a9cc8f9bf6\") " pod="openshift-image-registry/image-registry-66587d64c8-ts2mt"
Dec 12 15:22:03 crc kubenswrapper[5123]: I1212 15:22:03.962160 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/402bc75d-15b2-46d8-9455-d2d8c8c7c47a-utilities\") pod \"community-operators-sbt5r\" (UID: \"402bc75d-15b2-46d8-9455-d2d8c8c7c47a\") " pod="openshift-marketplace/community-operators-sbt5r"
Dec 12 15:22:03 crc kubenswrapper[5123]: I1212 15:22:03.962188 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName:
\"kubernetes.io/empty-dir/402bc75d-15b2-46d8-9455-d2d8c8c7c47a-catalog-content\") pod \"community-operators-sbt5r\" (UID: \"402bc75d-15b2-46d8-9455-d2d8c8c7c47a\") " pod="openshift-marketplace/community-operators-sbt5r" Dec 12 15:22:03 crc kubenswrapper[5123]: E1212 15:22:03.963146 5123 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-12 15:22:04.463124046 +0000 UTC m=+153.273076557 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-ts2mt" (UID: "40d826be-27f2-49a2-afa5-a6a9cc8f9bf6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 15:22:03 crc kubenswrapper[5123]: I1212 15:22:03.963183 5123 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/402bc75d-15b2-46d8-9455-d2d8c8c7c47a-catalog-content\") pod \"community-operators-sbt5r\" (UID: \"402bc75d-15b2-46d8-9455-d2d8c8c7c47a\") " pod="openshift-marketplace/community-operators-sbt5r" Dec 12 15:22:03 crc kubenswrapper[5123]: I1212 15:22:03.963324 5123 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/402bc75d-15b2-46d8-9455-d2d8c8c7c47a-utilities\") pod \"community-operators-sbt5r\" (UID: \"402bc75d-15b2-46d8-9455-d2d8c8c7c47a\") " pod="openshift-marketplace/community-operators-sbt5r" Dec 12 15:22:04 crc kubenswrapper[5123]: I1212 15:22:04.088002 5123 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" 
(UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 12 15:22:04 crc kubenswrapper[5123]: E1212 15:22:04.088830 5123 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-12 15:22:04.588795325 +0000 UTC m=+153.398747846 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 15:22:04 crc kubenswrapper[5123]: I1212 15:22:04.135124 5123 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator/migrator-866fcbc849-t68lp" podStartSLOduration=120.135097181 podStartE2EDuration="2m0.135097181s" podCreationTimestamp="2025-12-12 15:20:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 15:22:04.128646698 +0000 UTC m=+152.938599219" watchObservedRunningTime="2025-12-12 15:22:04.135097181 +0000 UTC m=+152.945049712" Dec 12 15:22:04 crc kubenswrapper[5123]: I1212 15:22:04.170719 5123 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-db5kh\" (UniqueName: \"kubernetes.io/projected/402bc75d-15b2-46d8-9455-d2d8c8c7c47a-kube-api-access-db5kh\") pod \"community-operators-sbt5r\" (UID: \"402bc75d-15b2-46d8-9455-d2d8c8c7c47a\") " 
pod="openshift-marketplace/community-operators-sbt5r" Dec 12 15:22:04 crc kubenswrapper[5123]: I1212 15:22:04.172084 5123 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-admission-controller-69db94689b-gg4kh" podStartSLOduration=120.172057601 podStartE2EDuration="2m0.172057601s" podCreationTimestamp="2025-12-12 15:20:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 15:22:04.171315189 +0000 UTC m=+152.981267700" watchObservedRunningTime="2025-12-12 15:22:04.172057601 +0000 UTC m=+152.982010112" Dec 12 15:22:04 crc kubenswrapper[5123]: I1212 15:22:04.190656 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-ts2mt\" (UID: \"40d826be-27f2-49a2-afa5-a6a9cc8f9bf6\") " pod="openshift-image-registry/image-registry-66587d64c8-ts2mt" Dec 12 15:22:04 crc kubenswrapper[5123]: E1212 15:22:04.191119 5123 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-12 15:22:04.691101851 +0000 UTC m=+153.501054372 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-ts2mt" (UID: "40d826be-27f2-49a2-afa5-a6a9cc8f9bf6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 15:22:04 crc kubenswrapper[5123]: I1212 15:22:04.230536 5123 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-config-operator/openshift-config-operator-5777786469-9j9pt" podStartSLOduration=120.230502729 podStartE2EDuration="2m0.230502729s" podCreationTimestamp="2025-12-12 15:20:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 15:22:04.227238266 +0000 UTC m=+153.037190777" watchObservedRunningTime="2025-12-12 15:22:04.230502729 +0000 UTC m=+153.040455240" Dec 12 15:22:04 crc kubenswrapper[5123]: I1212 15:22:04.408930 5123 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 12 15:22:04 crc kubenswrapper[5123]: E1212 15:22:04.410034 5123 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-12 15:22:04.909996549 +0000 UTC m=+153.719949060 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 15:22:04 crc kubenswrapper[5123]: I1212 15:22:04.418434 5123 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-9qnbt" event={"ID":"12e31d4b-fe5c-4f42-82f2-75389d8a34d6","Type":"ContainerStarted","Data":"620b4a90e164b01ed7bb34b2f26767a42844c9527ecf7f6ff2af9d1f17dfd84a"} Dec 12 15:22:04 crc kubenswrapper[5123]: I1212 15:22:04.418495 5123 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-fjqk7"] Dec 12 15:22:04 crc kubenswrapper[5123]: I1212 15:22:04.518166 5123 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-sbt5r" Dec 12 15:22:04 crc kubenswrapper[5123]: I1212 15:22:04.526596 5123 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/09029143-1cd7-445a-bcff-2e8cd5d5a8b9-kube-api-access\") pod \"revision-pruner-6-crc\" (UID: \"09029143-1cd7-445a-bcff-2e8cd5d5a8b9\") " pod="openshift-kube-scheduler/revision-pruner-6-crc" Dec 12 15:22:04 crc kubenswrapper[5123]: I1212 15:22:04.526803 5123 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/09029143-1cd7-445a-bcff-2e8cd5d5a8b9-kubelet-dir\") pod \"revision-pruner-6-crc\" (UID: \"09029143-1cd7-445a-bcff-2e8cd5d5a8b9\") " pod="openshift-kube-scheduler/revision-pruner-6-crc" Dec 12 15:22:04 crc kubenswrapper[5123]: I1212 15:22:04.527184 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-ts2mt\" (UID: \"40d826be-27f2-49a2-afa5-a6a9cc8f9bf6\") " pod="openshift-image-registry/image-registry-66587d64c8-ts2mt" Dec 12 15:22:04 crc kubenswrapper[5123]: E1212 15:22:04.527659 5123 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-12 15:22:05.027641746 +0000 UTC m=+153.837594267 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-ts2mt" (UID: "40d826be-27f2-49a2-afa5-a6a9cc8f9bf6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 15:22:04 crc kubenswrapper[5123]: I1212 15:22:04.554077 5123 patch_prober.go:28] interesting pod/route-controller-manager-776cdc94d6-dc699 container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.26:8443/healthz\": dial tcp 10.217.0.26:8443: connect: connection refused" start-of-body= Dec 12 15:22:04 crc kubenswrapper[5123]: I1212 15:22:04.554189 5123 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-dc699" podUID="5ccaedd0-63de-4f5b-9106-b556e01fa2b8" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.26:8443/healthz\": dial tcp 10.217.0.26:8443: connect: connection refused" Dec 12 15:22:04 crc kubenswrapper[5123]: I1212 15:22:04.559240 5123 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-sbt5r"] Dec 12 15:22:04 crc kubenswrapper[5123]: I1212 15:22:04.559313 5123 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-network-diagnostics/network-check-target-fhkjl" Dec 12 15:22:04 crc kubenswrapper[5123]: I1212 15:22:04.559331 5123 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-69db94689b-gg4kh" event={"ID":"b5bd3e23-721c-45a0-be10-620b5a281623","Type":"ContainerStarted","Data":"f782cb31120f44a5df0c4ebc9d5f1b53a35ad9a5dc29b9b43aea33098aa5240d"} Dec 12 15:22:04 crc 
kubenswrapper[5123]: I1212 15:22:04.559360 5123 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-jd7j9" event={"ID":"626346f0-e585-4a37-8c9b-c6e36ee113bc","Type":"ContainerStarted","Data":"46911a9994b39e11d9cc3a43527c85d9549df25baa1922fe26a8da05f4d0cf14"} Dec 12 15:22:04 crc kubenswrapper[5123]: I1212 15:22:04.559380 5123 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-9qnbt" Dec 12 15:22:04 crc kubenswrapper[5123]: I1212 15:22:04.559394 5123 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler/revision-pruner-6-crc"] Dec 12 15:22:04 crc kubenswrapper[5123]: I1212 15:22:04.559408 5123 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-fjqk7"] Dec 12 15:22:04 crc kubenswrapper[5123]: I1212 15:22:04.559423 5123 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-dc699" Dec 12 15:22:04 crc kubenswrapper[5123]: I1212 15:22:04.559436 5123 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-5777786469-9j9pt" event={"ID":"2c1e4fb9-bde9-46df-8ac0-c0b457ca767f","Type":"ContainerStarted","Data":"e72a9e678ef8b6553b00a5a936fadc7ff84079c18dba69fa142128180872ad62"} Dec 12 15:22:04 crc kubenswrapper[5123]: I1212 15:22:04.559451 5123 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-7f5c659b84-nd2rm" event={"ID":"5254d27a-3c04-4921-b5e9-272cc901663d","Type":"ContainerStarted","Data":"246e4dbcff53fa71e944a561377d85f4dfb114b4b9312fd69dd62f251583aba8"} Dec 12 15:22:04 crc kubenswrapper[5123]: I1212 15:22:04.559471 5123 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-rqdb6"] Dec 12 15:22:04 crc kubenswrapper[5123]: I1212 
15:22:04.565342 5123 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/revision-pruner-6-crc" Dec 12 15:22:04 crc kubenswrapper[5123]: I1212 15:22:04.566326 5123 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-fjqk7" Dec 12 15:22:04 crc kubenswrapper[5123]: I1212 15:22:04.575790 5123 patch_prober.go:28] interesting pod/openshift-config-operator-5777786469-9j9pt container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.217.0.33:8443/healthz\": dial tcp 10.217.0.33:8443: connect: connection refused" start-of-body= Dec 12 15:22:04 crc kubenswrapper[5123]: I1212 15:22:04.575916 5123 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-5777786469-9j9pt" podUID="2c1e4fb9-bde9-46df-8ac0-c0b457ca767f" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.33:8443/healthz\": dial tcp 10.217.0.33:8443: connect: connection refused" Dec 12 15:22:04 crc kubenswrapper[5123]: I1212 15:22:04.576213 5123 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-scheduler\"/\"installer-sa-dockercfg-qpkss\"" Dec 12 15:22:04 crc kubenswrapper[5123]: I1212 15:22:04.576482 5123 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"certified-operators-dockercfg-7cl8d\"" Dec 12 15:22:04 crc kubenswrapper[5123]: I1212 15:22:04.586491 5123 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-scheduler\"/\"kube-root-ca.crt\"" Dec 12 15:22:04 crc kubenswrapper[5123]: I1212 15:22:04.634179 5123 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 12 15:22:04 crc kubenswrapper[5123]: I1212 15:22:04.634497 5123 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/78a70363-f10e-4d12-8279-c7f7f3b8402b-catalog-content\") pod \"certified-operators-fjqk7\" (UID: \"78a70363-f10e-4d12-8279-c7f7f3b8402b\") " pod="openshift-marketplace/certified-operators-fjqk7" Dec 12 15:22:04 crc kubenswrapper[5123]: I1212 15:22:04.634584 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/09029143-1cd7-445a-bcff-2e8cd5d5a8b9-kube-api-access\") pod \"revision-pruner-6-crc\" (UID: \"09029143-1cd7-445a-bcff-2e8cd5d5a8b9\") " pod="openshift-kube-scheduler/revision-pruner-6-crc" Dec 12 15:22:04 crc kubenswrapper[5123]: I1212 15:22:04.634655 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/09029143-1cd7-445a-bcff-2e8cd5d5a8b9-kubelet-dir\") pod \"revision-pruner-6-crc\" (UID: \"09029143-1cd7-445a-bcff-2e8cd5d5a8b9\") " pod="openshift-kube-scheduler/revision-pruner-6-crc" Dec 12 15:22:04 crc kubenswrapper[5123]: I1212 15:22:04.634714 5123 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xjzm2\" (UniqueName: \"kubernetes.io/projected/78a70363-f10e-4d12-8279-c7f7f3b8402b-kube-api-access-xjzm2\") pod \"certified-operators-fjqk7\" (UID: \"78a70363-f10e-4d12-8279-c7f7f3b8402b\") " pod="openshift-marketplace/certified-operators-fjqk7" Dec 12 15:22:04 crc kubenswrapper[5123]: I1212 15:22:04.634772 5123 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" 
(UniqueName: \"kubernetes.io/empty-dir/78a70363-f10e-4d12-8279-c7f7f3b8402b-utilities\") pod \"certified-operators-fjqk7\" (UID: \"78a70363-f10e-4d12-8279-c7f7f3b8402b\") " pod="openshift-marketplace/certified-operators-fjqk7" Dec 12 15:22:04 crc kubenswrapper[5123]: E1212 15:22:04.634973 5123 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-12 15:22:05.134940228 +0000 UTC m=+153.944892739 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 15:22:04 crc kubenswrapper[5123]: I1212 15:22:04.637762 5123 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/09029143-1cd7-445a-bcff-2e8cd5d5a8b9-kubelet-dir\") pod \"revision-pruner-6-crc\" (UID: \"09029143-1cd7-445a-bcff-2e8cd5d5a8b9\") " pod="openshift-kube-scheduler/revision-pruner-6-crc" Dec 12 15:22:04 crc kubenswrapper[5123]: I1212 15:22:04.669838 5123 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-dc699" event={"ID":"5ccaedd0-63de-4f5b-9106-b556e01fa2b8","Type":"ContainerStarted","Data":"218e8250c7d67e4455cae7e1da72301027c81e75cb106ecd51e292579e615970"} Dec 12 15:22:04 crc kubenswrapper[5123]: I1212 15:22:04.669934 5123 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-866fcbc849-t68lp" 
event={"ID":"ae911826-fe03-4967-bdf1-f1eb5fc10ea4","Type":"ContainerStarted","Data":"af625fc3aa958cca53215a081aca89e455a121f9c970bd01a1875613b6839293"} Dec 12 15:22:04 crc kubenswrapper[5123]: I1212 15:22:04.669974 5123 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-config-operator/openshift-config-operator-5777786469-9j9pt" Dec 12 15:22:04 crc kubenswrapper[5123]: I1212 15:22:04.670008 5123 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-t8xgq" event={"ID":"d22355c6-2b0f-4caa-aa4b-92bd124103ad","Type":"ContainerStarted","Data":"3cdeb2b9ea28bcc597bf6b96301b8610a87d734049d987a424610365e9b68696"} Dec 12 15:22:04 crc kubenswrapper[5123]: I1212 15:22:04.670038 5123 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" event={"ID":"f863fff9-286a-45fa-b8f0-8a86994b8440","Type":"ContainerStarted","Data":"7eb4bc420dc8634f8fbcf4c4f4f73493a80816a5c54df8496564429b9de14ca2"} Dec 12 15:22:04 crc kubenswrapper[5123]: I1212 15:22:04.670062 5123 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-8596bd845d-qvmj6" event={"ID":"735555bc-661a-4a48-a615-c88944194992","Type":"ContainerStarted","Data":"4a3c7bf68c4e4c58893617aab3d1b598b5ac8347d97c5510bc2e3c22e7c8e040"} Dec 12 15:22:04 crc kubenswrapper[5123]: I1212 15:22:04.670095 5123 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-d4cwn"] Dec 12 15:22:04 crc kubenswrapper[5123]: I1212 15:22:04.670722 5123 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-rqdb6" Dec 12 15:22:04 crc kubenswrapper[5123]: I1212 15:22:04.687698 5123 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-799b87ffcd-kvxss" event={"ID":"7e490e0b-11da-4093-bd3c-a328ebd6e304","Type":"ContainerStarted","Data":"dbbab273d11a065e81bbdef5ab9ad21c5a8c9e21d50ae4afa46e5aef61193b0b"} Dec 12 15:22:04 crc kubenswrapper[5123]: I1212 15:22:04.687805 5123 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-d4cwn"] Dec 12 15:22:04 crc kubenswrapper[5123]: I1212 15:22:04.687895 5123 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-rqdb6"] Dec 12 15:22:04 crc kubenswrapper[5123]: I1212 15:22:04.687986 5123 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-pkqnl"] Dec 12 15:22:04 crc kubenswrapper[5123]: I1212 15:22:04.692902 5123 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-cnq9c container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 12 15:22:04 crc kubenswrapper[5123]: [-]has-synced failed: reason withheld Dec 12 15:22:04 crc kubenswrapper[5123]: [+]process-running ok Dec 12 15:22:04 crc kubenswrapper[5123]: healthz check failed Dec 12 15:22:04 crc kubenswrapper[5123]: I1212 15:22:04.692992 5123 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-cnq9c" podUID="dd669a9c-af5d-4084-bda4-81a455d4c281" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 12 15:22:04 crc kubenswrapper[5123]: I1212 15:22:04.706847 5123 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-d4cwn" Dec 12 15:22:04 crc kubenswrapper[5123]: I1212 15:22:04.714664 5123 patch_prober.go:28] interesting pod/olm-operator-5cdf44d969-pj4ts container/olm-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.28:8443/healthz\": dial tcp 10.217.0.28:8443: connect: connection refused" start-of-body= Dec 12 15:22:04 crc kubenswrapper[5123]: I1212 15:22:04.714759 5123 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-pj4ts" podUID="7b7460e4-e37e-4643-9956-8097d8258066" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.28:8443/healthz\": dial tcp 10.217.0.28:8443: connect: connection refused" Dec 12 15:22:04 crc kubenswrapper[5123]: I1212 15:22:04.716131 5123 patch_prober.go:28] interesting pod/marketplace-operator-547dbd544d-rkcvb container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.36:8080/healthz\": dial tcp 10.217.0.36:8080: connect: connection refused" start-of-body= Dec 12 15:22:04 crc kubenswrapper[5123]: I1212 15:22:04.716267 5123 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-547dbd544d-rkcvb" podUID="17ce8feb-99e5-42f3-a808-2dd39bc57377" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.36:8080/healthz\": dial tcp 10.217.0.36:8080: connect: connection refused" Dec 12 15:22:04 crc kubenswrapper[5123]: I1212 15:22:04.718077 5123 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-pkqnl"] Dec 12 15:22:04 crc kubenswrapper[5123]: I1212 15:22:04.718131 5123 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-shltm"] Dec 12 15:22:04 crc kubenswrapper[5123]: I1212 15:22:04.720701 5123 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-pkqnl" Dec 12 15:22:04 crc kubenswrapper[5123]: I1212 15:22:04.737975 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/78a70363-f10e-4d12-8279-c7f7f3b8402b-catalog-content\") pod \"certified-operators-fjqk7\" (UID: \"78a70363-f10e-4d12-8279-c7f7f3b8402b\") " pod="openshift-marketplace/certified-operators-fjqk7" Dec 12 15:22:04 crc kubenswrapper[5123]: I1212 15:22:04.738449 5123 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c772c7c7-2e1a-46a6-9b7d-e07aa2522d56-catalog-content\") pod \"certified-operators-d4cwn\" (UID: \"c772c7c7-2e1a-46a6-9b7d-e07aa2522d56\") " pod="openshift-marketplace/certified-operators-d4cwn" Dec 12 15:22:04 crc kubenswrapper[5123]: I1212 15:22:04.738494 5123 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/320bf855-399c-4de0-bbbd-8dcdcb5d9e2a-catalog-content\") pod \"redhat-operators-pkqnl\" (UID: \"320bf855-399c-4de0-bbbd-8dcdcb5d9e2a\") " pod="openshift-marketplace/redhat-operators-pkqnl" Dec 12 15:22:04 crc kubenswrapper[5123]: I1212 15:22:04.738554 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-ts2mt\" (UID: \"40d826be-27f2-49a2-afa5-a6a9cc8f9bf6\") " pod="openshift-image-registry/image-registry-66587d64c8-ts2mt" Dec 12 15:22:04 crc kubenswrapper[5123]: I1212 15:22:04.738599 5123 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/fb848e09-5c56-451f-a83b-d2e794432b47-catalog-content\") pod \"community-operators-rqdb6\" (UID: \"fb848e09-5c56-451f-a83b-d2e794432b47\") " pod="openshift-marketplace/community-operators-rqdb6" Dec 12 15:22:04 crc kubenswrapper[5123]: I1212 15:22:04.738651 5123 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ql4jg\" (UniqueName: \"kubernetes.io/projected/fb848e09-5c56-451f-a83b-d2e794432b47-kube-api-access-ql4jg\") pod \"community-operators-rqdb6\" (UID: \"fb848e09-5c56-451f-a83b-d2e794432b47\") " pod="openshift-marketplace/community-operators-rqdb6" Dec 12 15:22:04 crc kubenswrapper[5123]: I1212 15:22:04.738691 5123 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/320bf855-399c-4de0-bbbd-8dcdcb5d9e2a-utilities\") pod \"redhat-operators-pkqnl\" (UID: \"320bf855-399c-4de0-bbbd-8dcdcb5d9e2a\") " pod="openshift-marketplace/redhat-operators-pkqnl" Dec 12 15:22:04 crc kubenswrapper[5123]: I1212 15:22:04.738793 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-xjzm2\" (UniqueName: \"kubernetes.io/projected/78a70363-f10e-4d12-8279-c7f7f3b8402b-kube-api-access-xjzm2\") pod \"certified-operators-fjqk7\" (UID: \"78a70363-f10e-4d12-8279-c7f7f3b8402b\") " pod="openshift-marketplace/certified-operators-fjqk7" Dec 12 15:22:04 crc kubenswrapper[5123]: I1212 15:22:04.738831 5123 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fb848e09-5c56-451f-a83b-d2e794432b47-utilities\") pod \"community-operators-rqdb6\" (UID: \"fb848e09-5c56-451f-a83b-d2e794432b47\") " pod="openshift-marketplace/community-operators-rqdb6" Dec 12 15:22:04 crc kubenswrapper[5123]: I1212 15:22:04.738883 5123 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-45lwq\" (UniqueName: \"kubernetes.io/projected/c772c7c7-2e1a-46a6-9b7d-e07aa2522d56-kube-api-access-45lwq\") pod \"certified-operators-d4cwn\" (UID: \"c772c7c7-2e1a-46a6-9b7d-e07aa2522d56\") " pod="openshift-marketplace/certified-operators-d4cwn" Dec 12 15:22:04 crc kubenswrapper[5123]: I1212 15:22:04.738912 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/78a70363-f10e-4d12-8279-c7f7f3b8402b-utilities\") pod \"certified-operators-fjqk7\" (UID: \"78a70363-f10e-4d12-8279-c7f7f3b8402b\") " pod="openshift-marketplace/certified-operators-fjqk7" Dec 12 15:22:04 crc kubenswrapper[5123]: I1212 15:22:04.738927 5123 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c772c7c7-2e1a-46a6-9b7d-e07aa2522d56-utilities\") pod \"certified-operators-d4cwn\" (UID: \"c772c7c7-2e1a-46a6-9b7d-e07aa2522d56\") " pod="openshift-marketplace/certified-operators-d4cwn" Dec 12 15:22:04 crc kubenswrapper[5123]: I1212 15:22:04.738966 5123 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nsrz5\" (UniqueName: \"kubernetes.io/projected/320bf855-399c-4de0-bbbd-8dcdcb5d9e2a-kube-api-access-nsrz5\") pod \"redhat-operators-pkqnl\" (UID: \"320bf855-399c-4de0-bbbd-8dcdcb5d9e2a\") " pod="openshift-marketplace/redhat-operators-pkqnl" Dec 12 15:22:04 crc kubenswrapper[5123]: I1212 15:22:04.740170 5123 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/78a70363-f10e-4d12-8279-c7f7f3b8402b-catalog-content\") pod \"certified-operators-fjqk7\" (UID: \"78a70363-f10e-4d12-8279-c7f7f3b8402b\") " pod="openshift-marketplace/certified-operators-fjqk7" Dec 12 15:22:04 crc kubenswrapper[5123]: E1212 15:22:04.740968 
5123 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-12 15:22:05.24094664 +0000 UTC m=+154.050899151 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-ts2mt" (UID: "40d826be-27f2-49a2-afa5-a6a9cc8f9bf6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 15:22:04 crc kubenswrapper[5123]: I1212 15:22:04.741656 5123 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/78a70363-f10e-4d12-8279-c7f7f3b8402b-utilities\") pod \"certified-operators-fjqk7\" (UID: \"78a70363-f10e-4d12-8279-c7f7f3b8402b\") " pod="openshift-marketplace/certified-operators-fjqk7" Dec 12 15:22:04 crc kubenswrapper[5123]: I1212 15:22:04.769141 5123 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-shltm"] Dec 12 15:22:04 crc kubenswrapper[5123]: I1212 15:22:04.769239 5123 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-gbdrq"] Dec 12 15:22:04 crc kubenswrapper[5123]: I1212 15:22:04.786751 5123 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-shltm" Dec 12 15:22:04 crc kubenswrapper[5123]: I1212 15:22:04.788165 5123 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-gbdrq"] Dec 12 15:22:04 crc kubenswrapper[5123]: I1212 15:22:04.788288 5123 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-8rkl4"] Dec 12 15:22:04 crc kubenswrapper[5123]: I1212 15:22:04.825794 5123 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-gbdrq" Dec 12 15:22:04 crc kubenswrapper[5123]: I1212 15:22:04.830686 5123 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"redhat-operators-dockercfg-9gxlh\"" Dec 12 15:22:04 crc kubenswrapper[5123]: I1212 15:22:04.833710 5123 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"redhat-marketplace-dockercfg-gg4w7\"" Dec 12 15:22:04 crc kubenswrapper[5123]: I1212 15:22:04.839973 5123 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 12 15:22:04 crc kubenswrapper[5123]: I1212 15:22:04.840453 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/320bf855-399c-4de0-bbbd-8dcdcb5d9e2a-utilities\") pod \"redhat-operators-pkqnl\" (UID: \"320bf855-399c-4de0-bbbd-8dcdcb5d9e2a\") " pod="openshift-marketplace/redhat-operators-pkqnl" Dec 12 15:22:04 crc kubenswrapper[5123]: I1212 15:22:04.840517 5123 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bm5mj\" (UniqueName: 
\"kubernetes.io/projected/f8cdb4da-d02c-42f7-9f61-cb5e162d26a7-kube-api-access-bm5mj\") pod \"redhat-marketplace-shltm\" (UID: \"f8cdb4da-d02c-42f7-9f61-cb5e162d26a7\") " pod="openshift-marketplace/redhat-marketplace-shltm" Dec 12 15:22:04 crc kubenswrapper[5123]: I1212 15:22:04.840564 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fb848e09-5c56-451f-a83b-d2e794432b47-utilities\") pod \"community-operators-rqdb6\" (UID: \"fb848e09-5c56-451f-a83b-d2e794432b47\") " pod="openshift-marketplace/community-operators-rqdb6" Dec 12 15:22:04 crc kubenswrapper[5123]: I1212 15:22:04.840588 5123 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f8cdb4da-d02c-42f7-9f61-cb5e162d26a7-catalog-content\") pod \"redhat-marketplace-shltm\" (UID: \"f8cdb4da-d02c-42f7-9f61-cb5e162d26a7\") " pod="openshift-marketplace/redhat-marketplace-shltm" Dec 12 15:22:04 crc kubenswrapper[5123]: I1212 15:22:04.840617 5123 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a077f03f-9a73-4019-912b-e2ebdf5308a5-catalog-content\") pod \"redhat-operators-gbdrq\" (UID: \"a077f03f-9a73-4019-912b-e2ebdf5308a5\") " pod="openshift-marketplace/redhat-operators-gbdrq" Dec 12 15:22:04 crc kubenswrapper[5123]: I1212 15:22:04.840665 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-45lwq\" (UniqueName: \"kubernetes.io/projected/c772c7c7-2e1a-46a6-9b7d-e07aa2522d56-kube-api-access-45lwq\") pod \"certified-operators-d4cwn\" (UID: \"c772c7c7-2e1a-46a6-9b7d-e07aa2522d56\") " pod="openshift-marketplace/certified-operators-d4cwn" Dec 12 15:22:04 crc kubenswrapper[5123]: I1212 15:22:04.840703 5123 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"kube-api-access-bkd85\" (UniqueName: \"kubernetes.io/projected/a077f03f-9a73-4019-912b-e2ebdf5308a5-kube-api-access-bkd85\") pod \"redhat-operators-gbdrq\" (UID: \"a077f03f-9a73-4019-912b-e2ebdf5308a5\") " pod="openshift-marketplace/redhat-operators-gbdrq" Dec 12 15:22:04 crc kubenswrapper[5123]: I1212 15:22:04.840733 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c772c7c7-2e1a-46a6-9b7d-e07aa2522d56-utilities\") pod \"certified-operators-d4cwn\" (UID: \"c772c7c7-2e1a-46a6-9b7d-e07aa2522d56\") " pod="openshift-marketplace/certified-operators-d4cwn" Dec 12 15:22:04 crc kubenswrapper[5123]: I1212 15:22:04.840769 5123 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f8cdb4da-d02c-42f7-9f61-cb5e162d26a7-utilities\") pod \"redhat-marketplace-shltm\" (UID: \"f8cdb4da-d02c-42f7-9f61-cb5e162d26a7\") " pod="openshift-marketplace/redhat-marketplace-shltm" Dec 12 15:22:04 crc kubenswrapper[5123]: I1212 15:22:04.840847 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-nsrz5\" (UniqueName: \"kubernetes.io/projected/320bf855-399c-4de0-bbbd-8dcdcb5d9e2a-kube-api-access-nsrz5\") pod \"redhat-operators-pkqnl\" (UID: \"320bf855-399c-4de0-bbbd-8dcdcb5d9e2a\") " pod="openshift-marketplace/redhat-operators-pkqnl" Dec 12 15:22:04 crc kubenswrapper[5123]: I1212 15:22:04.840910 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c772c7c7-2e1a-46a6-9b7d-e07aa2522d56-catalog-content\") pod \"certified-operators-d4cwn\" (UID: \"c772c7c7-2e1a-46a6-9b7d-e07aa2522d56\") " pod="openshift-marketplace/certified-operators-d4cwn" Dec 12 15:22:04 crc kubenswrapper[5123]: I1212 15:22:04.840938 5123 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a077f03f-9a73-4019-912b-e2ebdf5308a5-utilities\") pod \"redhat-operators-gbdrq\" (UID: \"a077f03f-9a73-4019-912b-e2ebdf5308a5\") " pod="openshift-marketplace/redhat-operators-gbdrq" Dec 12 15:22:04 crc kubenswrapper[5123]: I1212 15:22:04.841000 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/320bf855-399c-4de0-bbbd-8dcdcb5d9e2a-catalog-content\") pod \"redhat-operators-pkqnl\" (UID: \"320bf855-399c-4de0-bbbd-8dcdcb5d9e2a\") " pod="openshift-marketplace/redhat-operators-pkqnl" Dec 12 15:22:04 crc kubenswrapper[5123]: I1212 15:22:04.841057 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fb848e09-5c56-451f-a83b-d2e794432b47-catalog-content\") pod \"community-operators-rqdb6\" (UID: \"fb848e09-5c56-451f-a83b-d2e794432b47\") " pod="openshift-marketplace/community-operators-rqdb6" Dec 12 15:22:04 crc kubenswrapper[5123]: I1212 15:22:04.841093 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-ql4jg\" (UniqueName: \"kubernetes.io/projected/fb848e09-5c56-451f-a83b-d2e794432b47-kube-api-access-ql4jg\") pod \"community-operators-rqdb6\" (UID: \"fb848e09-5c56-451f-a83b-d2e794432b47\") " pod="openshift-marketplace/community-operators-rqdb6" Dec 12 15:22:04 crc kubenswrapper[5123]: E1212 15:22:04.841646 5123 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-12 15:22:05.341618793 +0000 UTC m=+154.151571324 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 15:22:04 crc kubenswrapper[5123]: I1212 15:22:04.843001 5123 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c772c7c7-2e1a-46a6-9b7d-e07aa2522d56-utilities\") pod \"certified-operators-d4cwn\" (UID: \"c772c7c7-2e1a-46a6-9b7d-e07aa2522d56\") " pod="openshift-marketplace/certified-operators-d4cwn" Dec 12 15:22:04 crc kubenswrapper[5123]: I1212 15:22:04.843417 5123 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fb848e09-5c56-451f-a83b-d2e794432b47-utilities\") pod \"community-operators-rqdb6\" (UID: \"fb848e09-5c56-451f-a83b-d2e794432b47\") " pod="openshift-marketplace/community-operators-rqdb6" Dec 12 15:22:04 crc kubenswrapper[5123]: I1212 15:22:04.844065 5123 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c772c7c7-2e1a-46a6-9b7d-e07aa2522d56-catalog-content\") pod \"certified-operators-d4cwn\" (UID: \"c772c7c7-2e1a-46a6-9b7d-e07aa2522d56\") " pod="openshift-marketplace/certified-operators-d4cwn" Dec 12 15:22:04 crc kubenswrapper[5123]: I1212 15:22:04.846776 5123 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/320bf855-399c-4de0-bbbd-8dcdcb5d9e2a-catalog-content\") pod \"redhat-operators-pkqnl\" (UID: \"320bf855-399c-4de0-bbbd-8dcdcb5d9e2a\") " pod="openshift-marketplace/redhat-operators-pkqnl" Dec 12 15:22:04 crc 
kubenswrapper[5123]: I1212 15:22:04.847084 5123 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fb848e09-5c56-451f-a83b-d2e794432b47-catalog-content\") pod \"community-operators-rqdb6\" (UID: \"fb848e09-5c56-451f-a83b-d2e794432b47\") " pod="openshift-marketplace/community-operators-rqdb6" Dec 12 15:22:04 crc kubenswrapper[5123]: I1212 15:22:04.868698 5123 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/320bf855-399c-4de0-bbbd-8dcdcb5d9e2a-utilities\") pod \"redhat-operators-pkqnl\" (UID: \"320bf855-399c-4de0-bbbd-8dcdcb5d9e2a\") " pod="openshift-marketplace/redhat-operators-pkqnl" Dec 12 15:22:04 crc kubenswrapper[5123]: I1212 15:22:04.881152 5123 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-8rkl4"] Dec 12 15:22:04 crc kubenswrapper[5123]: I1212 15:22:04.881422 5123 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8rkl4" Dec 12 15:22:04 crc kubenswrapper[5123]: I1212 15:22:04.943349 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f8cdb4da-d02c-42f7-9f61-cb5e162d26a7-catalog-content\") pod \"redhat-marketplace-shltm\" (UID: \"f8cdb4da-d02c-42f7-9f61-cb5e162d26a7\") " pod="openshift-marketplace/redhat-marketplace-shltm" Dec 12 15:22:04 crc kubenswrapper[5123]: I1212 15:22:04.943399 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a077f03f-9a73-4019-912b-e2ebdf5308a5-catalog-content\") pod \"redhat-operators-gbdrq\" (UID: \"a077f03f-9a73-4019-912b-e2ebdf5308a5\") " pod="openshift-marketplace/redhat-operators-gbdrq" Dec 12 15:22:04 crc kubenswrapper[5123]: I1212 15:22:04.943433 5123 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e501d3fb-0bf6-4f90-bafb-521b5f6c8b9e-utilities\") pod \"redhat-marketplace-8rkl4\" (UID: \"e501d3fb-0bf6-4f90-bafb-521b5f6c8b9e\") " pod="openshift-marketplace/redhat-marketplace-8rkl4" Dec 12 15:22:04 crc kubenswrapper[5123]: I1212 15:22:04.943453 5123 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zqtfc\" (UniqueName: \"kubernetes.io/projected/e501d3fb-0bf6-4f90-bafb-521b5f6c8b9e-kube-api-access-zqtfc\") pod \"redhat-marketplace-8rkl4\" (UID: \"e501d3fb-0bf6-4f90-bafb-521b5f6c8b9e\") " pod="openshift-marketplace/redhat-marketplace-8rkl4" Dec 12 15:22:04 crc kubenswrapper[5123]: I1212 15:22:04.943486 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-bkd85\" (UniqueName: \"kubernetes.io/projected/a077f03f-9a73-4019-912b-e2ebdf5308a5-kube-api-access-bkd85\") pod \"redhat-operators-gbdrq\" (UID: 
\"a077f03f-9a73-4019-912b-e2ebdf5308a5\") " pod="openshift-marketplace/redhat-operators-gbdrq" Dec 12 15:22:04 crc kubenswrapper[5123]: I1212 15:22:04.943517 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f8cdb4da-d02c-42f7-9f61-cb5e162d26a7-utilities\") pod \"redhat-marketplace-shltm\" (UID: \"f8cdb4da-d02c-42f7-9f61-cb5e162d26a7\") " pod="openshift-marketplace/redhat-marketplace-shltm" Dec 12 15:22:04 crc kubenswrapper[5123]: I1212 15:22:04.943578 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a077f03f-9a73-4019-912b-e2ebdf5308a5-utilities\") pod \"redhat-operators-gbdrq\" (UID: \"a077f03f-9a73-4019-912b-e2ebdf5308a5\") " pod="openshift-marketplace/redhat-operators-gbdrq" Dec 12 15:22:04 crc kubenswrapper[5123]: I1212 15:22:04.943637 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-ts2mt\" (UID: \"40d826be-27f2-49a2-afa5-a6a9cc8f9bf6\") " pod="openshift-image-registry/image-registry-66587d64c8-ts2mt" Dec 12 15:22:04 crc kubenswrapper[5123]: I1212 15:22:04.943702 5123 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e501d3fb-0bf6-4f90-bafb-521b5f6c8b9e-catalog-content\") pod \"redhat-marketplace-8rkl4\" (UID: \"e501d3fb-0bf6-4f90-bafb-521b5f6c8b9e\") " pod="openshift-marketplace/redhat-marketplace-8rkl4" Dec 12 15:22:04 crc kubenswrapper[5123]: I1212 15:22:04.943767 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-bm5mj\" (UniqueName: \"kubernetes.io/projected/f8cdb4da-d02c-42f7-9f61-cb5e162d26a7-kube-api-access-bm5mj\") pod 
\"redhat-marketplace-shltm\" (UID: \"f8cdb4da-d02c-42f7-9f61-cb5e162d26a7\") " pod="openshift-marketplace/redhat-marketplace-shltm" Dec 12 15:22:04 crc kubenswrapper[5123]: I1212 15:22:04.944946 5123 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f8cdb4da-d02c-42f7-9f61-cb5e162d26a7-catalog-content\") pod \"redhat-marketplace-shltm\" (UID: \"f8cdb4da-d02c-42f7-9f61-cb5e162d26a7\") " pod="openshift-marketplace/redhat-marketplace-shltm" Dec 12 15:22:04 crc kubenswrapper[5123]: I1212 15:22:04.945552 5123 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a077f03f-9a73-4019-912b-e2ebdf5308a5-catalog-content\") pod \"redhat-operators-gbdrq\" (UID: \"a077f03f-9a73-4019-912b-e2ebdf5308a5\") " pod="openshift-marketplace/redhat-operators-gbdrq" Dec 12 15:22:04 crc kubenswrapper[5123]: E1212 15:22:04.953055 5123 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-12 15:22:05.453028054 +0000 UTC m=+154.262980565 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-ts2mt" (UID: "40d826be-27f2-49a2-afa5-a6a9cc8f9bf6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 15:22:04 crc kubenswrapper[5123]: I1212 15:22:04.953484 5123 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a077f03f-9a73-4019-912b-e2ebdf5308a5-utilities\") pod \"redhat-operators-gbdrq\" (UID: \"a077f03f-9a73-4019-912b-e2ebdf5308a5\") " pod="openshift-marketplace/redhat-operators-gbdrq" Dec 12 15:22:05 crc kubenswrapper[5123]: I1212 15:22:05.609722 5123 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 12 15:22:05 crc kubenswrapper[5123]: E1212 15:22:05.610288 5123 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-12 15:22:06.610233686 +0000 UTC m=+155.420186207 (durationBeforeRetry 1s). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 15:22:05 crc kubenswrapper[5123]: I1212 15:22:05.611423 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e501d3fb-0bf6-4f90-bafb-521b5f6c8b9e-catalog-content\") pod \"redhat-marketplace-8rkl4\" (UID: \"e501d3fb-0bf6-4f90-bafb-521b5f6c8b9e\") " pod="openshift-marketplace/redhat-marketplace-8rkl4" Dec 12 15:22:05 crc kubenswrapper[5123]: I1212 15:22:05.611866 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e501d3fb-0bf6-4f90-bafb-521b5f6c8b9e-utilities\") pod \"redhat-marketplace-8rkl4\" (UID: \"e501d3fb-0bf6-4f90-bafb-521b5f6c8b9e\") " pod="openshift-marketplace/redhat-marketplace-8rkl4" Dec 12 15:22:05 crc kubenswrapper[5123]: I1212 15:22:05.611977 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-zqtfc\" (UniqueName: \"kubernetes.io/projected/e501d3fb-0bf6-4f90-bafb-521b5f6c8b9e-kube-api-access-zqtfc\") pod \"redhat-marketplace-8rkl4\" (UID: \"e501d3fb-0bf6-4f90-bafb-521b5f6c8b9e\") " pod="openshift-marketplace/redhat-marketplace-8rkl4" Dec 12 15:22:05 crc kubenswrapper[5123]: I1212 15:22:05.623984 5123 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e501d3fb-0bf6-4f90-bafb-521b5f6c8b9e-catalog-content\") pod \"redhat-marketplace-8rkl4\" (UID: \"e501d3fb-0bf6-4f90-bafb-521b5f6c8b9e\") " 
pod="openshift-marketplace/redhat-marketplace-8rkl4" Dec 12 15:22:05 crc kubenswrapper[5123]: I1212 15:22:05.629114 5123 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e501d3fb-0bf6-4f90-bafb-521b5f6c8b9e-utilities\") pod \"redhat-marketplace-8rkl4\" (UID: \"e501d3fb-0bf6-4f90-bafb-521b5f6c8b9e\") " pod="openshift-marketplace/redhat-marketplace-8rkl4" Dec 12 15:22:05 crc kubenswrapper[5123]: I1212 15:22:05.631387 5123 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/09029143-1cd7-445a-bcff-2e8cd5d5a8b9-kube-api-access\") pod \"revision-pruner-6-crc\" (UID: \"09029143-1cd7-445a-bcff-2e8cd5d5a8b9\") " pod="openshift-kube-scheduler/revision-pruner-6-crc" Dec 12 15:22:06 crc kubenswrapper[5123]: I1212 15:22:05.701537 5123 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication-operator/authentication-operator-7f5c659b84-nd2rm" podStartSLOduration=122.701510607 podStartE2EDuration="2m2.701510607s" podCreationTimestamp="2025-12-12 15:20:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 15:22:04.795895387 +0000 UTC m=+153.605847918" watchObservedRunningTime="2025-12-12 15:22:05.701510607 +0000 UTC m=+154.511463238" Dec 12 15:22:06 crc kubenswrapper[5123]: I1212 15:22:05.736485 5123 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-45lwq\" (UniqueName: \"kubernetes.io/projected/c772c7c7-2e1a-46a6-9b7d-e07aa2522d56-kube-api-access-45lwq\") pod \"certified-operators-d4cwn\" (UID: \"c772c7c7-2e1a-46a6-9b7d-e07aa2522d56\") " pod="openshift-marketplace/certified-operators-d4cwn" Dec 12 15:22:06 crc kubenswrapper[5123]: I1212 15:22:06.114513 5123 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/f8cdb4da-d02c-42f7-9f61-cb5e162d26a7-utilities\") pod \"redhat-marketplace-shltm\" (UID: \"f8cdb4da-d02c-42f7-9f61-cb5e162d26a7\") " pod="openshift-marketplace/redhat-marketplace-shltm" Dec 12 15:22:06 crc kubenswrapper[5123]: I1212 15:22:06.115723 5123 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-xjzm2\" (UniqueName: \"kubernetes.io/projected/78a70363-f10e-4d12-8279-c7f7f3b8402b-kube-api-access-xjzm2\") pod \"certified-operators-fjqk7\" (UID: \"78a70363-f10e-4d12-8279-c7f7f3b8402b\") " pod="openshift-marketplace/certified-operators-fjqk7" Dec 12 15:22:06 crc kubenswrapper[5123]: I1212 15:22:06.119122 5123 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 12 15:22:06 crc kubenswrapper[5123]: E1212 15:22:06.119863 5123 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-12 15:22:06.619816083 +0000 UTC m=+155.429768594 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 15:22:06 crc kubenswrapper[5123]: I1212 15:22:06.119968 5123 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/revision-pruner-6-crc" Dec 12 15:22:06 crc kubenswrapper[5123]: I1212 15:22:06.120988 5123 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-d4cwn" Dec 12 15:22:06 crc kubenswrapper[5123]: I1212 15:22:06.120983 5123 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-cnq9c container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 12 15:22:06 crc kubenswrapper[5123]: [-]has-synced failed: reason withheld Dec 12 15:22:06 crc kubenswrapper[5123]: [+]process-running ok Dec 12 15:22:06 crc kubenswrapper[5123]: healthz check failed Dec 12 15:22:06 crc kubenswrapper[5123]: I1212 15:22:06.122059 5123 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-cnq9c" podUID="dd669a9c-af5d-4084-bda4-81a455d4c281" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 12 15:22:06 crc kubenswrapper[5123]: I1212 15:22:06.189493 5123 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-bkd85\" (UniqueName: \"kubernetes.io/projected/a077f03f-9a73-4019-912b-e2ebdf5308a5-kube-api-access-bkd85\") pod \"redhat-operators-gbdrq\" (UID: \"a077f03f-9a73-4019-912b-e2ebdf5308a5\") " pod="openshift-marketplace/redhat-operators-gbdrq" Dec 12 15:22:06 crc kubenswrapper[5123]: I1212 15:22:06.195450 5123 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-zqtfc\" (UniqueName: \"kubernetes.io/projected/e501d3fb-0bf6-4f90-bafb-521b5f6c8b9e-kube-api-access-zqtfc\") pod \"redhat-marketplace-8rkl4\" (UID: \"e501d3fb-0bf6-4f90-bafb-521b5f6c8b9e\") " pod="openshift-marketplace/redhat-marketplace-8rkl4" Dec 12 15:22:06 crc kubenswrapper[5123]: I1212 15:22:06.293966 5123 util.go:30] "No sandbox for pod can 
be found. Need to start a new one" pod="openshift-marketplace/certified-operators-fjqk7" Dec 12 15:22:06 crc kubenswrapper[5123]: I1212 15:22:06.294282 5123 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-gbdrq" Dec 12 15:22:06 crc kubenswrapper[5123]: I1212 15:22:06.295485 5123 patch_prober.go:28] interesting pod/openshift-config-operator-5777786469-9j9pt container/openshift-config-operator namespace/openshift-config-operator: Liveness probe status=failure output="Get \"https://10.217.0.33:8443/healthz\": dial tcp 10.217.0.33:8443: connect: connection refused" start-of-body= Dec 12 15:22:06 crc kubenswrapper[5123]: I1212 15:22:06.295584 5123 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-config-operator/openshift-config-operator-5777786469-9j9pt" podUID="2c1e4fb9-bde9-46df-8ac0-c0b457ca767f" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.33:8443/healthz\": dial tcp 10.217.0.33:8443: connect: connection refused" Dec 12 15:22:06 crc kubenswrapper[5123]: I1212 15:22:06.302444 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-ts2mt\" (UID: \"40d826be-27f2-49a2-afa5-a6a9cc8f9bf6\") " pod="openshift-image-registry/image-registry-66587d64c8-ts2mt" Dec 12 15:22:07 crc kubenswrapper[5123]: E1212 15:22:07.412005 5123 kubelet.go:2642] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.772s" Dec 12 15:22:07 crc kubenswrapper[5123]: I1212 15:22:07.416884 5123 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-nsrz5\" (UniqueName: \"kubernetes.io/projected/320bf855-399c-4de0-bbbd-8dcdcb5d9e2a-kube-api-access-nsrz5\") pod \"redhat-operators-pkqnl\" 
(UID: \"320bf855-399c-4de0-bbbd-8dcdcb5d9e2a\") " pod="openshift-marketplace/redhat-operators-pkqnl" Dec 12 15:22:07 crc kubenswrapper[5123]: E1212 15:22:07.709572 5123 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-12 15:22:08.209535663 +0000 UTC m=+157.019488174 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-ts2mt" (UID: "40d826be-27f2-49a2-afa5-a6a9cc8f9bf6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 15:22:07 crc kubenswrapper[5123]: I1212 15:22:07.710311 5123 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8rkl4" Dec 12 15:22:07 crc kubenswrapper[5123]: I1212 15:22:07.711042 5123 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-ql4jg\" (UniqueName: \"kubernetes.io/projected/fb848e09-5c56-451f-a83b-d2e794432b47-kube-api-access-ql4jg\") pod \"community-operators-rqdb6\" (UID: \"fb848e09-5c56-451f-a83b-d2e794432b47\") " pod="openshift-marketplace/community-operators-rqdb6" Dec 12 15:22:07 crc kubenswrapper[5123]: I1212 15:22:07.713337 5123 patch_prober.go:28] interesting pod/openshift-config-operator-5777786469-9j9pt container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.217.0.33:8443/healthz\": dial tcp 10.217.0.33:8443: connect: connection refused" start-of-body= Dec 12 15:22:07 crc kubenswrapper[5123]: I1212 15:22:07.713475 5123 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-5777786469-9j9pt" podUID="2c1e4fb9-bde9-46df-8ac0-c0b457ca767f" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.33:8443/healthz\": dial tcp 10.217.0.33:8443: connect: connection refused" Dec 12 15:22:07 crc kubenswrapper[5123]: I1212 15:22:07.722384 5123 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 12 15:22:07 crc kubenswrapper[5123]: I1212 15:22:07.752475 5123 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-pkqnl" Dec 12 15:22:07 crc kubenswrapper[5123]: I1212 15:22:07.754660 5123 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-rqdb6" Dec 12 15:22:07 crc kubenswrapper[5123]: E1212 15:22:07.754742 5123 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-12 15:22:08.254703062 +0000 UTC m=+157.064655573 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 15:22:07 crc kubenswrapper[5123]: I1212 15:22:07.763764 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-ts2mt\" (UID: \"40d826be-27f2-49a2-afa5-a6a9cc8f9bf6\") " pod="openshift-image-registry/image-registry-66587d64c8-ts2mt" Dec 12 15:22:07 crc kubenswrapper[5123]: E1212 15:22:07.768207 5123 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-12 15:22:08.268185456 +0000 UTC m=+157.078137967 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-ts2mt" (UID: "40d826be-27f2-49a2-afa5-a6a9cc8f9bf6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 15:22:07 crc kubenswrapper[5123]: I1212 15:22:07.890915 5123 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 12 15:22:07 crc kubenswrapper[5123]: E1212 15:22:07.891457 5123 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-12 15:22:08.391429319 +0000 UTC m=+157.201381830 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 15:22:08 crc kubenswrapper[5123]: I1212 15:22:07.954123 5123 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-bm5mj\" (UniqueName: \"kubernetes.io/projected/f8cdb4da-d02c-42f7-9f61-cb5e162d26a7-kube-api-access-bm5mj\") pod \"redhat-marketplace-shltm\" (UID: \"f8cdb4da-d02c-42f7-9f61-cb5e162d26a7\") " pod="openshift-marketplace/redhat-marketplace-shltm" Dec 12 15:22:08 crc kubenswrapper[5123]: I1212 15:22:07.954422 5123 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-9ddfb9f55-bmckw" event={"ID":"e077c741-1ed0-4ffa-80a7-6ce54aab5fe0","Type":"ContainerStarted","Data":"fdf90a149b882c03803dcb034dd8db5b896a78e3d037d2d6734dbf318283ac9d"} Dec 12 15:22:08 crc kubenswrapper[5123]: I1212 15:22:07.954495 5123 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-mvm2v" event={"ID":"632abe1b-1a43-457c-86db-62fdb0572a0e","Type":"ContainerStarted","Data":"c10e252b391f41e2c1dc1e770418a530cccbeb0ad9c4ec0a64f1ab814c68862d"} Dec 12 15:22:08 crc kubenswrapper[5123]: I1212 15:22:08.033358 5123 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-shltm" Dec 12 15:22:08 crc kubenswrapper[5123]: I1212 15:22:08.034972 5123 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-jd7j9" event={"ID":"626346f0-e585-4a37-8c9b-c6e36ee113bc","Type":"ContainerStarted","Data":"97adf5867647edb19e0bb528ba76a4e914d578266fdc089d8f8271852def6c9d"} Dec 12 15:22:08 crc kubenswrapper[5123]: I1212 15:22:08.036424 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-ts2mt\" (UID: \"40d826be-27f2-49a2-afa5-a6a9cc8f9bf6\") " pod="openshift-image-registry/image-registry-66587d64c8-ts2mt" Dec 12 15:22:08 crc kubenswrapper[5123]: E1212 15:22:08.036868 5123 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-12 15:22:08.536849339 +0000 UTC m=+157.346801840 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-ts2mt" (UID: "40d826be-27f2-49a2-afa5-a6a9cc8f9bf6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 15:22:08 crc kubenswrapper[5123]: I1212 15:22:08.048337 5123 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-cnq9c container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 12 15:22:08 crc kubenswrapper[5123]: [-]has-synced failed: reason withheld Dec 12 15:22:08 crc kubenswrapper[5123]: [+]process-running ok Dec 12 15:22:08 crc kubenswrapper[5123]: healthz check failed Dec 12 15:22:08 crc kubenswrapper[5123]: I1212 15:22:08.048425 5123 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-cnq9c" podUID="dd669a9c-af5d-4084-bda4-81a455d4c281" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 12 15:22:08 crc kubenswrapper[5123]: I1212 15:22:08.405462 5123 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 12 15:22:08 crc kubenswrapper[5123]: E1212 15:22:08.407312 5123 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. 
No retries permitted until 2025-12-12 15:22:08.90727128 +0000 UTC m=+157.717223811 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 15:22:08 crc kubenswrapper[5123]: I1212 15:22:08.411179 5123 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-bbdv4" event={"ID":"788dd005-94a6-4a05-a0ce-c4dabe8dc04e","Type":"ContainerStarted","Data":"096634f7a5b337755976986ea5e83b0a167db519d9b1f2f56cd7d91158db4654"} Dec 12 15:22:08 crc kubenswrapper[5123]: I1212 15:22:08.414785 5123 patch_prober.go:28] interesting pod/openshift-config-operator-5777786469-9j9pt container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.217.0.33:8443/healthz\": dial tcp 10.217.0.33:8443: connect: connection refused" start-of-body= Dec 12 15:22:08 crc kubenswrapper[5123]: I1212 15:22:08.414860 5123 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-5777786469-9j9pt" podUID="2c1e4fb9-bde9-46df-8ac0-c0b457ca767f" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.33:8443/healthz\": dial tcp 10.217.0.33:8443: connect: connection refused" Dec 12 15:22:08 crc kubenswrapper[5123]: I1212 15:22:08.415003 5123 patch_prober.go:28] interesting pod/route-controller-manager-776cdc94d6-dc699 container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.26:8443/healthz\": dial tcp 
10.217.0.26:8443: connect: connection refused" start-of-body= Dec 12 15:22:08 crc kubenswrapper[5123]: I1212 15:22:08.415118 5123 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-dc699" podUID="5ccaedd0-63de-4f5b-9106-b556e01fa2b8" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.26:8443/healthz\": dial tcp 10.217.0.26:8443: connect: connection refused" Dec 12 15:22:08 crc kubenswrapper[5123]: I1212 15:22:08.534767 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-ts2mt\" (UID: \"40d826be-27f2-49a2-afa5-a6a9cc8f9bf6\") " pod="openshift-image-registry/image-registry-66587d64c8-ts2mt" Dec 12 15:22:08 crc kubenswrapper[5123]: E1212 15:22:08.539308 5123 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-12 15:22:09.039284488 +0000 UTC m=+157.849237009 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-ts2mt" (UID: "40d826be-27f2-49a2-afa5-a6a9cc8f9bf6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 15:22:08 crc kubenswrapper[5123]: I1212 15:22:08.637817 5123 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 12 15:22:08 crc kubenswrapper[5123]: E1212 15:22:08.638655 5123 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-12 15:22:09.138631541 +0000 UTC m=+157.948584052 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 15:22:08 crc kubenswrapper[5123]: I1212 15:22:08.825404 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-ts2mt\" (UID: \"40d826be-27f2-49a2-afa5-a6a9cc8f9bf6\") " pod="openshift-image-registry/image-registry-66587d64c8-ts2mt" Dec 12 15:22:08 crc kubenswrapper[5123]: E1212 15:22:08.825931 5123 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-12 15:22:09.325911926 +0000 UTC m=+158.135864437 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-ts2mt" (UID: "40d826be-27f2-49a2-afa5-a6a9cc8f9bf6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 15:22:08 crc kubenswrapper[5123]: I1212 15:22:08.958103 5123 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 12 15:22:08 crc kubenswrapper[5123]: E1212 15:22:08.958598 5123 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-12 15:22:09.458556215 +0000 UTC m=+158.268508726 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 15:22:09 crc kubenswrapper[5123]: I1212 15:22:09.008401 5123 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-cnq9c container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 12 15:22:09 crc kubenswrapper[5123]: [-]has-synced failed: reason withheld Dec 12 15:22:09 crc kubenswrapper[5123]: [+]process-running ok Dec 12 15:22:09 crc kubenswrapper[5123]: healthz check failed Dec 12 15:22:09 crc kubenswrapper[5123]: I1212 15:22:09.008534 5123 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-cnq9c" podUID="dd669a9c-af5d-4084-bda4-81a455d4c281" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 12 15:22:09 crc kubenswrapper[5123]: I1212 15:22:09.059983 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-ts2mt\" (UID: \"40d826be-27f2-49a2-afa5-a6a9cc8f9bf6\") " pod="openshift-image-registry/image-registry-66587d64c8-ts2mt" Dec 12 15:22:09 crc kubenswrapper[5123]: E1212 15:22:09.060492 5123 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. 
No retries permitted until 2025-12-12 15:22:09.560467457 +0000 UTC m=+158.370419978 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-ts2mt" (UID: "40d826be-27f2-49a2-afa5-a6a9cc8f9bf6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 15:22:09 crc kubenswrapper[5123]: I1212 15:22:09.166144 5123 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 12 15:22:09 crc kubenswrapper[5123]: E1212 15:22:09.166907 5123 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-12 15:22:09.666869371 +0000 UTC m=+158.476821882 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 15:22:09 crc kubenswrapper[5123]: I1212 15:22:09.243362 5123 patch_prober.go:28] interesting pod/openshift-config-operator-5777786469-9j9pt container/openshift-config-operator namespace/openshift-config-operator: Liveness probe status=failure output="Get \"https://10.217.0.33:8443/healthz\": dial tcp 10.217.0.33:8443: connect: connection refused" start-of-body= Dec 12 15:22:09 crc kubenswrapper[5123]: I1212 15:22:09.243593 5123 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-config-operator/openshift-config-operator-5777786469-9j9pt" podUID="2c1e4fb9-bde9-46df-8ac0-c0b457ca767f" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.33:8443/healthz\": dial tcp 10.217.0.33:8443: connect: connection refused" Dec 12 15:22:09 crc kubenswrapper[5123]: I1212 15:22:09.269135 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-ts2mt\" (UID: \"40d826be-27f2-49a2-afa5-a6a9cc8f9bf6\") " pod="openshift-image-registry/image-registry-66587d64c8-ts2mt" Dec 12 15:22:09 crc kubenswrapper[5123]: E1212 15:22:09.270020 5123 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. 
No retries permitted until 2025-12-12 15:22:09.769992702 +0000 UTC m=+158.579945213 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-ts2mt" (UID: "40d826be-27f2-49a2-afa5-a6a9cc8f9bf6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 15:22:09 crc kubenswrapper[5123]: I1212 15:22:09.371138 5123 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 12 15:22:09 crc kubenswrapper[5123]: E1212 15:22:09.372168 5123 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-12 15:22:09.872133622 +0000 UTC m=+158.682086133 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 15:22:09 crc kubenswrapper[5123]: I1212 15:22:09.477176 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-ts2mt\" (UID: \"40d826be-27f2-49a2-afa5-a6a9cc8f9bf6\") " pod="openshift-image-registry/image-registry-66587d64c8-ts2mt" Dec 12 15:22:09 crc kubenswrapper[5123]: E1212 15:22:09.477985 5123 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-12 15:22:09.977947357 +0000 UTC m=+158.787899868 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-ts2mt" (UID: "40d826be-27f2-49a2-afa5-a6a9cc8f9bf6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 15:22:09 crc kubenswrapper[5123]: I1212 15:22:09.574489 5123 patch_prober.go:28] interesting pod/downloads-747b44746d-xhd9t container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.21:8080/\": dial tcp 10.217.0.21:8080: connect: connection refused" start-of-body= Dec 12 15:22:09 crc kubenswrapper[5123]: I1212 15:22:09.574556 5123 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-747b44746d-xhd9t" podUID="09107a60-87da-4e17-9cc0-6dce06396ab6" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.21:8080/\": dial tcp 10.217.0.21:8080: connect: connection refused" Dec 12 15:22:09 crc kubenswrapper[5123]: I1212 15:22:09.574617 5123 kubelet.go:2658] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-console/downloads-747b44746d-xhd9t" Dec 12 15:22:09 crc kubenswrapper[5123]: I1212 15:22:09.575081 5123 kuberuntime_manager.go:1107] "Message for Container of pod" containerName="download-server" containerStatusID={"Type":"cri-o","ID":"9f50991963d4d04bcf2e4c9451b3fae1c9ded45ced042c68035628b937492228"} pod="openshift-console/downloads-747b44746d-xhd9t" containerMessage="Container download-server failed liveness probe, will be restarted" Dec 12 15:22:09 crc kubenswrapper[5123]: I1212 15:22:09.575135 5123 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-console/downloads-747b44746d-xhd9t" podUID="09107a60-87da-4e17-9cc0-6dce06396ab6" 
containerName="download-server" containerID="cri-o://9f50991963d4d04bcf2e4c9451b3fae1c9ded45ced042c68035628b937492228" gracePeriod=2 Dec 12 15:22:09 crc kubenswrapper[5123]: I1212 15:22:09.575451 5123 patch_prober.go:28] interesting pod/downloads-747b44746d-xhd9t container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.21:8080/\": dial tcp 10.217.0.21:8080: connect: connection refused" start-of-body= Dec 12 15:22:09 crc kubenswrapper[5123]: I1212 15:22:09.575470 5123 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-747b44746d-xhd9t" podUID="09107a60-87da-4e17-9cc0-6dce06396ab6" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.21:8080/\": dial tcp 10.217.0.21:8080: connect: connection refused" Dec 12 15:22:09 crc kubenswrapper[5123]: I1212 15:22:09.588501 5123 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-dns/dns-default-jd7j9" Dec 12 15:22:09 crc kubenswrapper[5123]: I1212 15:22:09.590509 5123 patch_prober.go:28] interesting pod/route-controller-manager-776cdc94d6-dc699 container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.26:8443/healthz\": dial tcp 10.217.0.26:8443: connect: connection refused" start-of-body= Dec 12 15:22:09 crc kubenswrapper[5123]: I1212 15:22:09.590586 5123 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-dc699" podUID="5ccaedd0-63de-4f5b-9106-b556e01fa2b8" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.26:8443/healthz\": dial tcp 10.217.0.26:8443: connect: connection refused" Dec 12 15:22:09 crc kubenswrapper[5123]: I1212 15:22:09.792958 5123 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" 
(UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 12 15:22:09 crc kubenswrapper[5123]: E1212 15:22:09.793202 5123 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-12 15:22:10.293161634 +0000 UTC m=+159.103114145 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 15:22:09 crc kubenswrapper[5123]: I1212 15:22:09.793738 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-ts2mt\" (UID: \"40d826be-27f2-49a2-afa5-a6a9cc8f9bf6\") " pod="openshift-image-registry/image-registry-66587d64c8-ts2mt" Dec 12 15:22:09 crc kubenswrapper[5123]: E1212 15:22:09.797288 5123 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-12 15:22:10.297262642 +0000 UTC m=+159.107215203 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-ts2mt" (UID: "40d826be-27f2-49a2-afa5-a6a9cc8f9bf6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 15:22:09 crc kubenswrapper[5123]: I1212 15:22:09.884495 5123 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-cnq9c container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 12 15:22:09 crc kubenswrapper[5123]: [-]has-synced failed: reason withheld Dec 12 15:22:09 crc kubenswrapper[5123]: [+]process-running ok Dec 12 15:22:09 crc kubenswrapper[5123]: healthz check failed Dec 12 15:22:09 crc kubenswrapper[5123]: I1212 15:22:09.884921 5123 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-cnq9c" podUID="dd669a9c-af5d-4084-bda4-81a455d4c281" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 12 15:22:09 crc kubenswrapper[5123]: I1212 15:22:09.901991 5123 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 12 15:22:09 crc kubenswrapper[5123]: E1212 15:22:09.902640 5123 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. 
No retries permitted until 2025-12-12 15:22:10.402610663 +0000 UTC m=+159.212563174 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 15:22:10 crc kubenswrapper[5123]: I1212 15:22:10.080327 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-ts2mt\" (UID: \"40d826be-27f2-49a2-afa5-a6a9cc8f9bf6\") " pod="openshift-image-registry/image-registry-66587d64c8-ts2mt" Dec 12 15:22:10 crc kubenswrapper[5123]: E1212 15:22:10.080763 5123 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-12 15:22:10.580743192 +0000 UTC m=+159.390695703 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-ts2mt" (UID: "40d826be-27f2-49a2-afa5-a6a9cc8f9bf6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 15:22:10 crc kubenswrapper[5123]: I1212 15:22:10.181367 5123 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 12 15:22:10 crc kubenswrapper[5123]: E1212 15:22:10.181821 5123 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-12 15:22:10.681795957 +0000 UTC m=+159.491748458 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 15:22:10 crc kubenswrapper[5123]: I1212 15:22:10.271620 5123 patch_prober.go:28] interesting pod/console-64d44f6ddf-96rdx container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.8:8443/health\": dial tcp 10.217.0.8:8443: connect: connection refused" start-of-body= Dec 12 15:22:10 crc kubenswrapper[5123]: I1212 15:22:10.271713 5123 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-console/console-64d44f6ddf-96rdx" podUID="7ff811e4-3864-456b-8e00-b9e2d1c49ed8" containerName="console" probeResult="failure" output="Get \"https://10.217.0.8:8443/health\": dial tcp 10.217.0.8:8443: connect: connection refused" Dec 12 15:22:10 crc kubenswrapper[5123]: I1212 15:22:10.293508 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-ts2mt\" (UID: \"40d826be-27f2-49a2-afa5-a6a9cc8f9bf6\") " pod="openshift-image-registry/image-registry-66587d64c8-ts2mt" Dec 12 15:22:10 crc kubenswrapper[5123]: E1212 15:22:10.294277 5123 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-12 15:22:10.794258042 +0000 UTC m=+159.604210553 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-ts2mt" (UID: "40d826be-27f2-49a2-afa5-a6a9cc8f9bf6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 15:22:10 crc kubenswrapper[5123]: I1212 15:22:10.397037 5123 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 12 15:22:10 crc kubenswrapper[5123]: E1212 15:22:10.397448 5123 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-12 15:22:10.897421294 +0000 UTC m=+159.707373805 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 15:22:10 crc kubenswrapper[5123]: I1212 15:22:10.501022 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-ts2mt\" (UID: \"40d826be-27f2-49a2-afa5-a6a9cc8f9bf6\") " pod="openshift-image-registry/image-registry-66587d64c8-ts2mt" Dec 12 15:22:10 crc kubenswrapper[5123]: I1212 15:22:10.502042 5123 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-11-crc"] Dec 12 15:22:10 crc kubenswrapper[5123]: E1212 15:22:10.506838 5123 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-12 15:22:11.006813802 +0000 UTC m=+159.816766313 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-ts2mt" (UID: "40d826be-27f2-49a2-afa5-a6a9cc8f9bf6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 15:22:10 crc kubenswrapper[5123]: I1212 15:22:10.552669 5123 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-11-crc" Dec 12 15:22:10 crc kubenswrapper[5123]: I1212 15:22:10.554870 5123 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-11-crc"] Dec 12 15:22:10 crc kubenswrapper[5123]: I1212 15:22:10.615052 5123 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 12 15:22:10 crc kubenswrapper[5123]: I1212 15:22:10.615304 5123 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/4ee797cc-c90f-4f08-ad20-2dcbdd16ab4f-kube-api-access\") pod \"revision-pruner-11-crc\" (UID: \"4ee797cc-c90f-4f08-ad20-2dcbdd16ab4f\") " pod="openshift-kube-apiserver/revision-pruner-11-crc" Dec 12 15:22:10 crc kubenswrapper[5123]: I1212 15:22:10.615362 5123 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/4ee797cc-c90f-4f08-ad20-2dcbdd16ab4f-kubelet-dir\") pod \"revision-pruner-11-crc\" (UID: \"4ee797cc-c90f-4f08-ad20-2dcbdd16ab4f\") " 
pod="openshift-kube-apiserver/revision-pruner-11-crc" Dec 12 15:22:10 crc kubenswrapper[5123]: E1212 15:22:10.615586 5123 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-12 15:22:11.115547419 +0000 UTC m=+159.925499930 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 15:22:10 crc kubenswrapper[5123]: I1212 15:22:10.622434 5123 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-apiserver\"/\"installer-sa-dockercfg-bqqnb\"" Dec 12 15:22:10 crc kubenswrapper[5123]: I1212 15:22:10.622875 5123 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-apiserver\"/\"kube-root-ca.crt\"" Dec 12 15:22:10 crc kubenswrapper[5123]: I1212 15:22:10.633275 5123 generic.go:358] "Generic (PLEG): container finished" podID="09107a60-87da-4e17-9cc0-6dce06396ab6" containerID="9f50991963d4d04bcf2e4c9451b3fae1c9ded45ced042c68035628b937492228" exitCode=0 Dec 12 15:22:10 crc kubenswrapper[5123]: I1212 15:22:10.633770 5123 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-747b44746d-xhd9t" event={"ID":"09107a60-87da-4e17-9cc0-6dce06396ab6","Type":"ContainerDied","Data":"9f50991963d4d04bcf2e4c9451b3fae1c9ded45ced042c68035628b937492228"} Dec 12 15:22:10 crc kubenswrapper[5123]: I1212 15:22:10.672422 5123 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" 
status="ready" pod="openshift-dns/dns-default-jd7j9" Dec 12 15:22:10 crc kubenswrapper[5123]: I1212 15:22:10.718102 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-ts2mt\" (UID: \"40d826be-27f2-49a2-afa5-a6a9cc8f9bf6\") " pod="openshift-image-registry/image-registry-66587d64c8-ts2mt" Dec 12 15:22:10 crc kubenswrapper[5123]: I1212 15:22:10.718464 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/4ee797cc-c90f-4f08-ad20-2dcbdd16ab4f-kube-api-access\") pod \"revision-pruner-11-crc\" (UID: \"4ee797cc-c90f-4f08-ad20-2dcbdd16ab4f\") " pod="openshift-kube-apiserver/revision-pruner-11-crc" Dec 12 15:22:10 crc kubenswrapper[5123]: I1212 15:22:10.719587 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/4ee797cc-c90f-4f08-ad20-2dcbdd16ab4f-kubelet-dir\") pod \"revision-pruner-11-crc\" (UID: \"4ee797cc-c90f-4f08-ad20-2dcbdd16ab4f\") " pod="openshift-kube-apiserver/revision-pruner-11-crc" Dec 12 15:22:10 crc kubenswrapper[5123]: I1212 15:22:10.720289 5123 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/4ee797cc-c90f-4f08-ad20-2dcbdd16ab4f-kubelet-dir\") pod \"revision-pruner-11-crc\" (UID: \"4ee797cc-c90f-4f08-ad20-2dcbdd16ab4f\") " pod="openshift-kube-apiserver/revision-pruner-11-crc" Dec 12 15:22:10 crc kubenswrapper[5123]: E1212 15:22:10.720429 5123 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. 
No retries permitted until 2025-12-12 15:22:11.220410464 +0000 UTC m=+160.030362965 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-ts2mt" (UID: "40d826be-27f2-49a2-afa5-a6a9cc8f9bf6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 15:22:10 crc kubenswrapper[5123]: I1212 15:22:10.747513 5123 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-cnq9c container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 12 15:22:10 crc kubenswrapper[5123]: [-]has-synced failed: reason withheld Dec 12 15:22:10 crc kubenswrapper[5123]: [+]process-running ok Dec 12 15:22:10 crc kubenswrapper[5123]: healthz check failed Dec 12 15:22:10 crc kubenswrapper[5123]: I1212 15:22:10.747598 5123 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-cnq9c" podUID="dd669a9c-af5d-4084-bda4-81a455d4c281" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 12 15:22:10 crc kubenswrapper[5123]: I1212 15:22:10.940445 5123 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 12 15:22:10 crc kubenswrapper[5123]: E1212 15:22:10.940942 5123 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-12 15:22:11.440908304 +0000 UTC m=+160.250860815 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 15:22:10 crc kubenswrapper[5123]: I1212 15:22:10.947685 5123 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-t8xgq" podStartSLOduration=126.947622035 podStartE2EDuration="2m6.947622035s" podCreationTimestamp="2025-12-12 15:20:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 15:22:10.678505537 +0000 UTC m=+159.488458048" watchObservedRunningTime="2025-12-12 15:22:10.947622035 +0000 UTC m=+159.757574546" Dec 12 15:22:10 crc kubenswrapper[5123]: I1212 15:22:10.976363 5123 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/4ee797cc-c90f-4f08-ad20-2dcbdd16ab4f-kube-api-access\") pod \"revision-pruner-11-crc\" (UID: \"4ee797cc-c90f-4f08-ad20-2dcbdd16ab4f\") " pod="openshift-kube-apiserver/revision-pruner-11-crc" Dec 12 15:22:11 crc kubenswrapper[5123]: I1212 15:22:11.061583 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-ts2mt\" (UID: \"40d826be-27f2-49a2-afa5-a6a9cc8f9bf6\") " pod="openshift-image-registry/image-registry-66587d64c8-ts2mt" Dec 12 15:22:11 crc kubenswrapper[5123]: E1212 15:22:11.062588 5123 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-12 15:22:11.562555667 +0000 UTC m=+160.372508178 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-ts2mt" (UID: "40d826be-27f2-49a2-afa5-a6a9cc8f9bf6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 15:22:11 crc kubenswrapper[5123]: I1212 15:22:11.063306 5123 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-dc699" podStartSLOduration=127.06328348 podStartE2EDuration="2m7.06328348s" podCreationTimestamp="2025-12-12 15:20:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 15:22:11.020585688 +0000 UTC m=+159.830538219" watchObservedRunningTime="2025-12-12 15:22:11.06328348 +0000 UTC m=+159.873235991" Dec 12 15:22:11 crc kubenswrapper[5123]: I1212 15:22:11.126661 5123 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns-operator/dns-operator-799b87ffcd-kvxss" podStartSLOduration=127.12662922 podStartE2EDuration="2m7.12662922s" podCreationTimestamp="2025-12-12 15:20:04 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 15:22:11.112135615 +0000 UTC m=+159.922088126" watchObservedRunningTime="2025-12-12 15:22:11.12662922 +0000 UTC m=+159.936581731" Dec 12 15:22:11 crc kubenswrapper[5123]: I1212 15:22:11.164041 5123 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 12 15:22:11 crc kubenswrapper[5123]: E1212 15:22:11.164461 5123 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-12 15:22:11.664433279 +0000 UTC m=+160.474385790 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 15:22:11 crc kubenswrapper[5123]: I1212 15:22:11.205460 5123 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-9qnbt" podStartSLOduration=127.205424427 podStartE2EDuration="2m7.205424427s" podCreationTimestamp="2025-12-12 15:20:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 15:22:11.191821939 +0000 UTC m=+160.001774470" watchObservedRunningTime="2025-12-12 15:22:11.205424427 +0000 UTC m=+160.015376938" Dec 12 15:22:11 crc kubenswrapper[5123]: I1212 15:22:11.255013 5123 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-11-crc" Dec 12 15:22:11 crc kubenswrapper[5123]: I1212 15:22:11.272323 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-ts2mt\" (UID: \"40d826be-27f2-49a2-afa5-a6a9cc8f9bf6\") " pod="openshift-image-registry/image-registry-66587d64c8-ts2mt" Dec 12 15:22:11 crc kubenswrapper[5123]: E1212 15:22:11.272837 5123 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. 
No retries permitted until 2025-12-12 15:22:11.772819675 +0000 UTC m=+160.582772186 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-ts2mt" (UID: "40d826be-27f2-49a2-afa5-a6a9cc8f9bf6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 15:22:11 crc kubenswrapper[5123]: I1212 15:22:11.273627 5123 patch_prober.go:28] interesting pod/console-operator-67c89758df-vqqzf container/console-operator namespace/openshift-console-operator: Liveness probe status=failure output="Get \"https://10.217.0.11:8443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Dec 12 15:22:11 crc kubenswrapper[5123]: I1212 15:22:11.273671 5123 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-console-operator/console-operator-67c89758df-vqqzf" podUID="9da0a55f-2526-45cc-b820-1b31ce63745c" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.11:8443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Dec 12 15:22:11 crc kubenswrapper[5123]: I1212 15:22:11.273733 5123 kubelet.go:2658] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-console-operator/console-operator-67c89758df-vqqzf" Dec 12 15:22:11 crc kubenswrapper[5123]: I1212 15:22:11.347944 5123 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console-operator/console-operator-67c89758df-vqqzf" Dec 12 15:22:11 crc kubenswrapper[5123]: I1212 15:22:11.355118 5123 kuberuntime_manager.go:1107] "Message for Container of pod" containerName="console-operator" 
containerStatusID={"Type":"cri-o","ID":"5d3099286f7fd25b3104336aaf7c27da1dad367a5c12eb074905fdaf34882398"} pod="openshift-console-operator/console-operator-67c89758df-vqqzf" containerMessage="Container console-operator failed liveness probe, will be restarted" Dec 12 15:22:11 crc kubenswrapper[5123]: I1212 15:22:11.355264 5123 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-console-operator/console-operator-67c89758df-vqqzf" podUID="9da0a55f-2526-45cc-b820-1b31ce63745c" containerName="console-operator" containerID="cri-o://5d3099286f7fd25b3104336aaf7c27da1dad367a5c12eb074905fdaf34882398" gracePeriod=30 Dec 12 15:22:11 crc kubenswrapper[5123]: I1212 15:22:11.374165 5123 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 12 15:22:11 crc kubenswrapper[5123]: E1212 15:22:11.374466 5123 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-12 15:22:11.874442099 +0000 UTC m=+160.684394610 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 15:22:11 crc kubenswrapper[5123]: I1212 15:22:11.438878 5123 patch_prober.go:28] interesting pod/openshift-config-operator-5777786469-9j9pt container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.217.0.33:8443/healthz\": dial tcp 10.217.0.33:8443: connect: connection refused" start-of-body= Dec 12 15:22:11 crc kubenswrapper[5123]: I1212 15:22:11.438980 5123 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-5777786469-9j9pt" podUID="2c1e4fb9-bde9-46df-8ac0-c0b457ca767f" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.33:8443/healthz\": dial tcp 10.217.0.33:8443: connect: connection refused" Dec 12 15:22:11 crc kubenswrapper[5123]: I1212 15:22:11.477524 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-ts2mt\" (UID: \"40d826be-27f2-49a2-afa5-a6a9cc8f9bf6\") " pod="openshift-image-registry/image-registry-66587d64c8-ts2mt" Dec 12 15:22:11 crc kubenswrapper[5123]: E1212 15:22:11.478433 5123 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. 
No retries permitted until 2025-12-12 15:22:11.978410216 +0000 UTC m=+160.788362787 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-ts2mt" (UID: "40d826be-27f2-49a2-afa5-a6a9cc8f9bf6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 15:22:11 crc kubenswrapper[5123]: I1212 15:22:11.537120 5123 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-bbdv4" podStartSLOduration=127.53709536 podStartE2EDuration="2m7.53709536s" podCreationTimestamp="2025-12-12 15:20:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 15:22:11.535253372 +0000 UTC m=+160.345205913" watchObservedRunningTime="2025-12-12 15:22:11.53709536 +0000 UTC m=+160.347047871" Dec 12 15:22:11 crc kubenswrapper[5123]: I1212 15:22:11.585898 5123 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 12 15:22:11 crc kubenswrapper[5123]: E1212 15:22:11.586737 5123 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-12 15:22:12.086710199 +0000 UTC m=+160.896662720 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 15:22:12 crc kubenswrapper[5123]: I1212 15:22:11.694995 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-ts2mt\" (UID: \"40d826be-27f2-49a2-afa5-a6a9cc8f9bf6\") " pod="openshift-image-registry/image-registry-66587d64c8-ts2mt" Dec 12 15:22:12 crc kubenswrapper[5123]: E1212 15:22:11.695787 5123 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-12 15:22:12.195762446 +0000 UTC m=+161.005714967 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-ts2mt" (UID: "40d826be-27f2-49a2-afa5-a6a9cc8f9bf6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 15:22:12 crc kubenswrapper[5123]: I1212 15:22:12.183841 5123 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 12 15:22:12 crc kubenswrapper[5123]: E1212 15:22:12.184654 5123 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-12 15:22:12.684619499 +0000 UTC m=+161.494572020 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 15:22:12 crc kubenswrapper[5123]: I1212 15:22:12.201961 5123 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-cnq9c container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 12 15:22:12 crc kubenswrapper[5123]: [-]has-synced failed: reason withheld Dec 12 15:22:12 crc kubenswrapper[5123]: [+]process-running ok Dec 12 15:22:12 crc kubenswrapper[5123]: healthz check failed Dec 12 15:22:12 crc kubenswrapper[5123]: I1212 15:22:12.202088 5123 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-cnq9c" podUID="dd669a9c-af5d-4084-bda4-81a455d4c281" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 12 15:22:12 crc kubenswrapper[5123]: I1212 15:22:12.236202 5123 patch_prober.go:28] interesting pod/marketplace-operator-547dbd544d-rkcvb container/marketplace-operator namespace/openshift-marketplace: Liveness probe status=failure output="Get \"http://10.217.0.36:8080/healthz\": dial tcp 10.217.0.36:8080: connect: connection refused" start-of-body= Dec 12 15:22:12 crc kubenswrapper[5123]: I1212 15:22:12.236329 5123 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-marketplace/marketplace-operator-547dbd544d-rkcvb" podUID="17ce8feb-99e5-42f3-a808-2dd39bc57377" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.36:8080/healthz\": dial tcp 
10.217.0.36:8080: connect: connection refused" Dec 12 15:22:12 crc kubenswrapper[5123]: I1212 15:22:12.237237 5123 patch_prober.go:28] interesting pod/openshift-config-operator-5777786469-9j9pt container/openshift-config-operator namespace/openshift-config-operator: Liveness probe status=failure output="Get \"https://10.217.0.33:8443/healthz\": dial tcp 10.217.0.33:8443: connect: connection refused" start-of-body= Dec 12 15:22:12 crc kubenswrapper[5123]: I1212 15:22:12.237273 5123 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-config-operator/openshift-config-operator-5777786469-9j9pt" podUID="2c1e4fb9-bde9-46df-8ac0-c0b457ca767f" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.33:8443/healthz\": dial tcp 10.217.0.33:8443: connect: connection refused" Dec 12 15:22:12 crc kubenswrapper[5123]: I1212 15:22:12.286613 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-ts2mt\" (UID: \"40d826be-27f2-49a2-afa5-a6a9cc8f9bf6\") " pod="openshift-image-registry/image-registry-66587d64c8-ts2mt" Dec 12 15:22:12 crc kubenswrapper[5123]: E1212 15:22:12.287536 5123 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-12 15:22:12.787506613 +0000 UTC m=+161.597459114 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-ts2mt" (UID: "40d826be-27f2-49a2-afa5-a6a9cc8f9bf6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 15:22:12 crc kubenswrapper[5123]: I1212 15:22:12.305887 5123 ???:1] "http: TLS handshake error from 192.168.126.11:35268: no serving certificate available for the kubelet" Dec 12 15:22:12 crc kubenswrapper[5123]: I1212 15:22:12.316990 5123 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-oauth-apiserver/apiserver-8596bd845d-qvmj6" Dec 12 15:22:12 crc kubenswrapper[5123]: I1212 15:22:12.317098 5123 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-sbt5r"] Dec 12 15:22:12 crc kubenswrapper[5123]: I1212 15:22:12.317136 5123 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-oauth-apiserver/apiserver-8596bd845d-qvmj6" Dec 12 15:22:12 crc kubenswrapper[5123]: I1212 15:22:12.317169 5123 kubelet.go:2658] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-config-operator/openshift-config-operator-5777786469-9j9pt" Dec 12 15:22:12 crc kubenswrapper[5123]: I1212 15:22:12.338449 5123 kuberuntime_manager.go:1107] "Message for Container of pod" containerName="openshift-config-operator" containerStatusID={"Type":"cri-o","ID":"e72a9e678ef8b6553b00a5a936fadc7ff84079c18dba69fa142128180872ad62"} pod="openshift-config-operator/openshift-config-operator-5777786469-9j9pt" containerMessage="Container openshift-config-operator failed liveness probe, will be restarted" Dec 12 15:22:12 crc kubenswrapper[5123]: I1212 15:22:12.338547 5123 kuberuntime_container.go:858] "Killing container with a grace 
period" pod="openshift-config-operator/openshift-config-operator-5777786469-9j9pt" podUID="2c1e4fb9-bde9-46df-8ac0-c0b457ca767f" containerName="openshift-config-operator" containerID="cri-o://e72a9e678ef8b6553b00a5a936fadc7ff84079c18dba69fa142128180872ad62" gracePeriod=30 Dec 12 15:22:12 crc kubenswrapper[5123]: E1212 15:22:12.354484 5123 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="98b4e7c2a0fa09f214b8cbc48f52ccee3a3c7806bdd076dfd394403ea4fe6e1f" cmd=["/bin/bash","-c","test -f /ready/ready"] Dec 12 15:22:12 crc kubenswrapper[5123]: E1212 15:22:12.385644 5123 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="98b4e7c2a0fa09f214b8cbc48f52ccee3a3c7806bdd076dfd394403ea4fe6e1f" cmd=["/bin/bash","-c","test -f /ready/ready"] Dec 12 15:22:12 crc kubenswrapper[5123]: I1212 15:22:12.388867 5123 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-9ddfb9f55-bmckw" event={"ID":"e077c741-1ed0-4ffa-80a7-6ce54aab5fe0","Type":"ContainerStarted","Data":"ea60a06601fede8b42b6e9793317410de4956b3ba709ae0d9cf2141b9ba906be"} Dec 12 15:22:12 crc kubenswrapper[5123]: I1212 15:22:12.393830 5123 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 12 15:22:12 crc kubenswrapper[5123]: E1212 15:22:12.396082 5123 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 
podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-12 15:22:12.896023533 +0000 UTC m=+161.705976044 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 15:22:12 crc kubenswrapper[5123]: I1212 15:22:12.404318 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-ts2mt\" (UID: \"40d826be-27f2-49a2-afa5-a6a9cc8f9bf6\") " pod="openshift-image-registry/image-registry-66587d64c8-ts2mt" Dec 12 15:22:12 crc kubenswrapper[5123]: E1212 15:22:12.407464 5123 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-12 15:22:12.907435892 +0000 UTC m=+161.717388403 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-ts2mt" (UID: "40d826be-27f2-49a2-afa5-a6a9cc8f9bf6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 15:22:12 crc kubenswrapper[5123]: I1212 15:22:12.423815 5123 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-mvm2v" podStartSLOduration=128.423781605 podStartE2EDuration="2m8.423781605s" podCreationTimestamp="2025-12-12 15:20:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 15:22:12.356847352 +0000 UTC m=+161.166799893" watchObservedRunningTime="2025-12-12 15:22:12.423781605 +0000 UTC m=+161.233734116" Dec 12 15:22:12 crc kubenswrapper[5123]: E1212 15:22:12.468859 5123 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="98b4e7c2a0fa09f214b8cbc48f52ccee3a3c7806bdd076dfd394403ea4fe6e1f" cmd=["/bin/bash","-c","test -f /ready/ready"] Dec 12 15:22:12 crc kubenswrapper[5123]: E1212 15:22:12.469024 5123 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openshift-multus/cni-sysctl-allowlist-ds-mdpg8" podUID="6eb483de-06e5-4975-b29a-7fd9bc7674a9" containerName="kube-multus-additional-cni-plugins" probeResult="unknown" Dec 12 15:22:12 crc kubenswrapper[5123]: I1212 15:22:12.506079 5123 reconciler_common.go:162] 
"operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 12 15:22:12 crc kubenswrapper[5123]: E1212 15:22:12.506716 5123 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-12 15:22:13.00668612 +0000 UTC m=+161.816638631 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 15:22:13 crc kubenswrapper[5123]: I1212 15:22:12.769250 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-ts2mt\" (UID: \"40d826be-27f2-49a2-afa5-a6a9cc8f9bf6\") " pod="openshift-image-registry/image-registry-66587d64c8-ts2mt" Dec 12 15:22:13 crc kubenswrapper[5123]: E1212 15:22:12.770019 5123 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-12 15:22:13.269986975 +0000 UTC m=+162.079939486 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-ts2mt" (UID: "40d826be-27f2-49a2-afa5-a6a9cc8f9bf6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 15:22:13 crc kubenswrapper[5123]: I1212 15:22:12.772994 5123 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/dns-default-jd7j9" podStartSLOduration=45.772958379 podStartE2EDuration="45.772958379s" podCreationTimestamp="2025-12-12 15:21:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 15:22:12.77078375 +0000 UTC m=+161.580736261" watchObservedRunningTime="2025-12-12 15:22:12.772958379 +0000 UTC m=+161.582910890" Dec 12 15:22:13 crc kubenswrapper[5123]: I1212 15:22:13.184181 5123 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-cnq9c container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 12 15:22:13 crc kubenswrapper[5123]: [-]has-synced failed: reason withheld Dec 12 15:22:13 crc kubenswrapper[5123]: [+]process-running ok Dec 12 15:22:13 crc kubenswrapper[5123]: healthz check failed Dec 12 15:22:13 crc kubenswrapper[5123]: I1212 15:22:13.184305 5123 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-cnq9c" podUID="dd669a9c-af5d-4084-bda4-81a455d4c281" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 12 15:22:13 crc kubenswrapper[5123]: I1212 15:22:13.192148 5123 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume 
\"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 12 15:22:13 crc kubenswrapper[5123]: E1212 15:22:13.192837 5123 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-12 15:22:13.692806563 +0000 UTC m=+162.502759074 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 15:22:13 crc kubenswrapper[5123]: I1212 15:22:13.255542 5123 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-oauth-apiserver/apiserver-8596bd845d-qvmj6" podStartSLOduration=129.255503054 podStartE2EDuration="2m9.255503054s" podCreationTimestamp="2025-12-12 15:20:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 15:22:13.192888386 +0000 UTC m=+162.002840917" watchObservedRunningTime="2025-12-12 15:22:13.255503054 +0000 UTC m=+162.065455625" Dec 12 15:22:13 crc kubenswrapper[5123]: I1212 15:22:13.287424 5123 patch_prober.go:28] interesting pod/openshift-config-operator-5777786469-9j9pt container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.217.0.33:8443/healthz\": read tcp 
10.217.0.2:42626->10.217.0.33:8443: read: connection reset by peer" start-of-body= Dec 12 15:22:13 crc kubenswrapper[5123]: I1212 15:22:13.287561 5123 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-5777786469-9j9pt" podUID="2c1e4fb9-bde9-46df-8ac0-c0b457ca767f" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.33:8443/healthz\": read tcp 10.217.0.2:42626->10.217.0.33:8443: read: connection reset by peer" Dec 12 15:22:13 crc kubenswrapper[5123]: I1212 15:22:13.296261 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-ts2mt\" (UID: \"40d826be-27f2-49a2-afa5-a6a9cc8f9bf6\") " pod="openshift-image-registry/image-registry-66587d64c8-ts2mt" Dec 12 15:22:13 crc kubenswrapper[5123]: E1212 15:22:13.296930 5123 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-12 15:22:13.796910515 +0000 UTC m=+162.606863026 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-ts2mt" (UID: "40d826be-27f2-49a2-afa5-a6a9cc8f9bf6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 15:22:13 crc kubenswrapper[5123]: I1212 15:22:13.397458 5123 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 12 15:22:13 crc kubenswrapper[5123]: E1212 15:22:13.398422 5123 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-12 15:22:13.898388514 +0000 UTC m=+162.708341025 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 15:22:13 crc kubenswrapper[5123]: I1212 15:22:13.483522 5123 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-hznms" Dec 12 15:22:13 crc kubenswrapper[5123]: I1212 15:22:13.504799 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-ts2mt\" (UID: \"40d826be-27f2-49a2-afa5-a6a9cc8f9bf6\") " pod="openshift-image-registry/image-registry-66587d64c8-ts2mt" Dec 12 15:22:13 crc kubenswrapper[5123]: E1212 15:22:13.506405 5123 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-12 15:22:14.006385788 +0000 UTC m=+162.816338299 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-ts2mt" (UID: "40d826be-27f2-49a2-afa5-a6a9cc8f9bf6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 15:22:13 crc kubenswrapper[5123]: I1212 15:22:13.550830 5123 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-config-operator_openshift-config-operator-5777786469-9j9pt_2c1e4fb9-bde9-46df-8ac0-c0b457ca767f/openshift-config-operator/0.log" Dec 12 15:22:13 crc kubenswrapper[5123]: I1212 15:22:13.551953 5123 generic.go:358] "Generic (PLEG): container finished" podID="2c1e4fb9-bde9-46df-8ac0-c0b457ca767f" containerID="e72a9e678ef8b6553b00a5a936fadc7ff84079c18dba69fa142128180872ad62" exitCode=255 Dec 12 15:22:13 crc kubenswrapper[5123]: I1212 15:22:13.552072 5123 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-5777786469-9j9pt" event={"ID":"2c1e4fb9-bde9-46df-8ac0-c0b457ca767f","Type":"ContainerDied","Data":"e72a9e678ef8b6553b00a5a936fadc7ff84079c18dba69fa142128180872ad62"} Dec 12 15:22:13 crc kubenswrapper[5123]: I1212 15:22:13.763688 5123 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 12 15:22:13 crc kubenswrapper[5123]: E1212 15:22:13.764099 5123 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" 
failed. No retries permitted until 2025-12-12 15:22:14.264076396 +0000 UTC m=+163.074028907 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 15:22:13 crc kubenswrapper[5123]: I1212 15:22:13.780296 5123 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-cnq9c container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 12 15:22:13 crc kubenswrapper[5123]: [-]has-synced failed: reason withheld Dec 12 15:22:13 crc kubenswrapper[5123]: [+]process-running ok Dec 12 15:22:13 crc kubenswrapper[5123]: healthz check failed Dec 12 15:22:13 crc kubenswrapper[5123]: I1212 15:22:13.780726 5123 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-cnq9c" podUID="dd669a9c-af5d-4084-bda4-81a455d4c281" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 12 15:22:13 crc kubenswrapper[5123]: I1212 15:22:13.828442 5123 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-sbt5r" event={"ID":"402bc75d-15b2-46d8-9455-d2d8c8c7c47a","Type":"ContainerStarted","Data":"56b1ed5c4799e17bec86d89b50121ffdc3f3db309d13cd5a777be83e2bf8a43e"} Dec 12 15:22:13 crc kubenswrapper[5123]: I1212 15:22:13.831786 5123 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-747b44746d-xhd9t" 
event={"ID":"09107a60-87da-4e17-9cc0-6dce06396ab6","Type":"ContainerStarted","Data":"8c5617ac20c35a29d82dc82e0862bfeca794ccb2aa8a303b5c55da1094a7bf3b"} Dec 12 15:22:13 crc kubenswrapper[5123]: I1212 15:22:13.861716 5123 patch_prober.go:28] interesting pod/downloads-747b44746d-xhd9t container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.21:8080/\": dial tcp 10.217.0.21:8080: connect: connection refused" start-of-body= Dec 12 15:22:13 crc kubenswrapper[5123]: I1212 15:22:13.861796 5123 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-747b44746d-xhd9t" podUID="09107a60-87da-4e17-9cc0-6dce06396ab6" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.21:8080/\": dial tcp 10.217.0.21:8080: connect: connection refused" Dec 12 15:22:13 crc kubenswrapper[5123]: I1212 15:22:13.869329 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-ts2mt\" (UID: \"40d826be-27f2-49a2-afa5-a6a9cc8f9bf6\") " pod="openshift-image-registry/image-registry-66587d64c8-ts2mt" Dec 12 15:22:13 crc kubenswrapper[5123]: E1212 15:22:13.869708 5123 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-12 15:22:14.369691895 +0000 UTC m=+163.179644406 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-ts2mt" (UID: "40d826be-27f2-49a2-afa5-a6a9cc8f9bf6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 15:22:13 crc kubenswrapper[5123]: I1212 15:22:13.875128 5123 generic.go:358] "Generic (PLEG): container finished" podID="9da0a55f-2526-45cc-b820-1b31ce63745c" containerID="5d3099286f7fd25b3104336aaf7c27da1dad367a5c12eb074905fdaf34882398" exitCode=0 Dec 12 15:22:13 crc kubenswrapper[5123]: I1212 15:22:13.876566 5123 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-67c89758df-vqqzf" event={"ID":"9da0a55f-2526-45cc-b820-1b31ce63745c","Type":"ContainerDied","Data":"5d3099286f7fd25b3104336aaf7c27da1dad367a5c12eb074905fdaf34882398"} Dec 12 15:22:13 crc kubenswrapper[5123]: I1212 15:22:13.925814 5123 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-fjqk7"] Dec 12 15:22:13 crc kubenswrapper[5123]: I1212 15:22:13.963306 5123 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-pkqnl"] Dec 12 15:22:14 crc kubenswrapper[5123]: I1212 15:22:14.147121 5123 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 12 15:22:14 crc kubenswrapper[5123]: E1212 15:22:14.151472 5123 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-12 15:22:14.6514376 +0000 UTC m=+163.461390121 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 15:22:14 crc kubenswrapper[5123]: I1212 15:22:14.184173 5123 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver/apiserver-9ddfb9f55-bmckw" podStartSLOduration=131.184145317 podStartE2EDuration="2m11.184145317s" podCreationTimestamp="2025-12-12 15:20:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 15:22:14.181030629 +0000 UTC m=+162.990983160" watchObservedRunningTime="2025-12-12 15:22:14.184145317 +0000 UTC m=+162.994097828" Dec 12 15:22:14 crc kubenswrapper[5123]: I1212 15:22:14.252338 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-ts2mt\" (UID: \"40d826be-27f2-49a2-afa5-a6a9cc8f9bf6\") " pod="openshift-image-registry/image-registry-66587d64c8-ts2mt" Dec 12 15:22:14 crc kubenswrapper[5123]: E1212 15:22:14.254831 5123 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: 
nodeName:}" failed. No retries permitted until 2025-12-12 15:22:14.754781638 +0000 UTC m=+163.564734149 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-ts2mt" (UID: "40d826be-27f2-49a2-afa5-a6a9cc8f9bf6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 15:22:14 crc kubenswrapper[5123]: I1212 15:22:14.353728 5123 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 12 15:22:14 crc kubenswrapper[5123]: E1212 15:22:14.354065 5123 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-12 15:22:14.854038506 +0000 UTC m=+163.663991017 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 15:22:14 crc kubenswrapper[5123]: I1212 15:22:14.422871 5123 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-d4cwn"] Dec 12 15:22:14 crc kubenswrapper[5123]: I1212 15:22:14.422974 5123 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-8rkl4"] Dec 12 15:22:14 crc kubenswrapper[5123]: I1212 15:22:14.462411 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-ts2mt\" (UID: \"40d826be-27f2-49a2-afa5-a6a9cc8f9bf6\") " pod="openshift-image-registry/image-registry-66587d64c8-ts2mt" Dec 12 15:22:14 crc kubenswrapper[5123]: E1212 15:22:14.463190 5123 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-12 15:22:14.963165126 +0000 UTC m=+163.773117637 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-ts2mt" (UID: "40d826be-27f2-49a2-afa5-a6a9cc8f9bf6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 15:22:14 crc kubenswrapper[5123]: W1212 15:22:14.473453 5123 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode501d3fb_0bf6_4f90_bafb_521b5f6c8b9e.slice/crio-2d57bc3c969d150a1e42c66401272e08ec36a042f14f42c2448d401aa89747ea WatchSource:0}: Error finding container 2d57bc3c969d150a1e42c66401272e08ec36a042f14f42c2448d401aa89747ea: Status 404 returned error can't find the container with id 2d57bc3c969d150a1e42c66401272e08ec36a042f14f42c2448d401aa89747ea Dec 12 15:22:14 crc kubenswrapper[5123]: I1212 15:22:14.545331 5123 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-gbdrq"] Dec 12 15:22:14 crc kubenswrapper[5123]: I1212 15:22:14.564482 5123 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 12 15:22:14 crc kubenswrapper[5123]: E1212 15:22:14.565019 5123 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-12 15:22:15.064978056 +0000 UTC m=+163.874930567 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 15:22:14 crc kubenswrapper[5123]: I1212 15:22:14.666616 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-ts2mt\" (UID: \"40d826be-27f2-49a2-afa5-a6a9cc8f9bf6\") " pod="openshift-image-registry/image-registry-66587d64c8-ts2mt" Dec 12 15:22:14 crc kubenswrapper[5123]: E1212 15:22:14.667166 5123 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-12 15:22:15.167141117 +0000 UTC m=+163.977093628 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-ts2mt" (UID: "40d826be-27f2-49a2-afa5-a6a9cc8f9bf6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 15:22:14 crc kubenswrapper[5123]: I1212 15:22:14.694621 5123 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-cnq9c container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 12 15:22:14 crc kubenswrapper[5123]: [-]has-synced failed: reason withheld Dec 12 15:22:14 crc kubenswrapper[5123]: [+]process-running ok Dec 12 15:22:14 crc kubenswrapper[5123]: healthz check failed Dec 12 15:22:14 crc kubenswrapper[5123]: I1212 15:22:14.694718 5123 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-cnq9c" podUID="dd669a9c-af5d-4084-bda4-81a455d4c281" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 12 15:22:14 crc kubenswrapper[5123]: I1212 15:22:14.725034 5123 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-pj4ts" Dec 12 15:22:14 crc kubenswrapper[5123]: I1212 15:22:14.735609 5123 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-547dbd544d-rkcvb" Dec 12 15:22:14 crc kubenswrapper[5123]: I1212 15:22:14.782625 5123 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") 
pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 12 15:22:14 crc kubenswrapper[5123]: E1212 15:22:14.785062 5123 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-12 15:22:15.285034061 +0000 UTC m=+164.094986562 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 15:22:14 crc kubenswrapper[5123]: I1212 15:22:14.892692 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-ts2mt\" (UID: \"40d826be-27f2-49a2-afa5-a6a9cc8f9bf6\") " pod="openshift-image-registry/image-registry-66587d64c8-ts2mt" Dec 12 15:22:14 crc kubenswrapper[5123]: E1212 15:22:14.894807 5123 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-12 15:22:15.39477476 +0000 UTC m=+164.204727271 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-ts2mt" (UID: "40d826be-27f2-49a2-afa5-a6a9cc8f9bf6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 15:22:14 crc kubenswrapper[5123]: I1212 15:22:14.996784 5123 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-rqdb6"] Dec 12 15:22:14 crc kubenswrapper[5123]: I1212 15:22:14.998101 5123 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 12 15:22:14 crc kubenswrapper[5123]: E1212 15:22:14.998693 5123 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-12 15:22:15.498668945 +0000 UTC m=+164.308621456 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 15:22:15 crc kubenswrapper[5123]: I1212 15:22:15.001109 5123 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler/revision-pruner-6-crc"] Dec 12 15:22:15 crc kubenswrapper[5123]: I1212 15:22:15.013290 5123 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-11-crc"] Dec 12 15:22:15 crc kubenswrapper[5123]: I1212 15:22:15.017164 5123 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-shltm"] Dec 12 15:22:15 crc kubenswrapper[5123]: I1212 15:22:15.022571 5123 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-oauth-apiserver/apiserver-8596bd845d-qvmj6" Dec 12 15:22:15 crc kubenswrapper[5123]: I1212 15:22:15.027152 5123 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-pkqnl" event={"ID":"320bf855-399c-4de0-bbbd-8dcdcb5d9e2a","Type":"ContainerStarted","Data":"a9b57ebe925462c47f978d2cff3e2c22255c7a2ca7cfb54f8976a03c006fd74c"} Dec 12 15:22:15 crc kubenswrapper[5123]: I1212 15:22:15.060411 5123 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-67c89758df-vqqzf" event={"ID":"9da0a55f-2526-45cc-b820-1b31ce63745c","Type":"ContainerStarted","Data":"e25515e4a977c08d1ab2ec5d527e004f5dd8e37807559658aa99bac61781fbd9"} Dec 12 15:22:15 crc kubenswrapper[5123]: I1212 15:22:15.061742 5123 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" 
pod="openshift-console-operator/console-operator-67c89758df-vqqzf" Dec 12 15:22:15 crc kubenswrapper[5123]: I1212 15:22:15.062682 5123 patch_prober.go:28] interesting pod/console-operator-67c89758df-vqqzf container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.11:8443/readyz\": dial tcp 10.217.0.11:8443: connect: connection refused" start-of-body= Dec 12 15:22:15 crc kubenswrapper[5123]: I1212 15:22:15.062741 5123 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-67c89758df-vqqzf" podUID="9da0a55f-2526-45cc-b820-1b31ce63745c" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.11:8443/readyz\": dial tcp 10.217.0.11:8443: connect: connection refused" Dec 12 15:22:15 crc kubenswrapper[5123]: I1212 15:22:15.063390 5123 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-gbdrq" event={"ID":"a077f03f-9a73-4019-912b-e2ebdf5308a5","Type":"ContainerStarted","Data":"c4edcf1dd0128f0a8def4d9b1e0f3faa94c554c18dbc4517cf0bd8202b55c09d"} Dec 12 15:22:15 crc kubenswrapper[5123]: I1212 15:22:15.065776 5123 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-fjqk7" event={"ID":"78a70363-f10e-4d12-8279-c7f7f3b8402b","Type":"ContainerStarted","Data":"6258ab63449fccde638e76827424295ac034d2404fbcc7f6880beb215d11fc41"} Dec 12 15:22:15 crc kubenswrapper[5123]: I1212 15:22:15.067318 5123 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8rkl4" event={"ID":"e501d3fb-0bf6-4f90-bafb-521b5f6c8b9e","Type":"ContainerStarted","Data":"2d57bc3c969d150a1e42c66401272e08ec36a042f14f42c2448d401aa89747ea"} Dec 12 15:22:15 crc kubenswrapper[5123]: I1212 15:22:15.069783 5123 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-config-operator_openshift-config-operator-5777786469-9j9pt_2c1e4fb9-bde9-46df-8ac0-c0b457ca767f/openshift-config-operator/0.log" Dec 12 15:22:15 crc kubenswrapper[5123]: I1212 15:22:15.070210 5123 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-5777786469-9j9pt" event={"ID":"2c1e4fb9-bde9-46df-8ac0-c0b457ca767f","Type":"ContainerStarted","Data":"37cccfad311739104f7b5afd9300fd0812a8f085c6d0ac0b2938d3295540bcbe"} Dec 12 15:22:15 crc kubenswrapper[5123]: I1212 15:22:15.071500 5123 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-config-operator/openshift-config-operator-5777786469-9j9pt" Dec 12 15:22:15 crc kubenswrapper[5123]: I1212 15:22:15.073157 5123 generic.go:358] "Generic (PLEG): container finished" podID="402bc75d-15b2-46d8-9455-d2d8c8c7c47a" containerID="eb0261e1394c1df75125ea600bb9195f3a80bc3aa9101a3cc6ca496dbe71d071" exitCode=0 Dec 12 15:22:15 crc kubenswrapper[5123]: I1212 15:22:15.073319 5123 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-sbt5r" event={"ID":"402bc75d-15b2-46d8-9455-d2d8c8c7c47a","Type":"ContainerDied","Data":"eb0261e1394c1df75125ea600bb9195f3a80bc3aa9101a3cc6ca496dbe71d071"} Dec 12 15:22:15 crc kubenswrapper[5123]: I1212 15:22:15.082469 5123 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-d4cwn" event={"ID":"c772c7c7-2e1a-46a6-9b7d-e07aa2522d56","Type":"ContainerStarted","Data":"0a26bab4e793dea4776bc49dfa5b80efc381a66651e4316acd1b779023977569"} Dec 12 15:22:15 crc kubenswrapper[5123]: I1212 15:22:15.082582 5123 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-console/downloads-747b44746d-xhd9t" Dec 12 15:22:15 crc kubenswrapper[5123]: I1212 15:22:15.082663 5123 patch_prober.go:28] interesting pod/downloads-747b44746d-xhd9t container/download-server namespace/openshift-console: 
Readiness probe status=failure output="Get \"http://10.217.0.21:8080/\": dial tcp 10.217.0.21:8080: connect: connection refused" start-of-body= Dec 12 15:22:15 crc kubenswrapper[5123]: I1212 15:22:15.082704 5123 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-747b44746d-xhd9t" podUID="09107a60-87da-4e17-9cc0-6dce06396ab6" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.21:8080/\": dial tcp 10.217.0.21:8080: connect: connection refused" Dec 12 15:22:15 crc kubenswrapper[5123]: I1212 15:22:15.111653 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-ts2mt\" (UID: \"40d826be-27f2-49a2-afa5-a6a9cc8f9bf6\") " pod="openshift-image-registry/image-registry-66587d64c8-ts2mt" Dec 12 15:22:15 crc kubenswrapper[5123]: E1212 15:22:15.113153 5123 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-12 15:22:15.613124272 +0000 UTC m=+164.423076783 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-ts2mt" (UID: "40d826be-27f2-49a2-afa5-a6a9cc8f9bf6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 15:22:15 crc kubenswrapper[5123]: I1212 15:22:15.165703 5123 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-oauth-apiserver/apiserver-8596bd845d-qvmj6" Dec 12 15:22:15 crc kubenswrapper[5123]: I1212 15:22:15.216970 5123 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 12 15:22:15 crc kubenswrapper[5123]: E1212 15:22:15.222664 5123 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-12 15:22:15.722615353 +0000 UTC m=+164.532567874 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 15:22:15 crc kubenswrapper[5123]: I1212 15:22:15.322452 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-ts2mt\" (UID: \"40d826be-27f2-49a2-afa5-a6a9cc8f9bf6\") " pod="openshift-image-registry/image-registry-66587d64c8-ts2mt" Dec 12 15:22:15 crc kubenswrapper[5123]: E1212 15:22:15.322910 5123 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-12 15:22:15.822892524 +0000 UTC m=+164.632845035 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-ts2mt" (UID: "40d826be-27f2-49a2-afa5-a6a9cc8f9bf6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 15:22:15 crc kubenswrapper[5123]: I1212 15:22:15.425152 5123 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 12 15:22:15 crc kubenswrapper[5123]: E1212 15:22:15.425645 5123 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-12 15:22:15.925621393 +0000 UTC m=+164.735573904 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 15:22:15 crc kubenswrapper[5123]: I1212 15:22:15.527729 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-ts2mt\" (UID: \"40d826be-27f2-49a2-afa5-a6a9cc8f9bf6\") " pod="openshift-image-registry/image-registry-66587d64c8-ts2mt" Dec 12 15:22:15 crc kubenswrapper[5123]: E1212 15:22:15.528115 5123 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-12 15:22:16.028098053 +0000 UTC m=+164.838050564 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-ts2mt" (UID: "40d826be-27f2-49a2-afa5-a6a9cc8f9bf6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 15:22:15 crc kubenswrapper[5123]: I1212 15:22:15.641364 5123 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 12 15:22:15 crc kubenswrapper[5123]: E1212 15:22:15.641652 5123 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-12 15:22:16.141606461 +0000 UTC m=+164.951558982 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 15:22:15 crc kubenswrapper[5123]: I1212 15:22:15.641759 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-ts2mt\" (UID: \"40d826be-27f2-49a2-afa5-a6a9cc8f9bf6\") " pod="openshift-image-registry/image-registry-66587d64c8-ts2mt" Dec 12 15:22:15 crc kubenswrapper[5123]: E1212 15:22:15.642392 5123 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-12 15:22:16.142381175 +0000 UTC m=+164.952333686 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-ts2mt" (UID: "40d826be-27f2-49a2-afa5-a6a9cc8f9bf6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 15:22:15 crc kubenswrapper[5123]: I1212 15:22:15.737314 5123 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-cnq9c container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 12 15:22:15 crc kubenswrapper[5123]: [-]has-synced failed: reason withheld Dec 12 15:22:15 crc kubenswrapper[5123]: [+]process-running ok Dec 12 15:22:15 crc kubenswrapper[5123]: healthz check failed Dec 12 15:22:15 crc kubenswrapper[5123]: I1212 15:22:15.737435 5123 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-cnq9c" podUID="dd669a9c-af5d-4084-bda4-81a455d4c281" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 12 15:22:15 crc kubenswrapper[5123]: I1212 15:22:15.747050 5123 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 12 15:22:15 crc kubenswrapper[5123]: E1212 15:22:15.747625 5123 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. 
No retries permitted until 2025-12-12 15:22:16.247598702 +0000 UTC m=+165.057551213 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 15:22:16 crc kubenswrapper[5123]: I1212 15:22:16.076329 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-ts2mt\" (UID: \"40d826be-27f2-49a2-afa5-a6a9cc8f9bf6\") " pod="openshift-image-registry/image-registry-66587d64c8-ts2mt" Dec 12 15:22:16 crc kubenswrapper[5123]: I1212 15:22:16.076521 5123 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-apiserver/apiserver-9ddfb9f55-bmckw" Dec 12 15:22:16 crc kubenswrapper[5123]: I1212 15:22:16.076621 5123 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-apiserver/apiserver-9ddfb9f55-bmckw" Dec 12 15:22:16 crc kubenswrapper[5123]: E1212 15:22:16.076760 5123 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-12 15:22:16.576744665 +0000 UTC m=+165.386697176 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-ts2mt" (UID: "40d826be-27f2-49a2-afa5-a6a9cc8f9bf6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 15:22:16 crc kubenswrapper[5123]: I1212 15:22:16.182181 5123 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 12 15:22:16 crc kubenswrapper[5123]: E1212 15:22:16.189755 5123 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-12 15:22:16.689718106 +0000 UTC m=+165.499670617 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 15:22:16 crc kubenswrapper[5123]: I1212 15:22:16.284987 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-ts2mt\" (UID: \"40d826be-27f2-49a2-afa5-a6a9cc8f9bf6\") " pod="openshift-image-registry/image-registry-66587d64c8-ts2mt" Dec 12 15:22:16 crc kubenswrapper[5123]: E1212 15:22:16.285617 5123 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-12 15:22:16.785497056 +0000 UTC m=+165.595449567 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-ts2mt" (UID: "40d826be-27f2-49a2-afa5-a6a9cc8f9bf6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 15:22:16 crc kubenswrapper[5123]: I1212 15:22:16.300140 5123 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-11-crc" event={"ID":"4ee797cc-c90f-4f08-ad20-2dcbdd16ab4f","Type":"ContainerStarted","Data":"40e73eb80cb2006d443c6ae71201796bf001ac67610d543fa39fa6cd8d434517"} Dec 12 15:22:16 crc kubenswrapper[5123]: I1212 15:22:16.322013 5123 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-shltm" event={"ID":"f8cdb4da-d02c-42f7-9f61-cb5e162d26a7","Type":"ContainerStarted","Data":"2ab0aff90c5e85fb5fff5087d4db5f0028230fac71d95e907064bbb3ad87537d"} Dec 12 15:22:16 crc kubenswrapper[5123]: I1212 15:22:16.329735 5123 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/revision-pruner-6-crc" event={"ID":"09029143-1cd7-445a-bcff-2e8cd5d5a8b9","Type":"ContainerStarted","Data":"9e270e7b5f59bce45ac70f9b6e74446ac499511192820eb24524bc5c2b9590bb"} Dec 12 15:22:16 crc kubenswrapper[5123]: I1212 15:22:16.339771 5123 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rqdb6" event={"ID":"fb848e09-5c56-451f-a83b-d2e794432b47","Type":"ContainerStarted","Data":"57e35175250b638d7ffb6699dc5d86241d085fa475cd04fa14368ed90243bcca"} Dec 12 15:22:16 crc kubenswrapper[5123]: I1212 15:22:16.376033 5123 generic.go:358] "Generic (PLEG): container finished" podID="c772c7c7-2e1a-46a6-9b7d-e07aa2522d56" containerID="99ab9f695d43d6110d72eff516a314cdc7a95bed2698616a7362090622b380d5" 
exitCode=0 Dec 12 15:22:16 crc kubenswrapper[5123]: I1212 15:22:16.376725 5123 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-d4cwn" event={"ID":"c772c7c7-2e1a-46a6-9b7d-e07aa2522d56","Type":"ContainerDied","Data":"99ab9f695d43d6110d72eff516a314cdc7a95bed2698616a7362090622b380d5"} Dec 12 15:22:16 crc kubenswrapper[5123]: I1212 15:22:16.388282 5123 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 12 15:22:16 crc kubenswrapper[5123]: E1212 15:22:16.390474 5123 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-12 15:22:16.890431133 +0000 UTC m=+165.700383654 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 15:22:16 crc kubenswrapper[5123]: I1212 15:22:16.392724 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-ts2mt\" (UID: \"40d826be-27f2-49a2-afa5-a6a9cc8f9bf6\") " pod="openshift-image-registry/image-registry-66587d64c8-ts2mt" Dec 12 15:22:16 crc kubenswrapper[5123]: E1212 15:22:16.393647 5123 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-12 15:22:16.893605453 +0000 UTC m=+165.703557964 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-ts2mt" (UID: "40d826be-27f2-49a2-afa5-a6a9cc8f9bf6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 12 15:22:16 crc kubenswrapper[5123]: I1212 15:22:16.412393 5123 generic.go:358] "Generic (PLEG): container finished" podID="320bf855-399c-4de0-bbbd-8dcdcb5d9e2a" containerID="851ae05989754b1224c751b772629f90c11c7d25f0df8fff866e189e65de6217" exitCode=0
Dec 12 15:22:16 crc kubenswrapper[5123]: I1212 15:22:16.413018 5123 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-pkqnl" event={"ID":"320bf855-399c-4de0-bbbd-8dcdcb5d9e2a","Type":"ContainerDied","Data":"851ae05989754b1224c751b772629f90c11c7d25f0df8fff866e189e65de6217"}
Dec 12 15:22:16 crc kubenswrapper[5123]: I1212 15:22:16.461526 5123 generic.go:358] "Generic (PLEG): container finished" podID="78a70363-f10e-4d12-8279-c7f7f3b8402b" containerID="d812f912bcaca7aa20084a44e018e53df18c8e6b9494e5a9a9881c25a467fbfd" exitCode=0
Dec 12 15:22:16 crc kubenswrapper[5123]: I1212 15:22:16.461629 5123 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-fjqk7" event={"ID":"78a70363-f10e-4d12-8279-c7f7f3b8402b","Type":"ContainerDied","Data":"d812f912bcaca7aa20084a44e018e53df18c8e6b9494e5a9a9881c25a467fbfd"}
Dec 12 15:22:16 crc kubenswrapper[5123]: I1212 15:22:16.467494 5123 patch_prober.go:28] interesting pod/downloads-747b44746d-xhd9t container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.21:8080/\": dial tcp 10.217.0.21:8080: connect: connection refused" start-of-body=
Dec 12 15:22:16 crc kubenswrapper[5123]: I1212 15:22:16.467773 5123 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-747b44746d-xhd9t" podUID="09107a60-87da-4e17-9cc0-6dce06396ab6" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.21:8080/\": dial tcp 10.217.0.21:8080: connect: connection refused"
Dec 12 15:22:16 crc kubenswrapper[5123]: I1212 15:22:16.514929 5123 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Dec 12 15:22:16 crc kubenswrapper[5123]: E1212 15:22:16.516134 5123 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-12 15:22:17.016075732 +0000 UTC m=+165.826028263 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 12 15:22:16 crc kubenswrapper[5123]: I1212 15:22:16.518857 5123 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console-operator/console-operator-67c89758df-vqqzf"
Dec 12 15:22:16 crc kubenswrapper[5123]: I1212 15:22:16.617760 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-ts2mt\" (UID: \"40d826be-27f2-49a2-afa5-a6a9cc8f9bf6\") " pod="openshift-image-registry/image-registry-66587d64c8-ts2mt"
Dec 12 15:22:16 crc kubenswrapper[5123]: E1212 15:22:16.629417 5123 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-12 15:22:17.129367702 +0000 UTC m=+165.939320213 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-ts2mt" (UID: "40d826be-27f2-49a2-afa5-a6a9cc8f9bf6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 12 15:22:16 crc kubenswrapper[5123]: I1212 15:22:16.719453 5123 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Dec 12 15:22:16 crc kubenswrapper[5123]: E1212 15:22:16.720877 5123 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-12 15:22:17.220826566 +0000 UTC m=+166.030779077 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 12 15:22:16 crc kubenswrapper[5123]: I1212 15:22:16.822542 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-ts2mt\" (UID: \"40d826be-27f2-49a2-afa5-a6a9cc8f9bf6\") " pod="openshift-image-registry/image-registry-66587d64c8-ts2mt"
Dec 12 15:22:16 crc kubenswrapper[5123]: E1212 15:22:16.823669 5123 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-12 15:22:17.323640387 +0000 UTC m=+166.133592898 (durationBeforeRetry 500ms).
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-ts2mt" (UID: "40d826be-27f2-49a2-afa5-a6a9cc8f9bf6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 12 15:22:16 crc kubenswrapper[5123]: I1212 15:22:16.884652 5123 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-cnq9c container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Dec 12 15:22:16 crc kubenswrapper[5123]: [-]has-synced failed: reason withheld
Dec 12 15:22:16 crc kubenswrapper[5123]: [+]process-running ok
Dec 12 15:22:16 crc kubenswrapper[5123]: healthz check failed
Dec 12 15:22:16 crc kubenswrapper[5123]: I1212 15:22:16.884770 5123 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-cnq9c" podUID="dd669a9c-af5d-4084-bda4-81a455d4c281" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Dec 12 15:22:16 crc kubenswrapper[5123]: I1212 15:22:16.934428 5123 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Dec 12 15:22:16 crc kubenswrapper[5123]: E1212 15:22:16.934843 5123 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-12 15:22:17.43476961 +0000 UTC m=+166.244722121 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 12 15:22:16 crc kubenswrapper[5123]: I1212 15:22:16.935839 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-ts2mt\" (UID: \"40d826be-27f2-49a2-afa5-a6a9cc8f9bf6\") " pod="openshift-image-registry/image-registry-66587d64c8-ts2mt"
Dec 12 15:22:16 crc kubenswrapper[5123]: E1212 15:22:16.937529 5123 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-12 15:22:17.437505006 +0000 UTC m=+166.247457517 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-ts2mt" (UID: "40d826be-27f2-49a2-afa5-a6a9cc8f9bf6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 12 15:22:17 crc kubenswrapper[5123]: I1212 15:22:17.037922 5123 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_cni-sysctl-allowlist-ds-mdpg8_6eb483de-06e5-4975-b29a-7fd9bc7674a9/kube-multus-additional-cni-plugins/0.log"
Dec 12 15:22:17 crc kubenswrapper[5123]: I1212 15:22:17.038035 5123 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-multus/cni-sysctl-allowlist-ds-mdpg8"
Dec 12 15:22:17 crc kubenswrapper[5123]: I1212 15:22:17.039382 5123 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Dec 12 15:22:17 crc kubenswrapper[5123]: E1212 15:22:17.040738 5123 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-12 15:22:17.540678109 +0000 UTC m=+166.350630620 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 12 15:22:17 crc kubenswrapper[5123]: I1212 15:22:17.141528 5123 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/6eb483de-06e5-4975-b29a-7fd9bc7674a9-cni-sysctl-allowlist\") pod \"6eb483de-06e5-4975-b29a-7fd9bc7674a9\" (UID: \"6eb483de-06e5-4975-b29a-7fd9bc7674a9\") "
Dec 12 15:22:17 crc kubenswrapper[5123]: I1212 15:22:17.141645 5123 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/6eb483de-06e5-4975-b29a-7fd9bc7674a9-tuning-conf-dir\") pod \"6eb483de-06e5-4975-b29a-7fd9bc7674a9\" (UID: \"6eb483de-06e5-4975-b29a-7fd9bc7674a9\") "
Dec 12 15:22:17 crc kubenswrapper[5123]: I1212 15:22:17.141864 5123 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6eb483de-06e5-4975-b29a-7fd9bc7674a9-tuning-conf-dir" (OuterVolumeSpecName: "tuning-conf-dir") pod "6eb483de-06e5-4975-b29a-7fd9bc7674a9" (UID: "6eb483de-06e5-4975-b29a-7fd9bc7674a9"). InnerVolumeSpecName "tuning-conf-dir".
PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Dec 12 15:22:17 crc kubenswrapper[5123]: I1212 15:22:17.142119 5123 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ready\" (UniqueName: \"kubernetes.io/empty-dir/6eb483de-06e5-4975-b29a-7fd9bc7674a9-ready\") pod \"6eb483de-06e5-4975-b29a-7fd9bc7674a9\" (UID: \"6eb483de-06e5-4975-b29a-7fd9bc7674a9\") "
Dec 12 15:22:17 crc kubenswrapper[5123]: I1212 15:22:17.142241 5123 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-44vvl\" (UniqueName: \"kubernetes.io/projected/6eb483de-06e5-4975-b29a-7fd9bc7674a9-kube-api-access-44vvl\") pod \"6eb483de-06e5-4975-b29a-7fd9bc7674a9\" (UID: \"6eb483de-06e5-4975-b29a-7fd9bc7674a9\") "
Dec 12 15:22:17 crc kubenswrapper[5123]: I1212 15:22:17.142798 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-ts2mt\" (UID: \"40d826be-27f2-49a2-afa5-a6a9cc8f9bf6\") " pod="openshift-image-registry/image-registry-66587d64c8-ts2mt"
Dec 12 15:22:17 crc kubenswrapper[5123]: I1212 15:22:17.142878 5123 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6eb483de-06e5-4975-b29a-7fd9bc7674a9-cni-sysctl-allowlist" (OuterVolumeSpecName: "cni-sysctl-allowlist") pod "6eb483de-06e5-4975-b29a-7fd9bc7674a9" (UID: "6eb483de-06e5-4975-b29a-7fd9bc7674a9"). InnerVolumeSpecName "cni-sysctl-allowlist". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Dec 12 15:22:17 crc kubenswrapper[5123]: I1212 15:22:17.142989 5123 reconciler_common.go:299] "Volume detached for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/6eb483de-06e5-4975-b29a-7fd9bc7674a9-tuning-conf-dir\") on node \"crc\" DevicePath \"\""
Dec 12 15:22:17 crc kubenswrapper[5123]: I1212 15:22:17.143198 5123 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6eb483de-06e5-4975-b29a-7fd9bc7674a9-ready" (OuterVolumeSpecName: "ready") pod "6eb483de-06e5-4975-b29a-7fd9bc7674a9" (UID: "6eb483de-06e5-4975-b29a-7fd9bc7674a9"). InnerVolumeSpecName "ready". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Dec 12 15:22:17 crc kubenswrapper[5123]: E1212 15:22:17.143297 5123 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-12 15:22:17.643279753 +0000 UTC m=+166.453232334 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-ts2mt" (UID: "40d826be-27f2-49a2-afa5-a6a9cc8f9bf6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 12 15:22:17 crc kubenswrapper[5123]: I1212 15:22:17.158585 5123 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6eb483de-06e5-4975-b29a-7fd9bc7674a9-kube-api-access-44vvl" (OuterVolumeSpecName: "kube-api-access-44vvl") pod "6eb483de-06e5-4975-b29a-7fd9bc7674a9" (UID: "6eb483de-06e5-4975-b29a-7fd9bc7674a9"). InnerVolumeSpecName "kube-api-access-44vvl". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Dec 12 15:22:17 crc kubenswrapper[5123]: I1212 15:22:17.244833 5123 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Dec 12 15:22:17 crc kubenswrapper[5123]: I1212 15:22:17.245441 5123 reconciler_common.go:299] "Volume detached for volume \"ready\" (UniqueName: \"kubernetes.io/empty-dir/6eb483de-06e5-4975-b29a-7fd9bc7674a9-ready\") on node \"crc\" DevicePath \"\""
Dec 12 15:22:17 crc kubenswrapper[5123]: I1212 15:22:17.245481 5123 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-44vvl\" (UniqueName: \"kubernetes.io/projected/6eb483de-06e5-4975-b29a-7fd9bc7674a9-kube-api-access-44vvl\") on node \"crc\" DevicePath \"\""
Dec 12 15:22:17 crc kubenswrapper[5123]: I1212 15:22:17.245513 5123 reconciler_common.go:299] "Volume detached for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/6eb483de-06e5-4975-b29a-7fd9bc7674a9-cni-sysctl-allowlist\") on node \"crc\" DevicePath \"\""
Dec 12 15:22:17 crc kubenswrapper[5123]: E1212 15:22:17.245652 5123 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-12 15:22:17.745612539 +0000 UTC m=+166.555565060 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 12 15:22:17 crc kubenswrapper[5123]: I1212 15:22:17.357097 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-ts2mt\" (UID: \"40d826be-27f2-49a2-afa5-a6a9cc8f9bf6\") " pod="openshift-image-registry/image-registry-66587d64c8-ts2mt"
Dec 12 15:22:17 crc kubenswrapper[5123]: E1212 15:22:17.357637 5123 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-12 15:22:17.857617699 +0000 UTC m=+166.667570210 (durationBeforeRetry 500ms).
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-ts2mt" (UID: "40d826be-27f2-49a2-afa5-a6a9cc8f9bf6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 12 15:22:17 crc kubenswrapper[5123]: I1212 15:22:17.458689 5123 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Dec 12 15:22:17 crc kubenswrapper[5123]: E1212 15:22:17.459269 5123 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-12 15:22:17.959198971 +0000 UTC m=+166.769151482 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 12 15:22:17 crc kubenswrapper[5123]: I1212 15:22:17.510255 5123 generic.go:358] "Generic (PLEG): container finished" podID="a077f03f-9a73-4019-912b-e2ebdf5308a5" containerID="37117c5d3fd92e669520d96463181bb4124525ab951b1ea7731721953cfb212b" exitCode=0
Dec 12 15:22:17 crc kubenswrapper[5123]: I1212 15:22:17.514208 5123 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-gbdrq" event={"ID":"a077f03f-9a73-4019-912b-e2ebdf5308a5","Type":"ContainerDied","Data":"37117c5d3fd92e669520d96463181bb4124525ab951b1ea7731721953cfb212b"}
Dec 12 15:22:17 crc kubenswrapper[5123]: I1212 15:22:17.539626 5123 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_cni-sysctl-allowlist-ds-mdpg8_6eb483de-06e5-4975-b29a-7fd9bc7674a9/kube-multus-additional-cni-plugins/0.log"
Dec 12 15:22:17 crc kubenswrapper[5123]: I1212 15:22:17.539706 5123 generic.go:358] "Generic (PLEG): container finished" podID="6eb483de-06e5-4975-b29a-7fd9bc7674a9" containerID="98b4e7c2a0fa09f214b8cbc48f52ccee3a3c7806bdd076dfd394403ea4fe6e1f" exitCode=137
Dec 12 15:22:17 crc kubenswrapper[5123]: I1212 15:22:17.539871 5123 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/cni-sysctl-allowlist-ds-mdpg8" event={"ID":"6eb483de-06e5-4975-b29a-7fd9bc7674a9","Type":"ContainerDied","Data":"98b4e7c2a0fa09f214b8cbc48f52ccee3a3c7806bdd076dfd394403ea4fe6e1f"}
Dec 12 15:22:17 crc kubenswrapper[5123]: I1212 15:22:17.539926 5123 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/cni-sysctl-allowlist-ds-mdpg8" event={"ID":"6eb483de-06e5-4975-b29a-7fd9bc7674a9","Type":"ContainerDied","Data":"6e0fc4e8fe9f057c2796b5c395dbb8c0c5d5ccda1bf3c99820bf70afaf916bc3"}
Dec 12 15:22:17 crc kubenswrapper[5123]: I1212 15:22:17.539962 5123 scope.go:117] "RemoveContainer" containerID="98b4e7c2a0fa09f214b8cbc48f52ccee3a3c7806bdd076dfd394403ea4fe6e1f"
Dec 12 15:22:17 crc kubenswrapper[5123]: I1212 15:22:17.540248 5123 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-multus/cni-sysctl-allowlist-ds-mdpg8"
Dec 12 15:22:17 crc kubenswrapper[5123]: I1212 15:22:17.565854 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-ts2mt\" (UID: \"40d826be-27f2-49a2-afa5-a6a9cc8f9bf6\") " pod="openshift-image-registry/image-registry-66587d64c8-ts2mt"
Dec 12 15:22:17 crc kubenswrapper[5123]: E1212 15:22:17.566808 5123 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-12 15:22:18.066787082 +0000 UTC m=+166.876739593 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-ts2mt" (UID: "40d826be-27f2-49a2-afa5-a6a9cc8f9bf6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 12 15:22:17 crc kubenswrapper[5123]: I1212 15:22:17.627785 5123 generic.go:358] "Generic (PLEG): container finished" podID="f8cdb4da-d02c-42f7-9f61-cb5e162d26a7" containerID="790c72f7b17342d101e778596657f24cb7f929d807e953f49644bf1833e91e29" exitCode=0
Dec 12 15:22:17 crc kubenswrapper[5123]: I1212 15:22:17.628715 5123 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-shltm" event={"ID":"f8cdb4da-d02c-42f7-9f61-cb5e162d26a7","Type":"ContainerDied","Data":"790c72f7b17342d101e778596657f24cb7f929d807e953f49644bf1833e91e29"}
Dec 12 15:22:17 crc kubenswrapper[5123]: I1212 15:22:17.631157 5123 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/revision-pruner-11-crc" podStartSLOduration=7.631131214 podStartE2EDuration="7.631131214s" podCreationTimestamp="2025-12-12 15:22:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 15:22:17.62687433 +0000 UTC m=+166.436826861" watchObservedRunningTime="2025-12-12 15:22:17.631131214 +0000 UTC m=+166.441083725"
Dec 12 15:22:17 crc kubenswrapper[5123]: I1212 15:22:17.671360 5123 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Dec 12 15:22:17 crc kubenswrapper[5123]: E1212 15:22:17.671947 5123 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-12 15:22:18.171913636 +0000 UTC m=+166.981866147 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 12 15:22:17 crc kubenswrapper[5123]: I1212 15:22:17.672007 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-ts2mt\" (UID: \"40d826be-27f2-49a2-afa5-a6a9cc8f9bf6\") " pod="openshift-image-registry/image-registry-66587d64c8-ts2mt"
Dec 12 15:22:17 crc kubenswrapper[5123]: E1212 15:22:17.673828 5123 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-12 15:22:18.173810886 +0000 UTC m=+166.983763397 (durationBeforeRetry 500ms).
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-ts2mt" (UID: "40d826be-27f2-49a2-afa5-a6a9cc8f9bf6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 12 15:22:17 crc kubenswrapper[5123]: I1212 15:22:17.682755 5123 generic.go:358] "Generic (PLEG): container finished" podID="e501d3fb-0bf6-4f90-bafb-521b5f6c8b9e" containerID="a99beb32dbeae741e5318257f071a1ec2230f26f292b5b0702ed34040521923d" exitCode=0
Dec 12 15:22:17 crc kubenswrapper[5123]: I1212 15:22:17.696372 5123 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-cnq9c container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Dec 12 15:22:17 crc kubenswrapper[5123]: [-]has-synced failed: reason withheld
Dec 12 15:22:17 crc kubenswrapper[5123]: [+]process-running ok
Dec 12 15:22:17 crc kubenswrapper[5123]: healthz check failed
Dec 12 15:22:17 crc kubenswrapper[5123]: I1212 15:22:17.696502 5123 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-cnq9c" podUID="dd669a9c-af5d-4084-bda4-81a455d4c281" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Dec 12 15:22:17 crc kubenswrapper[5123]: I1212 15:22:17.774698 5123 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Dec 12 15:22:17 crc kubenswrapper[5123]: E1212 15:22:17.776040 5123 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-12 15:22:18.276005207 +0000 UTC m=+167.085957718 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 12 15:22:17 crc kubenswrapper[5123]: I1212 15:22:17.808917 5123 scope.go:117] "RemoveContainer" containerID="98b4e7c2a0fa09f214b8cbc48f52ccee3a3c7806bdd076dfd394403ea4fe6e1f"
Dec 12 15:22:17 crc kubenswrapper[5123]: E1212 15:22:17.821948 5123 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"98b4e7c2a0fa09f214b8cbc48f52ccee3a3c7806bdd076dfd394403ea4fe6e1f\": container with ID starting with 98b4e7c2a0fa09f214b8cbc48f52ccee3a3c7806bdd076dfd394403ea4fe6e1f not found: ID does not exist" containerID="98b4e7c2a0fa09f214b8cbc48f52ccee3a3c7806bdd076dfd394403ea4fe6e1f"
Dec 12 15:22:17 crc kubenswrapper[5123]: I1212 15:22:17.822114 5123 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"98b4e7c2a0fa09f214b8cbc48f52ccee3a3c7806bdd076dfd394403ea4fe6e1f"} err="failed to get container status \"98b4e7c2a0fa09f214b8cbc48f52ccee3a3c7806bdd076dfd394403ea4fe6e1f\": rpc error: code = NotFound desc = could not find container \"98b4e7c2a0fa09f214b8cbc48f52ccee3a3c7806bdd076dfd394403ea4fe6e1f\": container with ID starting with 98b4e7c2a0fa09f214b8cbc48f52ccee3a3c7806bdd076dfd394403ea4fe6e1f not found: ID does not exist"
Dec 12 15:22:17 crc kubenswrapper[5123]: I1212 15:22:17.847048 5123 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8rkl4" event={"ID":"e501d3fb-0bf6-4f90-bafb-521b5f6c8b9e","Type":"ContainerDied","Data":"a99beb32dbeae741e5318257f071a1ec2230f26f292b5b0702ed34040521923d"}
Dec 12 15:22:17 crc kubenswrapper[5123]: I1212 15:22:17.847148 5123 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rqdb6" event={"ID":"fb848e09-5c56-451f-a83b-d2e794432b47","Type":"ContainerStarted","Data":"9873a93e06932c15ba7fcee0b9942311dee2e9920c3ff8c6411ec103090044c0"}
Dec 12 15:22:17 crc kubenswrapper[5123]: E1212 15:22:17.882132 5123 cadvisor_stats_provider.go:525] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6eb483de_06e5_4975_b29a_7fd9bc7674a9.slice/crio-6e0fc4e8fe9f057c2796b5c395dbb8c0c5d5ccda1bf3c99820bf70afaf916bc3\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6eb483de_06e5_4975_b29a_7fd9bc7674a9.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podfb848e09_5c56_451f_a83b_d2e794432b47.slice/crio-conmon-9873a93e06932c15ba7fcee0b9942311dee2e9920c3ff8c6411ec103090044c0.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podfb848e09_5c56_451f_a83b_d2e794432b47.slice/crio-9873a93e06932c15ba7fcee0b9942311dee2e9920c3ff8c6411ec103090044c0.scope\": RecentStats: unable to find data in memory cache]"
Dec 12 15:22:17 crc kubenswrapper[5123]: I1212 15:22:17.892807 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-ts2mt\" (UID: \"40d826be-27f2-49a2-afa5-a6a9cc8f9bf6\") " pod="openshift-image-registry/image-registry-66587d64c8-ts2mt"
Dec 12 15:22:17 crc kubenswrapper[5123]: E1212 15:22:17.893851 5123 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-12 15:22:18.39382801 +0000 UTC m=+167.203780521 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-ts2mt" (UID: "40d826be-27f2-49a2-afa5-a6a9cc8f9bf6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 12 15:22:17 crc kubenswrapper[5123]: I1212 15:22:17.994579 5123 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Dec 12 15:22:17 crc kubenswrapper[5123]: E1212 15:22:17.995450 5123 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-12 15:22:18.495403542 +0000 UTC m=+167.305356053 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 12 15:22:18 crc kubenswrapper[5123]: I1212 15:22:18.064823 5123 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-multus/cni-sysctl-allowlist-ds-mdpg8"]
Dec 12 15:22:18 crc kubenswrapper[5123]: I1212 15:22:18.071029 5123 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-multus/cni-sysctl-allowlist-ds-mdpg8"]
Dec 12 15:22:18 crc kubenswrapper[5123]: I1212 15:22:18.098415 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-ts2mt\" (UID: \"40d826be-27f2-49a2-afa5-a6a9cc8f9bf6\") " pod="openshift-image-registry/image-registry-66587d64c8-ts2mt"
Dec 12 15:22:18 crc kubenswrapper[5123]: E1212 15:22:18.098858 5123 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-12 15:22:18.598838553 +0000 UTC m=+167.408791114 (durationBeforeRetry 500ms).
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-ts2mt" (UID: "40d826be-27f2-49a2-afa5-a6a9cc8f9bf6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 15:22:18 crc kubenswrapper[5123]: I1212 15:22:18.200567 5123 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 12 15:22:18 crc kubenswrapper[5123]: E1212 15:22:18.201031 5123 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-12 15:22:18.700989922 +0000 UTC m=+167.510942433 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 15:22:18 crc kubenswrapper[5123]: I1212 15:22:18.303151 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-ts2mt\" (UID: \"40d826be-27f2-49a2-afa5-a6a9cc8f9bf6\") " pod="openshift-image-registry/image-registry-66587d64c8-ts2mt" Dec 12 15:22:18 crc kubenswrapper[5123]: E1212 15:22:18.303719 5123 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-12 15:22:18.803699361 +0000 UTC m=+167.613651872 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-ts2mt" (UID: "40d826be-27f2-49a2-afa5-a6a9cc8f9bf6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 15:22:18 crc kubenswrapper[5123]: I1212 15:22:18.529230 5123 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 12 15:22:18 crc kubenswrapper[5123]: E1212 15:22:18.529636 5123 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-12 15:22:19.029602998 +0000 UTC m=+167.839555519 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 15:22:18 crc kubenswrapper[5123]: I1212 15:22:18.655444 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-ts2mt\" (UID: \"40d826be-27f2-49a2-afa5-a6a9cc8f9bf6\") " pod="openshift-image-registry/image-registry-66587d64c8-ts2mt" Dec 12 15:22:18 crc kubenswrapper[5123]: E1212 15:22:18.656141 5123 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-12 15:22:19.156121676 +0000 UTC m=+167.966074177 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-ts2mt" (UID: "40d826be-27f2-49a2-afa5-a6a9cc8f9bf6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 15:22:18 crc kubenswrapper[5123]: I1212 15:22:18.688451 5123 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-cnq9c container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 12 15:22:18 crc kubenswrapper[5123]: [-]has-synced failed: reason withheld Dec 12 15:22:18 crc kubenswrapper[5123]: [+]process-running ok Dec 12 15:22:18 crc kubenswrapper[5123]: healthz check failed Dec 12 15:22:18 crc kubenswrapper[5123]: I1212 15:22:18.688617 5123 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-cnq9c" podUID="dd669a9c-af5d-4084-bda4-81a455d4c281" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 12 15:22:18 crc kubenswrapper[5123]: I1212 15:22:18.726124 5123 patch_prober.go:28] interesting pod/openshift-config-operator-5777786469-9j9pt container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.217.0.33:8443/healthz\": context deadline exceeded" start-of-body= Dec 12 15:22:18 crc kubenswrapper[5123]: I1212 15:22:18.726243 5123 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-5777786469-9j9pt" podUID="2c1e4fb9-bde9-46df-8ac0-c0b457ca767f" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.33:8443/healthz\": context 
deadline exceeded" Dec 12 15:22:19 crc kubenswrapper[5123]: I1212 15:22:19.087280 5123 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 12 15:22:19 crc kubenswrapper[5123]: E1212 15:22:19.087901 5123 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-12 15:22:19.587848044 +0000 UTC m=+168.397800555 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 15:22:19 crc kubenswrapper[5123]: I1212 15:22:19.114396 5123 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/revision-pruner-6-crc" event={"ID":"09029143-1cd7-445a-bcff-2e8cd5d5a8b9","Type":"ContainerStarted","Data":"7ded8173ab1b2cd634e956845e2874acd9535d92a81763ba4e069e186d5aa9f0"} Dec 12 15:22:19 crc kubenswrapper[5123]: I1212 15:22:19.120680 5123 generic.go:358] "Generic (PLEG): container finished" podID="fb848e09-5c56-451f-a83b-d2e794432b47" containerID="9873a93e06932c15ba7fcee0b9942311dee2e9920c3ff8c6411ec103090044c0" exitCode=0 Dec 12 15:22:19 crc kubenswrapper[5123]: I1212 15:22:19.121026 5123 kubelet.go:2569] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/community-operators-rqdb6" event={"ID":"fb848e09-5c56-451f-a83b-d2e794432b47","Type":"ContainerDied","Data":"9873a93e06932c15ba7fcee0b9942311dee2e9920c3ff8c6411ec103090044c0"} Dec 12 15:22:19 crc kubenswrapper[5123]: I1212 15:22:19.133678 5123 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-11-crc" event={"ID":"4ee797cc-c90f-4f08-ad20-2dcbdd16ab4f","Type":"ContainerStarted","Data":"b403c0a969caf6bcd8e5eba1fbf835dfe54c0840cefd7e0be46dd52e5f32a859"} Dec 12 15:22:19 crc kubenswrapper[5123]: I1212 15:22:19.146954 5123 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler/revision-pruner-6-crc" podStartSLOduration=17.146923042 podStartE2EDuration="17.146923042s" podCreationTimestamp="2025-12-12 15:22:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 15:22:19.138797286 +0000 UTC m=+167.948749817" watchObservedRunningTime="2025-12-12 15:22:19.146923042 +0000 UTC m=+167.956875553" Dec 12 15:22:19 crc kubenswrapper[5123]: I1212 15:22:19.190021 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-ts2mt\" (UID: \"40d826be-27f2-49a2-afa5-a6a9cc8f9bf6\") " pod="openshift-image-registry/image-registry-66587d64c8-ts2mt" Dec 12 15:22:19 crc kubenswrapper[5123]: E1212 15:22:19.190592 5123 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-12 15:22:19.690573736 +0000 UTC m=+168.500526237 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-ts2mt" (UID: "40d826be-27f2-49a2-afa5-a6a9cc8f9bf6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 15:22:19 crc kubenswrapper[5123]: I1212 15:22:19.314609 5123 patch_prober.go:28] interesting pod/openshift-config-operator-5777786469-9j9pt container/openshift-config-operator namespace/openshift-config-operator: Liveness probe status=failure output="Get \"https://10.217.0.33:8443/healthz\": context deadline exceeded" start-of-body= Dec 12 15:22:19 crc kubenswrapper[5123]: I1212 15:22:19.314706 5123 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-config-operator/openshift-config-operator-5777786469-9j9pt" podUID="2c1e4fb9-bde9-46df-8ac0-c0b457ca767f" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.33:8443/healthz\": context deadline exceeded" Dec 12 15:22:19 crc kubenswrapper[5123]: I1212 15:22:19.314849 5123 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 12 15:22:19 crc kubenswrapper[5123]: E1212 15:22:19.316395 5123 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-12 15:22:19.816370305 +0000 UTC m=+168.626322826 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 15:22:19 crc kubenswrapper[5123]: I1212 15:22:19.321406 5123 patch_prober.go:28] interesting pod/downloads-747b44746d-xhd9t container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.21:8080/\": dial tcp 10.217.0.21:8080: connect: connection refused" start-of-body= Dec 12 15:22:19 crc kubenswrapper[5123]: I1212 15:22:19.321505 5123 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-747b44746d-xhd9t" podUID="09107a60-87da-4e17-9cc0-6dce06396ab6" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.21:8080/\": dial tcp 10.217.0.21:8080: connect: connection refused" Dec 12 15:22:19 crc kubenswrapper[5123]: I1212 15:22:19.416634 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-ts2mt\" (UID: \"40d826be-27f2-49a2-afa5-a6a9cc8f9bf6\") " pod="openshift-image-registry/image-registry-66587d64c8-ts2mt" Dec 12 15:22:19 crc kubenswrapper[5123]: E1212 15:22:19.417095 5123 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-12 15:22:19.917071714 +0000 UTC m=+168.727024215 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-ts2mt" (UID: "40d826be-27f2-49a2-afa5-a6a9cc8f9bf6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 15:22:20 crc kubenswrapper[5123]: I1212 15:22:20.567514 5123 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 12 15:22:20 crc kubenswrapper[5123]: E1212 15:22:20.568133 5123 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-12 15:22:21.56809116 +0000 UTC m=+170.378043671 (durationBeforeRetry 1s). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 15:22:20 crc kubenswrapper[5123]: I1212 15:22:20.589037 5123 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-cnq9c container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 12 15:22:20 crc kubenswrapper[5123]: [-]has-synced failed: reason withheld Dec 12 15:22:20 crc kubenswrapper[5123]: [+]process-running ok Dec 12 15:22:20 crc kubenswrapper[5123]: healthz check failed Dec 12 15:22:20 crc kubenswrapper[5123]: I1212 15:22:20.589184 5123 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-cnq9c" podUID="dd669a9c-af5d-4084-bda4-81a455d4c281" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 12 15:22:20 crc kubenswrapper[5123]: I1212 15:22:20.615911 5123 patch_prober.go:28] interesting pod/console-64d44f6ddf-96rdx container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.8:8443/health\": dial tcp 10.217.0.8:8443: connect: connection refused" start-of-body= Dec 12 15:22:20 crc kubenswrapper[5123]: I1212 15:22:20.616026 5123 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-console/console-64d44f6ddf-96rdx" podUID="7ff811e4-3864-456b-8e00-b9e2d1c49ed8" containerName="console" probeResult="failure" output="Get \"https://10.217.0.8:8443/health\": dial tcp 10.217.0.8:8443: connect: connection refused" Dec 12 15:22:20 crc 
kubenswrapper[5123]: I1212 15:22:20.669917 5123 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 12 15:22:20 crc kubenswrapper[5123]: E1212 15:22:20.670567 5123 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-12 15:22:21.170534894 +0000 UTC m=+169.980487405 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 15:22:20 crc kubenswrapper[5123]: I1212 15:22:20.680706 5123 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6eb483de-06e5-4975-b29a-7fd9bc7674a9" path="/var/lib/kubelet/pods/6eb483de-06e5-4975-b29a-7fd9bc7674a9/volumes" Dec 12 15:22:20 crc kubenswrapper[5123]: I1212 15:22:20.681674 5123 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-dc699" Dec 12 15:22:20 crc kubenswrapper[5123]: I1212 15:22:20.728424 5123 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-cnq9c container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason 
withheld Dec 12 15:22:20 crc kubenswrapper[5123]: [-]has-synced failed: reason withheld Dec 12 15:22:20 crc kubenswrapper[5123]: [+]process-running ok Dec 12 15:22:20 crc kubenswrapper[5123]: healthz check failed Dec 12 15:22:20 crc kubenswrapper[5123]: I1212 15:22:20.728543 5123 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-cnq9c" podUID="dd669a9c-af5d-4084-bda4-81a455d4c281" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 12 15:22:20 crc kubenswrapper[5123]: I1212 15:22:20.778623 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-ts2mt\" (UID: \"40d826be-27f2-49a2-afa5-a6a9cc8f9bf6\") " pod="openshift-image-registry/image-registry-66587d64c8-ts2mt" Dec 12 15:22:20 crc kubenswrapper[5123]: E1212 15:22:20.781791 5123 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-12 15:22:21.281774314 +0000 UTC m=+170.091726825 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-ts2mt" (UID: "40d826be-27f2-49a2-afa5-a6a9cc8f9bf6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 15:22:20 crc kubenswrapper[5123]: I1212 15:22:20.800116 5123 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-config-operator/openshift-config-operator-5777786469-9j9pt" Dec 12 15:22:20 crc kubenswrapper[5123]: I1212 15:22:20.855369 5123 patch_prober.go:28] interesting pod/apiserver-9ddfb9f55-bmckw container/openshift-apiserver namespace/openshift-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Dec 12 15:22:20 crc kubenswrapper[5123]: [+]log ok Dec 12 15:22:20 crc kubenswrapper[5123]: [+]etcd ok Dec 12 15:22:20 crc kubenswrapper[5123]: [+]poststarthook/start-apiserver-admission-initializer ok Dec 12 15:22:20 crc kubenswrapper[5123]: [+]poststarthook/generic-apiserver-start-informers ok Dec 12 15:22:20 crc kubenswrapper[5123]: [+]poststarthook/max-in-flight-filter ok Dec 12 15:22:20 crc kubenswrapper[5123]: [+]poststarthook/storage-object-count-tracker-hook ok Dec 12 15:22:20 crc kubenswrapper[5123]: [+]poststarthook/image.openshift.io-apiserver-caches ok Dec 12 15:22:20 crc kubenswrapper[5123]: [-]poststarthook/authorization.openshift.io-bootstrapclusterroles failed: reason withheld Dec 12 15:22:20 crc kubenswrapper[5123]: [-]poststarthook/authorization.openshift.io-ensurenodebootstrap-sa failed: reason withheld Dec 12 15:22:20 crc kubenswrapper[5123]: [+]poststarthook/project.openshift.io-projectcache ok Dec 12 15:22:20 crc kubenswrapper[5123]: [+]poststarthook/project.openshift.io-projectauthorizationcache ok 
Dec 12 15:22:20 crc kubenswrapper[5123]: [+]poststarthook/openshift.io-startinformers ok Dec 12 15:22:20 crc kubenswrapper[5123]: [+]poststarthook/openshift.io-restmapperupdater ok Dec 12 15:22:20 crc kubenswrapper[5123]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok Dec 12 15:22:20 crc kubenswrapper[5123]: livez check failed Dec 12 15:22:20 crc kubenswrapper[5123]: I1212 15:22:20.855483 5123 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-apiserver/apiserver-9ddfb9f55-bmckw" podUID="e077c741-1ed0-4ffa-80a7-6ce54aab5fe0" containerName="openshift-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 12 15:22:20 crc kubenswrapper[5123]: I1212 15:22:20.882160 5123 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 12 15:22:20 crc kubenswrapper[5123]: E1212 15:22:20.882705 5123 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-12 15:22:21.382660279 +0000 UTC m=+170.192612790 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 15:22:20 crc kubenswrapper[5123]: I1212 15:22:20.883493 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-ts2mt\" (UID: \"40d826be-27f2-49a2-afa5-a6a9cc8f9bf6\") " pod="openshift-image-registry/image-registry-66587d64c8-ts2mt" Dec 12 15:22:20 crc kubenswrapper[5123]: E1212 15:22:20.884039 5123 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-12 15:22:21.384027113 +0000 UTC m=+170.193979624 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-ts2mt" (UID: "40d826be-27f2-49a2-afa5-a6a9cc8f9bf6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 15:22:21 crc kubenswrapper[5123]: I1212 15:22:20.985153 5123 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 12 15:22:21 crc kubenswrapper[5123]: E1212 15:22:20.985566 5123 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-12 15:22:21.485511077 +0000 UTC m=+170.295463598 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 15:22:21 crc kubenswrapper[5123]: I1212 15:22:20.986562 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-ts2mt\" (UID: \"40d826be-27f2-49a2-afa5-a6a9cc8f9bf6\") " pod="openshift-image-registry/image-registry-66587d64c8-ts2mt" Dec 12 15:22:21 crc kubenswrapper[5123]: E1212 15:22:20.987186 5123 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-12 15:22:21.487170759 +0000 UTC m=+170.297123270 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-ts2mt" (UID: "40d826be-27f2-49a2-afa5-a6a9cc8f9bf6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 15:22:21 crc kubenswrapper[5123]: I1212 15:22:21.519939 5123 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 12 15:22:21 crc kubenswrapper[5123]: E1212 15:22:21.520736 5123 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-12 15:22:22.52070558 +0000 UTC m=+171.330658091 (durationBeforeRetry 1s). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 15:22:21 crc kubenswrapper[5123]: I1212 15:22:21.523969 5123 patch_prober.go:28] interesting pod/apiserver-9ddfb9f55-bmckw container/openshift-apiserver namespace/openshift-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Dec 12 15:22:21 crc kubenswrapper[5123]: [+]log ok Dec 12 15:22:21 crc kubenswrapper[5123]: [+]etcd ok Dec 12 15:22:21 crc kubenswrapper[5123]: [+]poststarthook/start-apiserver-admission-initializer ok Dec 12 15:22:21 crc kubenswrapper[5123]: [+]poststarthook/generic-apiserver-start-informers ok Dec 12 15:22:21 crc kubenswrapper[5123]: [+]poststarthook/max-in-flight-filter ok Dec 12 15:22:21 crc kubenswrapper[5123]: [+]poststarthook/storage-object-count-tracker-hook ok Dec 12 15:22:21 crc kubenswrapper[5123]: [+]poststarthook/image.openshift.io-apiserver-caches ok Dec 12 15:22:21 crc kubenswrapper[5123]: [-]poststarthook/authorization.openshift.io-bootstrapclusterroles failed: reason withheld Dec 12 15:22:21 crc kubenswrapper[5123]: [+]poststarthook/authorization.openshift.io-ensurenodebootstrap-sa ok Dec 12 15:22:21 crc kubenswrapper[5123]: [+]poststarthook/project.openshift.io-projectcache ok Dec 12 15:22:21 crc kubenswrapper[5123]: [+]poststarthook/project.openshift.io-projectauthorizationcache ok Dec 12 15:22:21 crc kubenswrapper[5123]: [+]poststarthook/openshift.io-startinformers ok Dec 12 15:22:21 crc kubenswrapper[5123]: [+]poststarthook/openshift.io-restmapperupdater ok Dec 12 15:22:21 crc kubenswrapper[5123]: 
[+]poststarthook/quota.openshift.io-clusterquotamapping ok Dec 12 15:22:21 crc kubenswrapper[5123]: livez check failed Dec 12 15:22:21 crc kubenswrapper[5123]: I1212 15:22:21.524098 5123 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-apiserver/apiserver-9ddfb9f55-bmckw" podUID="e077c741-1ed0-4ffa-80a7-6ce54aab5fe0" containerName="openshift-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 12 15:22:21 crc kubenswrapper[5123]: I1212 15:22:21.622017 5123 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 12 15:22:21 crc kubenswrapper[5123]: E1212 15:22:21.622653 5123 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-12 15:22:22.122616176 +0000 UTC m=+170.932568687 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 15:22:21 crc kubenswrapper[5123]: I1212 15:22:21.721101 5123 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-cnq9c container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 12 15:22:21 crc kubenswrapper[5123]: [-]has-synced failed: reason withheld Dec 12 15:22:21 crc kubenswrapper[5123]: [+]process-running ok Dec 12 15:22:21 crc kubenswrapper[5123]: healthz check failed Dec 12 15:22:21 crc kubenswrapper[5123]: I1212 15:22:21.721230 5123 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-cnq9c" podUID="dd669a9c-af5d-4084-bda4-81a455d4c281" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 12 15:22:21 crc kubenswrapper[5123]: I1212 15:22:21.724191 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-ts2mt\" (UID: \"40d826be-27f2-49a2-afa5-a6a9cc8f9bf6\") " pod="openshift-image-registry/image-registry-66587d64c8-ts2mt" Dec 12 15:22:21 crc kubenswrapper[5123]: E1212 15:22:21.724635 5123 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. 
No retries permitted until 2025-12-12 15:22:22.224620157 +0000 UTC m=+171.034572668 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-ts2mt" (UID: "40d826be-27f2-49a2-afa5-a6a9cc8f9bf6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 15:22:21 crc kubenswrapper[5123]: I1212 15:22:21.826919 5123 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 12 15:22:21 crc kubenswrapper[5123]: E1212 15:22:21.827182 5123 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-12 15:22:22.327135093 +0000 UTC m=+171.137087604 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 15:22:21 crc kubenswrapper[5123]: I1212 15:22:21.827840 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-ts2mt\" (UID: \"40d826be-27f2-49a2-afa5-a6a9cc8f9bf6\") " pod="openshift-image-registry/image-registry-66587d64c8-ts2mt" Dec 12 15:22:21 crc kubenswrapper[5123]: E1212 15:22:21.828247 5123 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-12 15:22:22.328238728 +0000 UTC m=+171.138191239 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-ts2mt" (UID: "40d826be-27f2-49a2-afa5-a6a9cc8f9bf6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 15:22:21 crc kubenswrapper[5123]: I1212 15:22:21.863132 5123 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-65b6cccf98-t4m4d"] Dec 12 15:22:21 crc kubenswrapper[5123]: I1212 15:22:21.863595 5123 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-65b6cccf98-t4m4d" podUID="bf62556f-373c-41a0-96d4-8f431d629029" containerName="controller-manager" containerID="cri-o://375a45d9be110aa8622d28c0a201e3267a214e3ecf249ce0a329e603a08e0a31" gracePeriod=30 Dec 12 15:22:21 crc kubenswrapper[5123]: I1212 15:22:21.912095 5123 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-776cdc94d6-dc699"] Dec 12 15:22:21 crc kubenswrapper[5123]: I1212 15:22:21.912474 5123 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-dc699" podUID="5ccaedd0-63de-4f5b-9106-b556e01fa2b8" containerName="route-controller-manager" containerID="cri-o://218e8250c7d67e4455cae7e1da72301027c81e75cb106ecd51e292579e615970" gracePeriod=30 Dec 12 15:22:21 crc kubenswrapper[5123]: I1212 15:22:21.929251 5123 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" 
(UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 12 15:22:21 crc kubenswrapper[5123]: E1212 15:22:21.929723 5123 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-12 15:22:22.429663309 +0000 UTC m=+171.239615820 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 15:22:21 crc kubenswrapper[5123]: I1212 15:22:21.930352 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-ts2mt\" (UID: \"40d826be-27f2-49a2-afa5-a6a9cc8f9bf6\") " pod="openshift-image-registry/image-registry-66587d64c8-ts2mt" Dec 12 15:22:21 crc kubenswrapper[5123]: E1212 15:22:21.931446 5123 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-12 15:22:22.431409775 +0000 UTC m=+171.241362296 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-ts2mt" (UID: "40d826be-27f2-49a2-afa5-a6a9cc8f9bf6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 15:22:22 crc kubenswrapper[5123]: I1212 15:22:22.036073 5123 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 12 15:22:22 crc kubenswrapper[5123]: E1212 15:22:22.036281 5123 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-12 15:22:22.536241603 +0000 UTC m=+171.346194114 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 15:22:22 crc kubenswrapper[5123]: I1212 15:22:22.037290 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-ts2mt\" (UID: \"40d826be-27f2-49a2-afa5-a6a9cc8f9bf6\") " pod="openshift-image-registry/image-registry-66587d64c8-ts2mt" Dec 12 15:22:22 crc kubenswrapper[5123]: E1212 15:22:22.037896 5123 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-12 15:22:22.537873145 +0000 UTC m=+171.347825656 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-ts2mt" (UID: "40d826be-27f2-49a2-afa5-a6a9cc8f9bf6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 15:22:22 crc kubenswrapper[5123]: I1212 15:22:22.139440 5123 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 12 15:22:22 crc kubenswrapper[5123]: E1212 15:22:22.140011 5123 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-12 15:22:22.639981068 +0000 UTC m=+171.449933579 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 15:22:22 crc kubenswrapper[5123]: I1212 15:22:22.242119 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-ts2mt\" (UID: \"40d826be-27f2-49a2-afa5-a6a9cc8f9bf6\") " pod="openshift-image-registry/image-registry-66587d64c8-ts2mt" Dec 12 15:22:22 crc kubenswrapper[5123]: E1212 15:22:22.244316 5123 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-12 15:22:22.744283099 +0000 UTC m=+171.554235620 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-ts2mt" (UID: "40d826be-27f2-49a2-afa5-a6a9cc8f9bf6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 15:22:22 crc kubenswrapper[5123]: I1212 15:22:22.362557 5123 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 12 15:22:22 crc kubenswrapper[5123]: E1212 15:22:22.362910 5123 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-12 15:22:22.862880317 +0000 UTC m=+171.672832838 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 15:22:22 crc kubenswrapper[5123]: I1212 15:22:22.363150 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-ts2mt\" (UID: \"40d826be-27f2-49a2-afa5-a6a9cc8f9bf6\") " pod="openshift-image-registry/image-registry-66587d64c8-ts2mt" Dec 12 15:22:22 crc kubenswrapper[5123]: E1212 15:22:22.363769 5123 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-12 15:22:22.863755044 +0000 UTC m=+171.673707555 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-ts2mt" (UID: "40d826be-27f2-49a2-afa5-a6a9cc8f9bf6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 15:22:22 crc kubenswrapper[5123]: I1212 15:22:22.465761 5123 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 12 15:22:22 crc kubenswrapper[5123]: E1212 15:22:22.466819 5123 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-12 15:22:22.966756571 +0000 UTC m=+171.776709082 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 15:22:22 crc kubenswrapper[5123]: I1212 15:22:22.471383 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-ts2mt\" (UID: \"40d826be-27f2-49a2-afa5-a6a9cc8f9bf6\") " pod="openshift-image-registry/image-registry-66587d64c8-ts2mt" Dec 12 15:22:22 crc kubenswrapper[5123]: E1212 15:22:22.472061 5123 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-12 15:22:22.972036067 +0000 UTC m=+171.781988578 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-ts2mt" (UID: "40d826be-27f2-49a2-afa5-a6a9cc8f9bf6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 15:22:22 crc kubenswrapper[5123]: I1212 15:22:22.613630 5123 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 12 15:22:22 crc kubenswrapper[5123]: E1212 15:22:22.614226 5123 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-12 15:22:23.114183424 +0000 UTC m=+171.924135935 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 15:22:22 crc kubenswrapper[5123]: I1212 15:22:22.708956 5123 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-cnq9c container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 12 15:22:22 crc kubenswrapper[5123]: [-]has-synced failed: reason withheld Dec 12 15:22:22 crc kubenswrapper[5123]: [+]process-running ok Dec 12 15:22:22 crc kubenswrapper[5123]: healthz check failed Dec 12 15:22:22 crc kubenswrapper[5123]: I1212 15:22:22.709055 5123 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-cnq9c" podUID="dd669a9c-af5d-4084-bda4-81a455d4c281" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 12 15:22:22 crc kubenswrapper[5123]: I1212 15:22:22.715816 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-ts2mt\" (UID: \"40d826be-27f2-49a2-afa5-a6a9cc8f9bf6\") " pod="openshift-image-registry/image-registry-66587d64c8-ts2mt" Dec 12 15:22:22 crc kubenswrapper[5123]: E1212 15:22:22.716381 5123 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. 
No retries permitted until 2025-12-12 15:22:23.216360685 +0000 UTC m=+172.026313196 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-ts2mt" (UID: "40d826be-27f2-49a2-afa5-a6a9cc8f9bf6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 15:22:22 crc kubenswrapper[5123]: I1212 15:22:22.779077 5123 generic.go:358] "Generic (PLEG): container finished" podID="bf62556f-373c-41a0-96d4-8f431d629029" containerID="375a45d9be110aa8622d28c0a201e3267a214e3ecf249ce0a329e603a08e0a31" exitCode=0 Dec 12 15:22:22 crc kubenswrapper[5123]: I1212 15:22:22.779167 5123 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-65b6cccf98-t4m4d" event={"ID":"bf62556f-373c-41a0-96d4-8f431d629029","Type":"ContainerDied","Data":"375a45d9be110aa8622d28c0a201e3267a214e3ecf249ce0a329e603a08e0a31"} Dec 12 15:22:22 crc kubenswrapper[5123]: I1212 15:22:22.784819 5123 generic.go:358] "Generic (PLEG): container finished" podID="5ccaedd0-63de-4f5b-9106-b556e01fa2b8" containerID="218e8250c7d67e4455cae7e1da72301027c81e75cb106ecd51e292579e615970" exitCode=0 Dec 12 15:22:22 crc kubenswrapper[5123]: I1212 15:22:22.784958 5123 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-dc699" event={"ID":"5ccaedd0-63de-4f5b-9106-b556e01fa2b8","Type":"ContainerDied","Data":"218e8250c7d67e4455cae7e1da72301027c81e75cb106ecd51e292579e615970"} Dec 12 15:22:22 crc kubenswrapper[5123]: I1212 15:22:22.791099 5123 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-g9nc4" 
event={"ID":"68ef1469-eefc-4e7d-b8a5-bf0550b84694","Type":"ContainerStarted","Data":"c5bbed2f17b530b7ecc2b9e3df57c851e1ad93ffee1411e827e90efa8f3e15cf"} Dec 12 15:22:22 crc kubenswrapper[5123]: I1212 15:22:22.888508 5123 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 12 15:22:22 crc kubenswrapper[5123]: E1212 15:22:22.889387 5123 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-12 15:22:23.389090244 +0000 UTC m=+172.199042755 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 15:22:22 crc kubenswrapper[5123]: I1212 15:22:22.995019 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-ts2mt\" (UID: \"40d826be-27f2-49a2-afa5-a6a9cc8f9bf6\") " pod="openshift-image-registry/image-registry-66587d64c8-ts2mt" Dec 12 15:22:22 crc kubenswrapper[5123]: E1212 15:22:22.995596 5123 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-12 15:22:23.49557834 +0000 UTC m=+172.305530851 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-ts2mt" (UID: "40d826be-27f2-49a2-afa5-a6a9cc8f9bf6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 12 15:22:23 crc kubenswrapper[5123]: I1212 15:22:23.069786 5123 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-dc699"
Dec 12 15:22:23 crc kubenswrapper[5123]: I1212 15:22:23.079397 5123 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-65b6cccf98-t4m4d"
Dec 12 15:22:23 crc kubenswrapper[5123]: E1212 15:22:23.097269 5123 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-12 15:22:23.597238195 +0000 UTC m=+172.407190706 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 12 15:22:23 crc kubenswrapper[5123]: I1212 15:22:23.097678 5123 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Dec 12 15:22:23 crc kubenswrapper[5123]: I1212 15:22:23.102660 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-ts2mt\" (UID: \"40d826be-27f2-49a2-afa5-a6a9cc8f9bf6\") " pod="openshift-image-registry/image-registry-66587d64c8-ts2mt"
Dec 12 15:22:23 crc kubenswrapper[5123]: E1212 15:22:23.102871 5123 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-12 15:22:23.60281587 +0000 UTC m=+172.412768381 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-ts2mt" (UID: "40d826be-27f2-49a2-afa5-a6a9cc8f9bf6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 12 15:22:23 crc kubenswrapper[5123]: I1212 15:22:23.175320 5123 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-dd69b4f99-vs9nj"]
Dec 12 15:22:23 crc kubenswrapper[5123]: I1212 15:22:23.176605 5123 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="6eb483de-06e5-4975-b29a-7fd9bc7674a9" containerName="kube-multus-additional-cni-plugins"
Dec 12 15:22:23 crc kubenswrapper[5123]: I1212 15:22:23.176651 5123 state_mem.go:107] "Deleted CPUSet assignment" podUID="6eb483de-06e5-4975-b29a-7fd9bc7674a9" containerName="kube-multus-additional-cni-plugins"
Dec 12 15:22:23 crc kubenswrapper[5123]: I1212 15:22:23.176676 5123 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="bf62556f-373c-41a0-96d4-8f431d629029" containerName="controller-manager"
Dec 12 15:22:23 crc kubenswrapper[5123]: I1212 15:22:23.176686 5123 state_mem.go:107] "Deleted CPUSet assignment" podUID="bf62556f-373c-41a0-96d4-8f431d629029" containerName="controller-manager"
Dec 12 15:22:23 crc kubenswrapper[5123]: I1212 15:22:23.176723 5123 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="5ccaedd0-63de-4f5b-9106-b556e01fa2b8" containerName="route-controller-manager"
Dec 12 15:22:23 crc kubenswrapper[5123]: I1212 15:22:23.176733 5123 state_mem.go:107] "Deleted CPUSet assignment" podUID="5ccaedd0-63de-4f5b-9106-b556e01fa2b8" containerName="route-controller-manager"
Dec 12 15:22:23 crc kubenswrapper[5123]: I1212 15:22:23.176950 5123 memory_manager.go:356] "RemoveStaleState removing state" podUID="bf62556f-373c-41a0-96d4-8f431d629029" containerName="controller-manager"
Dec 12 15:22:23 crc kubenswrapper[5123]: I1212 15:22:23.176977 5123 memory_manager.go:356] "RemoveStaleState removing state" podUID="5ccaedd0-63de-4f5b-9106-b556e01fa2b8" containerName="route-controller-manager"
Dec 12 15:22:23 crc kubenswrapper[5123]: I1212 15:22:23.177016 5123 memory_manager.go:356] "RemoveStaleState removing state" podUID="6eb483de-06e5-4975-b29a-7fd9bc7674a9" containerName="kube-multus-additional-cni-plugins"
Dec 12 15:22:23 crc kubenswrapper[5123]: I1212 15:22:23.204911 5123 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bf62556f-373c-41a0-96d4-8f431d629029-config\") pod \"bf62556f-373c-41a0-96d4-8f431d629029\" (UID: \"bf62556f-373c-41a0-96d4-8f431d629029\") "
Dec 12 15:22:23 crc kubenswrapper[5123]: I1212 15:22:23.205336 5123 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Dec 12 15:22:23 crc kubenswrapper[5123]: I1212 15:22:23.205383 5123 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5ccaedd0-63de-4f5b-9106-b556e01fa2b8-config\") pod \"5ccaedd0-63de-4f5b-9106-b556e01fa2b8\" (UID: \"5ccaedd0-63de-4f5b-9106-b556e01fa2b8\") "
Dec 12 15:22:23 crc kubenswrapper[5123]: I1212 15:22:23.205425 5123 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/bf62556f-373c-41a0-96d4-8f431d629029-tmp\") pod \"bf62556f-373c-41a0-96d4-8f431d629029\" (UID: \"bf62556f-373c-41a0-96d4-8f431d629029\") "
Dec 12 15:22:23 crc kubenswrapper[5123]: I1212 15:22:23.205552 5123 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-75855\" (UniqueName: \"kubernetes.io/projected/5ccaedd0-63de-4f5b-9106-b556e01fa2b8-kube-api-access-75855\") pod \"5ccaedd0-63de-4f5b-9106-b556e01fa2b8\" (UID: \"5ccaedd0-63de-4f5b-9106-b556e01fa2b8\") "
Dec 12 15:22:23 crc kubenswrapper[5123]: I1212 15:22:23.205598 5123 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bf62556f-373c-41a0-96d4-8f431d629029-serving-cert\") pod \"bf62556f-373c-41a0-96d4-8f431d629029\" (UID: \"bf62556f-373c-41a0-96d4-8f431d629029\") "
Dec 12 15:22:23 crc kubenswrapper[5123]: E1212 15:22:23.205652 5123 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-12 15:22:23.705616311 +0000 UTC m=+172.515568822 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 12 15:22:23 crc kubenswrapper[5123]: I1212 15:22:23.205685 5123 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/5ccaedd0-63de-4f5b-9106-b556e01fa2b8-tmp\") pod \"5ccaedd0-63de-4f5b-9106-b556e01fa2b8\" (UID: \"5ccaedd0-63de-4f5b-9106-b556e01fa2b8\") "
Dec 12 15:22:23 crc kubenswrapper[5123]: I1212 15:22:23.205731 5123 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5ccaedd0-63de-4f5b-9106-b556e01fa2b8-serving-cert\") pod \"5ccaedd0-63de-4f5b-9106-b556e01fa2b8\" (UID: \"5ccaedd0-63de-4f5b-9106-b556e01fa2b8\") "
Dec 12 15:22:23 crc kubenswrapper[5123]: I1212 15:22:23.205768 5123 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8nx5j\" (UniqueName: \"kubernetes.io/projected/bf62556f-373c-41a0-96d4-8f431d629029-kube-api-access-8nx5j\") pod \"bf62556f-373c-41a0-96d4-8f431d629029\" (UID: \"bf62556f-373c-41a0-96d4-8f431d629029\") "
Dec 12 15:22:23 crc kubenswrapper[5123]: I1212 15:22:23.205798 5123 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5ccaedd0-63de-4f5b-9106-b556e01fa2b8-client-ca\") pod \"5ccaedd0-63de-4f5b-9106-b556e01fa2b8\" (UID: \"5ccaedd0-63de-4f5b-9106-b556e01fa2b8\") "
Dec 12 15:22:23 crc kubenswrapper[5123]: I1212 15:22:23.205815 5123 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/bf62556f-373c-41a0-96d4-8f431d629029-client-ca\") pod \"bf62556f-373c-41a0-96d4-8f431d629029\" (UID: \"bf62556f-373c-41a0-96d4-8f431d629029\") "
Dec 12 15:22:23 crc kubenswrapper[5123]: I1212 15:22:23.205860 5123 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/bf62556f-373c-41a0-96d4-8f431d629029-proxy-ca-bundles\") pod \"bf62556f-373c-41a0-96d4-8f431d629029\" (UID: \"bf62556f-373c-41a0-96d4-8f431d629029\") "
Dec 12 15:22:23 crc kubenswrapper[5123]: I1212 15:22:23.206006 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-ts2mt\" (UID: \"40d826be-27f2-49a2-afa5-a6a9cc8f9bf6\") " pod="openshift-image-registry/image-registry-66587d64c8-ts2mt"
Dec 12 15:22:23 crc kubenswrapper[5123]: I1212 15:22:23.206174 5123 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bf62556f-373c-41a0-96d4-8f431d629029-tmp" (OuterVolumeSpecName: "tmp") pod "bf62556f-373c-41a0-96d4-8f431d629029" (UID: "bf62556f-373c-41a0-96d4-8f431d629029"). InnerVolumeSpecName "tmp". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Dec 12 15:22:23 crc kubenswrapper[5123]: E1212 15:22:23.206475 5123 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-12 15:22:23.706464267 +0000 UTC m=+172.516416778 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-ts2mt" (UID: "40d826be-27f2-49a2-afa5-a6a9cc8f9bf6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 12 15:22:23 crc kubenswrapper[5123]: I1212 15:22:23.206737 5123 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5ccaedd0-63de-4f5b-9106-b556e01fa2b8-tmp" (OuterVolumeSpecName: "tmp") pod "5ccaedd0-63de-4f5b-9106-b556e01fa2b8" (UID: "5ccaedd0-63de-4f5b-9106-b556e01fa2b8"). InnerVolumeSpecName "tmp". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Dec 12 15:22:23 crc kubenswrapper[5123]: I1212 15:22:23.207333 5123 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bf62556f-373c-41a0-96d4-8f431d629029-client-ca" (OuterVolumeSpecName: "client-ca") pod "bf62556f-373c-41a0-96d4-8f431d629029" (UID: "bf62556f-373c-41a0-96d4-8f431d629029"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Dec 12 15:22:23 crc kubenswrapper[5123]: I1212 15:22:23.207452 5123 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bf62556f-373c-41a0-96d4-8f431d629029-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "bf62556f-373c-41a0-96d4-8f431d629029" (UID: "bf62556f-373c-41a0-96d4-8f431d629029"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Dec 12 15:22:23 crc kubenswrapper[5123]: I1212 15:22:23.207896 5123 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bf62556f-373c-41a0-96d4-8f431d629029-config" (OuterVolumeSpecName: "config") pod "bf62556f-373c-41a0-96d4-8f431d629029" (UID: "bf62556f-373c-41a0-96d4-8f431d629029"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Dec 12 15:22:23 crc kubenswrapper[5123]: I1212 15:22:23.208060 5123 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5ccaedd0-63de-4f5b-9106-b556e01fa2b8-client-ca" (OuterVolumeSpecName: "client-ca") pod "5ccaedd0-63de-4f5b-9106-b556e01fa2b8" (UID: "5ccaedd0-63de-4f5b-9106-b556e01fa2b8"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Dec 12 15:22:23 crc kubenswrapper[5123]: I1212 15:22:23.208115 5123 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5ccaedd0-63de-4f5b-9106-b556e01fa2b8-config" (OuterVolumeSpecName: "config") pod "5ccaedd0-63de-4f5b-9106-b556e01fa2b8" (UID: "5ccaedd0-63de-4f5b-9106-b556e01fa2b8"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Dec 12 15:22:23 crc kubenswrapper[5123]: I1212 15:22:23.223470 5123 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf62556f-373c-41a0-96d4-8f431d629029-kube-api-access-8nx5j" (OuterVolumeSpecName: "kube-api-access-8nx5j") pod "bf62556f-373c-41a0-96d4-8f431d629029" (UID: "bf62556f-373c-41a0-96d4-8f431d629029"). InnerVolumeSpecName "kube-api-access-8nx5j". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Dec 12 15:22:23 crc kubenswrapper[5123]: I1212 15:22:23.223553 5123 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5ccaedd0-63de-4f5b-9106-b556e01fa2b8-kube-api-access-75855" (OuterVolumeSpecName: "kube-api-access-75855") pod "5ccaedd0-63de-4f5b-9106-b556e01fa2b8" (UID: "5ccaedd0-63de-4f5b-9106-b556e01fa2b8"). InnerVolumeSpecName "kube-api-access-75855". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Dec 12 15:22:23 crc kubenswrapper[5123]: I1212 15:22:23.419519 5123 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf62556f-373c-41a0-96d4-8f431d629029-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "bf62556f-373c-41a0-96d4-8f431d629029" (UID: "bf62556f-373c-41a0-96d4-8f431d629029"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Dec 12 15:22:23 crc kubenswrapper[5123]: I1212 15:22:23.421524 5123 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5ccaedd0-63de-4f5b-9106-b556e01fa2b8-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "5ccaedd0-63de-4f5b-9106-b556e01fa2b8" (UID: "5ccaedd0-63de-4f5b-9106-b556e01fa2b8"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Dec 12 15:22:23 crc kubenswrapper[5123]: I1212 15:22:23.438986 5123 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Dec 12 15:22:23 crc kubenswrapper[5123]: I1212 15:22:23.439710 5123 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5ccaedd0-63de-4f5b-9106-b556e01fa2b8-config\") on node \"crc\" DevicePath \"\""
Dec 12 15:22:23 crc kubenswrapper[5123]: I1212 15:22:23.439766 5123 reconciler_common.go:299] "Volume detached for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/bf62556f-373c-41a0-96d4-8f431d629029-tmp\") on node \"crc\" DevicePath \"\""
Dec 12 15:22:23 crc kubenswrapper[5123]: I1212 15:22:23.439784 5123 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-75855\" (UniqueName: \"kubernetes.io/projected/5ccaedd0-63de-4f5b-9106-b556e01fa2b8-kube-api-access-75855\") on node \"crc\" DevicePath \"\""
Dec 12 15:22:23 crc kubenswrapper[5123]: I1212 15:22:23.439830 5123 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bf62556f-373c-41a0-96d4-8f431d629029-serving-cert\") on node \"crc\" DevicePath \"\""
Dec 12 15:22:23 crc kubenswrapper[5123]: I1212 15:22:23.439845 5123 reconciler_common.go:299] "Volume detached for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/5ccaedd0-63de-4f5b-9106-b556e01fa2b8-tmp\") on node \"crc\" DevicePath \"\""
Dec 12 15:22:23 crc kubenswrapper[5123]: I1212 15:22:23.439856 5123 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5ccaedd0-63de-4f5b-9106-b556e01fa2b8-serving-cert\") on node \"crc\" DevicePath \"\""
Dec 12 15:22:23 crc kubenswrapper[5123]: I1212 15:22:23.439869 5123 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-8nx5j\" (UniqueName: \"kubernetes.io/projected/bf62556f-373c-41a0-96d4-8f431d629029-kube-api-access-8nx5j\") on node \"crc\" DevicePath \"\""
Dec 12 15:22:23 crc kubenswrapper[5123]: I1212 15:22:23.439880 5123 reconciler_common.go:299] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5ccaedd0-63de-4f5b-9106-b556e01fa2b8-client-ca\") on node \"crc\" DevicePath \"\""
Dec 12 15:22:23 crc kubenswrapper[5123]: I1212 15:22:23.439891 5123 reconciler_common.go:299] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/bf62556f-373c-41a0-96d4-8f431d629029-client-ca\") on node \"crc\" DevicePath \"\""
Dec 12 15:22:23 crc kubenswrapper[5123]: I1212 15:22:23.439904 5123 reconciler_common.go:299] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/bf62556f-373c-41a0-96d4-8f431d629029-proxy-ca-bundles\") on node \"crc\" DevicePath \"\""
Dec 12 15:22:23 crc kubenswrapper[5123]: I1212 15:22:23.439916 5123 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bf62556f-373c-41a0-96d4-8f431d629029-config\") on node \"crc\" DevicePath \"\""
Dec 12 15:22:23 crc kubenswrapper[5123]: E1212 15:22:23.440059 5123 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-12 15:22:23.940030558 +0000 UTC m=+172.749983079 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 12 15:22:23 crc kubenswrapper[5123]: I1212 15:22:23.504377 5123 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-dd69b4f99-vs9nj"]
Dec 12 15:22:23 crc kubenswrapper[5123]: I1212 15:22:23.504446 5123 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-649d957586-ms9dj"]
Dec 12 15:22:23 crc kubenswrapper[5123]: I1212 15:22:23.505424 5123 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-dd69b4f99-vs9nj"
Dec 12 15:22:23 crc kubenswrapper[5123]: I1212 15:22:23.540715 5123 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gt9d9\" (UniqueName: \"kubernetes.io/projected/12c7a0a2-e5fd-411f-806a-d230792a9422-kube-api-access-gt9d9\") pod \"route-controller-manager-dd69b4f99-vs9nj\" (UID: \"12c7a0a2-e5fd-411f-806a-d230792a9422\") " pod="openshift-route-controller-manager/route-controller-manager-dd69b4f99-vs9nj"
Dec 12 15:22:23 crc kubenswrapper[5123]: I1212 15:22:23.540769 5123 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/12c7a0a2-e5fd-411f-806a-d230792a9422-tmp\") pod \"route-controller-manager-dd69b4f99-vs9nj\" (UID: \"12c7a0a2-e5fd-411f-806a-d230792a9422\") " pod="openshift-route-controller-manager/route-controller-manager-dd69b4f99-vs9nj"
Dec 12 15:22:23 crc kubenswrapper[5123]: I1212 15:22:23.540800 5123 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/12c7a0a2-e5fd-411f-806a-d230792a9422-serving-cert\") pod \"route-controller-manager-dd69b4f99-vs9nj\" (UID: \"12c7a0a2-e5fd-411f-806a-d230792a9422\") " pod="openshift-route-controller-manager/route-controller-manager-dd69b4f99-vs9nj"
Dec 12 15:22:23 crc kubenswrapper[5123]: I1212 15:22:23.540870 5123 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/12c7a0a2-e5fd-411f-806a-d230792a9422-config\") pod \"route-controller-manager-dd69b4f99-vs9nj\" (UID: \"12c7a0a2-e5fd-411f-806a-d230792a9422\") " pod="openshift-route-controller-manager/route-controller-manager-dd69b4f99-vs9nj"
Dec 12 15:22:23 crc kubenswrapper[5123]: I1212 15:22:23.540897 5123 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/12c7a0a2-e5fd-411f-806a-d230792a9422-client-ca\") pod \"route-controller-manager-dd69b4f99-vs9nj\" (UID: \"12c7a0a2-e5fd-411f-806a-d230792a9422\") " pod="openshift-route-controller-manager/route-controller-manager-dd69b4f99-vs9nj"
Dec 12 15:22:23 crc kubenswrapper[5123]: I1212 15:22:23.540924 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-ts2mt\" (UID: \"40d826be-27f2-49a2-afa5-a6a9cc8f9bf6\") " pod="openshift-image-registry/image-registry-66587d64c8-ts2mt"
Dec 12 15:22:23 crc kubenswrapper[5123]: E1212 15:22:23.541334 5123 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-12 15:22:24.041318181 +0000 UTC m=+172.851270692 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-ts2mt" (UID: "40d826be-27f2-49a2-afa5-a6a9cc8f9bf6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 12 15:22:23 crc kubenswrapper[5123]: I1212 15:22:23.666355 5123 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Dec 12 15:22:23 crc kubenswrapper[5123]: I1212 15:22:23.666582 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/12c7a0a2-e5fd-411f-806a-d230792a9422-config\") pod \"route-controller-manager-dd69b4f99-vs9nj\" (UID: \"12c7a0a2-e5fd-411f-806a-d230792a9422\") " pod="openshift-route-controller-manager/route-controller-manager-dd69b4f99-vs9nj"
Dec 12 15:22:23 crc kubenswrapper[5123]: I1212 15:22:23.666625 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/12c7a0a2-e5fd-411f-806a-d230792a9422-client-ca\") pod \"route-controller-manager-dd69b4f99-vs9nj\" (UID: \"12c7a0a2-e5fd-411f-806a-d230792a9422\") " pod="openshift-route-controller-manager/route-controller-manager-dd69b4f99-vs9nj"
Dec 12 15:22:23 crc kubenswrapper[5123]: E1212 15:22:23.666697 5123 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-12 15:22:24.16666913 +0000 UTC m=+172.976621641 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 12 15:22:23 crc kubenswrapper[5123]: I1212 15:22:23.666760 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-gt9d9\" (UniqueName: \"kubernetes.io/projected/12c7a0a2-e5fd-411f-806a-d230792a9422-kube-api-access-gt9d9\") pod \"route-controller-manager-dd69b4f99-vs9nj\" (UID: \"12c7a0a2-e5fd-411f-806a-d230792a9422\") " pod="openshift-route-controller-manager/route-controller-manager-dd69b4f99-vs9nj"
Dec 12 15:22:23 crc kubenswrapper[5123]: I1212 15:22:23.666793 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/12c7a0a2-e5fd-411f-806a-d230792a9422-tmp\") pod \"route-controller-manager-dd69b4f99-vs9nj\" (UID: \"12c7a0a2-e5fd-411f-806a-d230792a9422\") " pod="openshift-route-controller-manager/route-controller-manager-dd69b4f99-vs9nj"
Dec 12 15:22:23 crc kubenswrapper[5123]: I1212 15:22:23.666896 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/12c7a0a2-e5fd-411f-806a-d230792a9422-serving-cert\") pod \"route-controller-manager-dd69b4f99-vs9nj\" (UID: \"12c7a0a2-e5fd-411f-806a-d230792a9422\") " pod="openshift-route-controller-manager/route-controller-manager-dd69b4f99-vs9nj"
Dec 12 15:22:23 crc kubenswrapper[5123]: I1212 15:22:23.667965 5123 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/12c7a0a2-e5fd-411f-806a-d230792a9422-client-ca\") pod \"route-controller-manager-dd69b4f99-vs9nj\" (UID: \"12c7a0a2-e5fd-411f-806a-d230792a9422\") " pod="openshift-route-controller-manager/route-controller-manager-dd69b4f99-vs9nj"
Dec 12 15:22:23 crc kubenswrapper[5123]: I1212 15:22:23.669333 5123 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/12c7a0a2-e5fd-411f-806a-d230792a9422-tmp\") pod \"route-controller-manager-dd69b4f99-vs9nj\" (UID: \"12c7a0a2-e5fd-411f-806a-d230792a9422\") " pod="openshift-route-controller-manager/route-controller-manager-dd69b4f99-vs9nj"
Dec 12 15:22:23 crc kubenswrapper[5123]: I1212 15:22:23.669519 5123 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/12c7a0a2-e5fd-411f-806a-d230792a9422-config\") pod \"route-controller-manager-dd69b4f99-vs9nj\" (UID: \"12c7a0a2-e5fd-411f-806a-d230792a9422\") " pod="openshift-route-controller-manager/route-controller-manager-dd69b4f99-vs9nj"
Dec 12 15:22:23 crc kubenswrapper[5123]: I1212 15:22:23.676581 5123 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/12c7a0a2-e5fd-411f-806a-d230792a9422-serving-cert\") pod \"route-controller-manager-dd69b4f99-vs9nj\" (UID: \"12c7a0a2-e5fd-411f-806a-d230792a9422\") " pod="openshift-route-controller-manager/route-controller-manager-dd69b4f99-vs9nj"
Dec 12 15:22:23 crc kubenswrapper[5123]: I1212 15:22:23.686036 5123 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-cnq9c container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Dec 12 15:22:23 crc kubenswrapper[5123]: [-]has-synced failed: reason withheld
Dec 12 15:22:23 crc kubenswrapper[5123]: [+]process-running ok
Dec 12 15:22:23 crc kubenswrapper[5123]: healthz check failed
Dec 12 15:22:23 crc kubenswrapper[5123]: I1212 15:22:23.686118 5123 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-cnq9c" podUID="dd669a9c-af5d-4084-bda4-81a455d4c281" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Dec 12 15:22:23 crc kubenswrapper[5123]: I1212 15:22:23.769445 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-ts2mt\" (UID: \"40d826be-27f2-49a2-afa5-a6a9cc8f9bf6\") " pod="openshift-image-registry/image-registry-66587d64c8-ts2mt"
Dec 12 15:22:23 crc kubenswrapper[5123]: E1212 15:22:23.770450 5123 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-12 15:22:24.270424601 +0000 UTC m=+173.080377112 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-ts2mt" (UID: "40d826be-27f2-49a2-afa5-a6a9cc8f9bf6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 12 15:22:23 crc kubenswrapper[5123]: I1212 15:22:23.807097 5123 generic.go:358] "Generic (PLEG): container finished" podID="09029143-1cd7-445a-bcff-2e8cd5d5a8b9" containerID="7ded8173ab1b2cd634e956845e2874acd9535d92a81763ba4e069e186d5aa9f0" exitCode=0
Dec 12 15:22:23 crc kubenswrapper[5123]: I1212 15:22:23.884749 5123 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Dec 12 15:22:23 crc kubenswrapper[5123]: E1212 15:22:23.885032 5123 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-12 15:22:24.384989541 +0000 UTC m=+173.194942052 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 12 15:22:23 crc kubenswrapper[5123]: I1212 15:22:23.885413 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-ts2mt\" (UID: \"40d826be-27f2-49a2-afa5-a6a9cc8f9bf6\") " pod="openshift-image-registry/image-registry-66587d64c8-ts2mt"
Dec 12 15:22:23 crc kubenswrapper[5123]: E1212 15:22:23.886012 5123 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-12 15:22:24.385989383 +0000 UTC m=+173.195941944 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-ts2mt" (UID: "40d826be-27f2-49a2-afa5-a6a9cc8f9bf6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 12 15:22:24 crc kubenswrapper[5123]: I1212 15:22:24.033449 5123 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Dec 12 15:22:24 crc kubenswrapper[5123]: E1212 15:22:24.033841 5123 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-12 15:22:24.533811438 +0000 UTC m=+173.343763949 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 12 15:22:24 crc kubenswrapper[5123]: I1212 15:22:24.033922 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-ts2mt\" (UID: \"40d826be-27f2-49a2-afa5-a6a9cc8f9bf6\") " pod="openshift-image-registry/image-registry-66587d64c8-ts2mt"
Dec 12 15:22:24 crc kubenswrapper[5123]: E1212 15:22:24.034419 5123 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-12 15:22:24.534404627 +0000 UTC m=+173.344357138 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-ts2mt" (UID: "40d826be-27f2-49a2-afa5-a6a9cc8f9bf6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 12 15:22:24 crc kubenswrapper[5123]: I1212 15:22:24.039766 5123 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-gt9d9\" (UniqueName: \"kubernetes.io/projected/12c7a0a2-e5fd-411f-806a-d230792a9422-kube-api-access-gt9d9\") pod \"route-controller-manager-dd69b4f99-vs9nj\" (UID: \"12c7a0a2-e5fd-411f-806a-d230792a9422\") " pod="openshift-route-controller-manager/route-controller-manager-dd69b4f99-vs9nj"
Dec 12 15:22:24 crc kubenswrapper[5123]: I1212 15:22:24.059610 5123 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-dd69b4f99-vs9nj"
Dec 12 15:22:24 crc kubenswrapper[5123]: I1212 15:22:24.093575 5123 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-65b6cccf98-t4m4d" event={"ID":"bf62556f-373c-41a0-96d4-8f431d629029","Type":"ContainerDied","Data":"3084e90459e4b4a542b5b0abfe31502eea6f20c3a24630ee53f25e83535a882c"}
Dec 12 15:22:24 crc kubenswrapper[5123]: I1212 15:22:24.093680 5123 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-649d957586-ms9dj"]
Dec 12 15:22:24 crc kubenswrapper[5123]: I1212 15:22:24.093816 5123 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-65b6cccf98-t4m4d"
Dec 12 15:22:24 crc kubenswrapper[5123]: I1212 15:22:24.094711 5123 util.go:48] "No ready sandbox for pod can be found.
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-dc699" Dec 12 15:22:24 crc kubenswrapper[5123]: I1212 15:22:24.095732 5123 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-649d957586-ms9dj" Dec 12 15:22:24 crc kubenswrapper[5123]: I1212 15:22:24.097291 5123 scope.go:117] "RemoveContainer" containerID="375a45d9be110aa8622d28c0a201e3267a214e3ecf249ce0a329e603a08e0a31" Dec 12 15:22:24 crc kubenswrapper[5123]: I1212 15:22:24.114633 5123 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/revision-pruner-6-crc" event={"ID":"09029143-1cd7-445a-bcff-2e8cd5d5a8b9","Type":"ContainerDied","Data":"7ded8173ab1b2cd634e956845e2874acd9535d92a81763ba4e069e186d5aa9f0"} Dec 12 15:22:24 crc kubenswrapper[5123]: I1212 15:22:24.114729 5123 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-dc699" event={"ID":"5ccaedd0-63de-4f5b-9106-b556e01fa2b8","Type":"ContainerDied","Data":"25dbc077748cec36b1c8149f6634d3b1cf728585204f3e9f38627bf3abde4f3f"} Dec 12 15:22:24 crc kubenswrapper[5123]: I1212 15:22:24.114748 5123 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-649d957586-ms9dj"] Dec 12 15:22:24 crc kubenswrapper[5123]: I1212 15:22:24.114773 5123 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-dd69b4f99-vs9nj"] Dec 12 15:22:24 crc kubenswrapper[5123]: I1212 15:22:24.119517 5123 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager\"/\"openshift-service-ca.crt\"" Dec 12 15:22:24 crc kubenswrapper[5123]: I1212 15:22:24.122253 5123 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager\"/\"kube-root-ca.crt\"" Dec 12 15:22:24 crc 
kubenswrapper[5123]: I1212 15:22:24.122317 5123 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-controller-manager\"/\"serving-cert\"" Dec 12 15:22:24 crc kubenswrapper[5123]: I1212 15:22:24.122364 5123 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager\"/\"config\"" Dec 12 15:22:24 crc kubenswrapper[5123]: I1212 15:22:24.126314 5123 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager\"/\"client-ca\"" Dec 12 15:22:24 crc kubenswrapper[5123]: I1212 15:22:24.126662 5123 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-controller-manager\"/\"openshift-controller-manager-sa-dockercfg-djmfg\"" Dec 12 15:22:24 crc kubenswrapper[5123]: I1212 15:22:24.135509 5123 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 12 15:22:24 crc kubenswrapper[5123]: E1212 15:22:24.135614 5123 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-12 15:22:24.635589936 +0000 UTC m=+173.445542447 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 15:22:24 crc kubenswrapper[5123]: I1212 15:22:24.135944 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-ts2mt\" (UID: \"40d826be-27f2-49a2-afa5-a6a9cc8f9bf6\") " pod="openshift-image-registry/image-registry-66587d64c8-ts2mt" Dec 12 15:22:24 crc kubenswrapper[5123]: E1212 15:22:24.136311 5123 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-12 15:22:24.636298939 +0000 UTC m=+173.446251450 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-ts2mt" (UID: "40d826be-27f2-49a2-afa5-a6a9cc8f9bf6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 15:22:24 crc kubenswrapper[5123]: I1212 15:22:24.153631 5123 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager\"/\"openshift-global-ca\"" Dec 12 15:22:24 crc kubenswrapper[5123]: E1212 15:22:24.158043 5123 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[client-ca config kube-api-access-4dlxw proxy-ca-bundles serving-cert tmp], unattached volumes=[], failed to process volumes=[]: context canceled" pod="openshift-controller-manager/controller-manager-649d957586-ms9dj" podUID="a7c04d65-b256-41f8-ad71-a599942be2fc" Dec 12 15:22:24 crc kubenswrapper[5123]: I1212 15:22:24.158644 5123 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-776cdc94d6-dc699"] Dec 12 15:22:24 crc kubenswrapper[5123]: I1212 15:22:24.170590 5123 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-776cdc94d6-dc699"] Dec 12 15:22:24 crc kubenswrapper[5123]: I1212 15:22:24.200542 5123 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-65b6cccf98-t4m4d"] Dec 12 15:22:24 crc kubenswrapper[5123]: I1212 15:22:24.201882 5123 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-65b6cccf98-t4m4d"] Dec 12 15:22:24 crc kubenswrapper[5123]: I1212 15:22:24.219503 5123 scope.go:117] "RemoveContainer" 
containerID="218e8250c7d67e4455cae7e1da72301027c81e75cb106ecd51e292579e615970" Dec 12 15:22:24 crc kubenswrapper[5123]: I1212 15:22:24.236875 5123 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 12 15:22:24 crc kubenswrapper[5123]: E1212 15:22:24.237141 5123 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-12 15:22:24.737096656 +0000 UTC m=+173.547049167 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 15:22:24 crc kubenswrapper[5123]: I1212 15:22:24.237338 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-ts2mt\" (UID: \"40d826be-27f2-49a2-afa5-a6a9cc8f9bf6\") " pod="openshift-image-registry/image-registry-66587d64c8-ts2mt" Dec 12 15:22:24 crc kubenswrapper[5123]: I1212 15:22:24.237521 5123 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: 
\"kubernetes.io/empty-dir/a7c04d65-b256-41f8-ad71-a599942be2fc-tmp\") pod \"controller-manager-649d957586-ms9dj\" (UID: \"a7c04d65-b256-41f8-ad71-a599942be2fc\") " pod="openshift-controller-manager/controller-manager-649d957586-ms9dj" Dec 12 15:22:24 crc kubenswrapper[5123]: I1212 15:22:24.237569 5123 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/a7c04d65-b256-41f8-ad71-a599942be2fc-proxy-ca-bundles\") pod \"controller-manager-649d957586-ms9dj\" (UID: \"a7c04d65-b256-41f8-ad71-a599942be2fc\") " pod="openshift-controller-manager/controller-manager-649d957586-ms9dj" Dec 12 15:22:24 crc kubenswrapper[5123]: I1212 15:22:24.237714 5123 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a7c04d65-b256-41f8-ad71-a599942be2fc-client-ca\") pod \"controller-manager-649d957586-ms9dj\" (UID: \"a7c04d65-b256-41f8-ad71-a599942be2fc\") " pod="openshift-controller-manager/controller-manager-649d957586-ms9dj" Dec 12 15:22:24 crc kubenswrapper[5123]: I1212 15:22:24.237749 5123 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4dlxw\" (UniqueName: \"kubernetes.io/projected/a7c04d65-b256-41f8-ad71-a599942be2fc-kube-api-access-4dlxw\") pod \"controller-manager-649d957586-ms9dj\" (UID: \"a7c04d65-b256-41f8-ad71-a599942be2fc\") " pod="openshift-controller-manager/controller-manager-649d957586-ms9dj" Dec 12 15:22:24 crc kubenswrapper[5123]: I1212 15:22:24.237873 5123 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a7c04d65-b256-41f8-ad71-a599942be2fc-config\") pod \"controller-manager-649d957586-ms9dj\" (UID: \"a7c04d65-b256-41f8-ad71-a599942be2fc\") " pod="openshift-controller-manager/controller-manager-649d957586-ms9dj" Dec 12 
15:22:24 crc kubenswrapper[5123]: E1212 15:22:24.237908 5123 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-12 15:22:24.737880571 +0000 UTC m=+173.547833092 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-ts2mt" (UID: "40d826be-27f2-49a2-afa5-a6a9cc8f9bf6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 15:22:24 crc kubenswrapper[5123]: I1212 15:22:24.237992 5123 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a7c04d65-b256-41f8-ad71-a599942be2fc-serving-cert\") pod \"controller-manager-649d957586-ms9dj\" (UID: \"a7c04d65-b256-41f8-ad71-a599942be2fc\") " pod="openshift-controller-manager/controller-manager-649d957586-ms9dj" Dec 12 15:22:24 crc kubenswrapper[5123]: I1212 15:22:24.344705 5123 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 12 15:22:24 crc kubenswrapper[5123]: I1212 15:22:24.345035 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/a7c04d65-b256-41f8-ad71-a599942be2fc-tmp\") pod \"controller-manager-649d957586-ms9dj\" (UID: \"a7c04d65-b256-41f8-ad71-a599942be2fc\") " 
pod="openshift-controller-manager/controller-manager-649d957586-ms9dj" Dec 12 15:22:24 crc kubenswrapper[5123]: I1212 15:22:24.345069 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/a7c04d65-b256-41f8-ad71-a599942be2fc-proxy-ca-bundles\") pod \"controller-manager-649d957586-ms9dj\" (UID: \"a7c04d65-b256-41f8-ad71-a599942be2fc\") " pod="openshift-controller-manager/controller-manager-649d957586-ms9dj" Dec 12 15:22:24 crc kubenswrapper[5123]: I1212 15:22:24.345125 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a7c04d65-b256-41f8-ad71-a599942be2fc-client-ca\") pod \"controller-manager-649d957586-ms9dj\" (UID: \"a7c04d65-b256-41f8-ad71-a599942be2fc\") " pod="openshift-controller-manager/controller-manager-649d957586-ms9dj" Dec 12 15:22:24 crc kubenswrapper[5123]: I1212 15:22:24.345153 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-4dlxw\" (UniqueName: \"kubernetes.io/projected/a7c04d65-b256-41f8-ad71-a599942be2fc-kube-api-access-4dlxw\") pod \"controller-manager-649d957586-ms9dj\" (UID: \"a7c04d65-b256-41f8-ad71-a599942be2fc\") " pod="openshift-controller-manager/controller-manager-649d957586-ms9dj" Dec 12 15:22:24 crc kubenswrapper[5123]: E1212 15:22:24.345298 5123 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-12 15:22:24.845252676 +0000 UTC m=+173.655205187 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 15:22:24 crc kubenswrapper[5123]: I1212 15:22:24.345405 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a7c04d65-b256-41f8-ad71-a599942be2fc-config\") pod \"controller-manager-649d957586-ms9dj\" (UID: \"a7c04d65-b256-41f8-ad71-a599942be2fc\") " pod="openshift-controller-manager/controller-manager-649d957586-ms9dj" Dec 12 15:22:24 crc kubenswrapper[5123]: I1212 15:22:24.345467 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a7c04d65-b256-41f8-ad71-a599942be2fc-serving-cert\") pod \"controller-manager-649d957586-ms9dj\" (UID: \"a7c04d65-b256-41f8-ad71-a599942be2fc\") " pod="openshift-controller-manager/controller-manager-649d957586-ms9dj" Dec 12 15:22:24 crc kubenswrapper[5123]: I1212 15:22:24.346006 5123 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/a7c04d65-b256-41f8-ad71-a599942be2fc-tmp\") pod \"controller-manager-649d957586-ms9dj\" (UID: \"a7c04d65-b256-41f8-ad71-a599942be2fc\") " pod="openshift-controller-manager/controller-manager-649d957586-ms9dj" Dec 12 15:22:24 crc kubenswrapper[5123]: I1212 15:22:24.348140 5123 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a7c04d65-b256-41f8-ad71-a599942be2fc-config\") pod \"controller-manager-649d957586-ms9dj\" (UID: \"a7c04d65-b256-41f8-ad71-a599942be2fc\") " 
pod="openshift-controller-manager/controller-manager-649d957586-ms9dj" Dec 12 15:22:24 crc kubenswrapper[5123]: I1212 15:22:24.348172 5123 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/a7c04d65-b256-41f8-ad71-a599942be2fc-proxy-ca-bundles\") pod \"controller-manager-649d957586-ms9dj\" (UID: \"a7c04d65-b256-41f8-ad71-a599942be2fc\") " pod="openshift-controller-manager/controller-manager-649d957586-ms9dj" Dec 12 15:22:24 crc kubenswrapper[5123]: I1212 15:22:24.348944 5123 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a7c04d65-b256-41f8-ad71-a599942be2fc-client-ca\") pod \"controller-manager-649d957586-ms9dj\" (UID: \"a7c04d65-b256-41f8-ad71-a599942be2fc\") " pod="openshift-controller-manager/controller-manager-649d957586-ms9dj" Dec 12 15:22:24 crc kubenswrapper[5123]: I1212 15:22:24.352512 5123 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a7c04d65-b256-41f8-ad71-a599942be2fc-serving-cert\") pod \"controller-manager-649d957586-ms9dj\" (UID: \"a7c04d65-b256-41f8-ad71-a599942be2fc\") " pod="openshift-controller-manager/controller-manager-649d957586-ms9dj" Dec 12 15:22:24 crc kubenswrapper[5123]: I1212 15:22:24.363824 5123 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-4dlxw\" (UniqueName: \"kubernetes.io/projected/a7c04d65-b256-41f8-ad71-a599942be2fc-kube-api-access-4dlxw\") pod \"controller-manager-649d957586-ms9dj\" (UID: \"a7c04d65-b256-41f8-ad71-a599942be2fc\") " pod="openshift-controller-manager/controller-manager-649d957586-ms9dj" Dec 12 15:22:24 crc kubenswrapper[5123]: I1212 15:22:24.453377 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-ts2mt\" (UID: \"40d826be-27f2-49a2-afa5-a6a9cc8f9bf6\") " pod="openshift-image-registry/image-registry-66587d64c8-ts2mt" Dec 12 15:22:24 crc kubenswrapper[5123]: E1212 15:22:24.453924 5123 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-12 15:22:24.95389728 +0000 UTC m=+173.763849831 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-ts2mt" (UID: "40d826be-27f2-49a2-afa5-a6a9cc8f9bf6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 15:22:24 crc kubenswrapper[5123]: I1212 15:22:24.465556 5123 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-dd69b4f99-vs9nj"] Dec 12 15:22:24 crc kubenswrapper[5123]: I1212 15:22:24.554363 5123 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 12 15:22:24 crc kubenswrapper[5123]: E1212 15:22:24.554604 5123 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. 
No retries permitted until 2025-12-12 15:22:25.054555814 +0000 UTC m=+173.864508325 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 15:22:24 crc kubenswrapper[5123]: I1212 15:22:24.554829 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-ts2mt\" (UID: \"40d826be-27f2-49a2-afa5-a6a9cc8f9bf6\") " pod="openshift-image-registry/image-registry-66587d64c8-ts2mt" Dec 12 15:22:24 crc kubenswrapper[5123]: E1212 15:22:24.555359 5123 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-12 15:22:25.055342149 +0000 UTC m=+173.865294660 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-ts2mt" (UID: "40d826be-27f2-49a2-afa5-a6a9cc8f9bf6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 15:22:24 crc kubenswrapper[5123]: I1212 15:22:24.656328 5123 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 12 15:22:24 crc kubenswrapper[5123]: E1212 15:22:24.656597 5123 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-12 15:22:25.156545208 +0000 UTC m=+173.966497719 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 15:22:24 crc kubenswrapper[5123]: I1212 15:22:24.656825 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-ts2mt\" (UID: \"40d826be-27f2-49a2-afa5-a6a9cc8f9bf6\") " pod="openshift-image-registry/image-registry-66587d64c8-ts2mt" Dec 12 15:22:24 crc kubenswrapper[5123]: E1212 15:22:24.657408 5123 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-12 15:22:25.157388015 +0000 UTC m=+173.967340566 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-ts2mt" (UID: "40d826be-27f2-49a2-afa5-a6a9cc8f9bf6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 15:22:24 crc kubenswrapper[5123]: I1212 15:22:24.686281 5123 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-cnq9c container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 12 15:22:24 crc kubenswrapper[5123]: [-]has-synced failed: reason withheld Dec 12 15:22:24 crc kubenswrapper[5123]: [+]process-running ok Dec 12 15:22:24 crc kubenswrapper[5123]: healthz check failed Dec 12 15:22:24 crc kubenswrapper[5123]: I1212 15:22:24.686406 5123 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-cnq9c" podUID="dd669a9c-af5d-4084-bda4-81a455d4c281" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 12 15:22:24 crc kubenswrapper[5123]: I1212 15:22:24.758685 5123 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 12 15:22:24 crc kubenswrapper[5123]: E1212 15:22:24.758971 5123 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. 
No retries permitted until 2025-12-12 15:22:25.258908166 +0000 UTC m=+174.068860677 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 15:22:24 crc kubenswrapper[5123]: I1212 15:22:24.759536 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-ts2mt\" (UID: \"40d826be-27f2-49a2-afa5-a6a9cc8f9bf6\") " pod="openshift-image-registry/image-registry-66587d64c8-ts2mt" Dec 12 15:22:24 crc kubenswrapper[5123]: E1212 15:22:24.760163 5123 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-12 15:22:25.260145404 +0000 UTC m=+174.070097915 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-ts2mt" (UID: "40d826be-27f2-49a2-afa5-a6a9cc8f9bf6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 12 15:22:24 crc kubenswrapper[5123]: I1212 15:22:24.834642 5123 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-dd69b4f99-vs9nj" event={"ID":"12c7a0a2-e5fd-411f-806a-d230792a9422","Type":"ContainerStarted","Data":"c39cb63d53e15af5d95a4cc2fd82631e186e8d2ec3a2048e747d4878738fc9c0"}
Dec 12 15:22:24 crc kubenswrapper[5123]: I1212 15:22:24.834723 5123 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-649d957586-ms9dj"
Dec 12 15:22:24 crc kubenswrapper[5123]: I1212 15:22:24.852430 5123 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-649d957586-ms9dj"
Dec 12 15:22:24 crc kubenswrapper[5123]: I1212 15:22:24.860731 5123 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a7c04d65-b256-41f8-ad71-a599942be2fc-serving-cert\") pod \"a7c04d65-b256-41f8-ad71-a599942be2fc\" (UID: \"a7c04d65-b256-41f8-ad71-a599942be2fc\") "
Dec 12 15:22:24 crc kubenswrapper[5123]: I1212 15:22:24.860907 5123 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Dec 12 15:22:24 crc kubenswrapper[5123]: I1212 15:22:24.860944 5123 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a7c04d65-b256-41f8-ad71-a599942be2fc-config\") pod \"a7c04d65-b256-41f8-ad71-a599942be2fc\" (UID: \"a7c04d65-b256-41f8-ad71-a599942be2fc\") "
Dec 12 15:22:24 crc kubenswrapper[5123]: I1212 15:22:24.860991 5123 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a7c04d65-b256-41f8-ad71-a599942be2fc-client-ca\") pod \"a7c04d65-b256-41f8-ad71-a599942be2fc\" (UID: \"a7c04d65-b256-41f8-ad71-a599942be2fc\") "
Dec 12 15:22:24 crc kubenswrapper[5123]: I1212 15:22:24.861082 5123 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4dlxw\" (UniqueName: \"kubernetes.io/projected/a7c04d65-b256-41f8-ad71-a599942be2fc-kube-api-access-4dlxw\") pod \"a7c04d65-b256-41f8-ad71-a599942be2fc\" (UID: \"a7c04d65-b256-41f8-ad71-a599942be2fc\") "
Dec 12 15:22:24 crc kubenswrapper[5123]: I1212 15:22:24.861117 5123 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/a7c04d65-b256-41f8-ad71-a599942be2fc-proxy-ca-bundles\") pod \"a7c04d65-b256-41f8-ad71-a599942be2fc\" (UID: \"a7c04d65-b256-41f8-ad71-a599942be2fc\") "
Dec 12 15:22:24 crc kubenswrapper[5123]: I1212 15:22:24.861156 5123 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/a7c04d65-b256-41f8-ad71-a599942be2fc-tmp\") pod \"a7c04d65-b256-41f8-ad71-a599942be2fc\" (UID: \"a7c04d65-b256-41f8-ad71-a599942be2fc\") "
Dec 12 15:22:24 crc kubenswrapper[5123]: E1212 15:22:24.861200 5123 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-12 15:22:25.361153518 +0000 UTC m=+174.171106029 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 12 15:22:24 crc kubenswrapper[5123]: I1212 15:22:24.861556 5123 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a7c04d65-b256-41f8-ad71-a599942be2fc-tmp" (OuterVolumeSpecName: "tmp") pod "a7c04d65-b256-41f8-ad71-a599942be2fc" (UID: "a7c04d65-b256-41f8-ad71-a599942be2fc"). InnerVolumeSpecName "tmp". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Dec 12 15:22:24 crc kubenswrapper[5123]: I1212 15:22:24.861704 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-ts2mt\" (UID: \"40d826be-27f2-49a2-afa5-a6a9cc8f9bf6\") " pod="openshift-image-registry/image-registry-66587d64c8-ts2mt"
Dec 12 15:22:24 crc kubenswrapper[5123]: I1212 15:22:24.861725 5123 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a7c04d65-b256-41f8-ad71-a599942be2fc-client-ca" (OuterVolumeSpecName: "client-ca") pod "a7c04d65-b256-41f8-ad71-a599942be2fc" (UID: "a7c04d65-b256-41f8-ad71-a599942be2fc"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Dec 12 15:22:24 crc kubenswrapper[5123]: I1212 15:22:24.861736 5123 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a7c04d65-b256-41f8-ad71-a599942be2fc-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "a7c04d65-b256-41f8-ad71-a599942be2fc" (UID: "a7c04d65-b256-41f8-ad71-a599942be2fc"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Dec 12 15:22:24 crc kubenswrapper[5123]: I1212 15:22:24.861883 5123 reconciler_common.go:299] "Volume detached for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/a7c04d65-b256-41f8-ad71-a599942be2fc-tmp\") on node \"crc\" DevicePath \"\""
Dec 12 15:22:24 crc kubenswrapper[5123]: E1212 15:22:24.862264 5123 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-12 15:22:25.362254333 +0000 UTC m=+174.172206844 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-ts2mt" (UID: "40d826be-27f2-49a2-afa5-a6a9cc8f9bf6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 12 15:22:24 crc kubenswrapper[5123]: I1212 15:22:24.862431 5123 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a7c04d65-b256-41f8-ad71-a599942be2fc-config" (OuterVolumeSpecName: "config") pod "a7c04d65-b256-41f8-ad71-a599942be2fc" (UID: "a7c04d65-b256-41f8-ad71-a599942be2fc"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Dec 12 15:22:24 crc kubenswrapper[5123]: I1212 15:22:24.865990 5123 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a7c04d65-b256-41f8-ad71-a599942be2fc-kube-api-access-4dlxw" (OuterVolumeSpecName: "kube-api-access-4dlxw") pod "a7c04d65-b256-41f8-ad71-a599942be2fc" (UID: "a7c04d65-b256-41f8-ad71-a599942be2fc"). InnerVolumeSpecName "kube-api-access-4dlxw". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Dec 12 15:22:24 crc kubenswrapper[5123]: I1212 15:22:24.866599 5123 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a7c04d65-b256-41f8-ad71-a599942be2fc-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "a7c04d65-b256-41f8-ad71-a599942be2fc" (UID: "a7c04d65-b256-41f8-ad71-a599942be2fc"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Dec 12 15:22:24 crc kubenswrapper[5123]: I1212 15:22:24.963147 5123 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Dec 12 15:22:24 crc kubenswrapper[5123]: I1212 15:22:24.963799 5123 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a7c04d65-b256-41f8-ad71-a599942be2fc-serving-cert\") on node \"crc\" DevicePath \"\""
Dec 12 15:22:24 crc kubenswrapper[5123]: I1212 15:22:24.963835 5123 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a7c04d65-b256-41f8-ad71-a599942be2fc-config\") on node \"crc\" DevicePath \"\""
Dec 12 15:22:24 crc kubenswrapper[5123]: I1212 15:22:24.963851 5123 reconciler_common.go:299] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a7c04d65-b256-41f8-ad71-a599942be2fc-client-ca\") on node \"crc\" DevicePath \"\""
Dec 12 15:22:24 crc kubenswrapper[5123]: I1212 15:22:24.963876 5123 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-4dlxw\" (UniqueName: \"kubernetes.io/projected/a7c04d65-b256-41f8-ad71-a599942be2fc-kube-api-access-4dlxw\") on node \"crc\" DevicePath \"\""
Dec 12 15:22:24 crc kubenswrapper[5123]: I1212 15:22:24.963896 5123 reconciler_common.go:299] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/a7c04d65-b256-41f8-ad71-a599942be2fc-proxy-ca-bundles\") on node \"crc\" DevicePath \"\""
Dec 12 15:22:24 crc kubenswrapper[5123]: E1212 15:22:24.964114 5123 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-12 15:22:25.464026832 +0000 UTC m=+174.273979343 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 12 15:22:25 crc kubenswrapper[5123]: I1212 15:22:25.065481 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-ts2mt\" (UID: \"40d826be-27f2-49a2-afa5-a6a9cc8f9bf6\") " pod="openshift-image-registry/image-registry-66587d64c8-ts2mt"
Dec 12 15:22:25 crc kubenswrapper[5123]: E1212 15:22:25.066412 5123 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-12 15:22:25.566382808 +0000 UTC m=+174.376335329 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-ts2mt" (UID: "40d826be-27f2-49a2-afa5-a6a9cc8f9bf6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 12 15:22:25 crc kubenswrapper[5123]: I1212 15:22:25.167713 5123 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Dec 12 15:22:25 crc kubenswrapper[5123]: E1212 15:22:25.168521 5123 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-12 15:22:25.668489588 +0000 UTC m=+174.478442099 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 12 15:22:25 crc kubenswrapper[5123]: I1212 15:22:25.302411 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-ts2mt\" (UID: \"40d826be-27f2-49a2-afa5-a6a9cc8f9bf6\") " pod="openshift-image-registry/image-registry-66587d64c8-ts2mt"
Dec 12 15:22:25 crc kubenswrapper[5123]: E1212 15:22:25.302918 5123 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-12 15:22:25.802890091 +0000 UTC m=+174.612842602 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-ts2mt" (UID: "40d826be-27f2-49a2-afa5-a6a9cc8f9bf6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 12 15:22:25 crc kubenswrapper[5123]: I1212 15:22:25.404211 5123 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Dec 12 15:22:25 crc kubenswrapper[5123]: E1212 15:22:25.404575 5123 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-12 15:22:25.904519775 +0000 UTC m=+174.714472286 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 12 15:22:25 crc kubenswrapper[5123]: I1212 15:22:25.404989 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-ts2mt\" (UID: \"40d826be-27f2-49a2-afa5-a6a9cc8f9bf6\") " pod="openshift-image-registry/image-registry-66587d64c8-ts2mt"
Dec 12 15:22:25 crc kubenswrapper[5123]: E1212 15:22:25.405450 5123 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-12 15:22:25.905429703 +0000 UTC m=+174.715382214 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-ts2mt" (UID: "40d826be-27f2-49a2-afa5-a6a9cc8f9bf6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 12 15:22:25 crc kubenswrapper[5123]: I1212 15:22:25.493692 5123 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/revision-pruner-6-crc"
Dec 12 15:22:25 crc kubenswrapper[5123]: I1212 15:22:25.506973 5123 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Dec 12 15:22:25 crc kubenswrapper[5123]: E1212 15:22:25.507770 5123 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-12 15:22:26.007730648 +0000 UTC m=+174.817683159 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 12 15:22:25 crc kubenswrapper[5123]: I1212 15:22:25.608146 5123 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/09029143-1cd7-445a-bcff-2e8cd5d5a8b9-kube-api-access\") pod \"09029143-1cd7-445a-bcff-2e8cd5d5a8b9\" (UID: \"09029143-1cd7-445a-bcff-2e8cd5d5a8b9\") "
Dec 12 15:22:25 crc kubenswrapper[5123]: I1212 15:22:25.608584 5123 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/09029143-1cd7-445a-bcff-2e8cd5d5a8b9-kubelet-dir\") pod \"09029143-1cd7-445a-bcff-2e8cd5d5a8b9\" (UID: \"09029143-1cd7-445a-bcff-2e8cd5d5a8b9\") "
Dec 12 15:22:25 crc kubenswrapper[5123]: I1212 15:22:25.608856 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-ts2mt\" (UID: \"40d826be-27f2-49a2-afa5-a6a9cc8f9bf6\") " pod="openshift-image-registry/image-registry-66587d64c8-ts2mt"
Dec 12 15:22:25 crc kubenswrapper[5123]: E1212 15:22:25.609372 5123 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-12 15:22:26.109352822 +0000 UTC m=+174.919305343 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-ts2mt" (UID: "40d826be-27f2-49a2-afa5-a6a9cc8f9bf6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 12 15:22:25 crc kubenswrapper[5123]: I1212 15:22:25.610465 5123 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/09029143-1cd7-445a-bcff-2e8cd5d5a8b9-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "09029143-1cd7-445a-bcff-2e8cd5d5a8b9" (UID: "09029143-1cd7-445a-bcff-2e8cd5d5a8b9"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Dec 12 15:22:25 crc kubenswrapper[5123]: I1212 15:22:25.630058 5123 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09029143-1cd7-445a-bcff-2e8cd5d5a8b9-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "09029143-1cd7-445a-bcff-2e8cd5d5a8b9" (UID: "09029143-1cd7-445a-bcff-2e8cd5d5a8b9"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Dec 12 15:22:25 crc kubenswrapper[5123]: I1212 15:22:25.656709 5123 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5ccaedd0-63de-4f5b-9106-b556e01fa2b8" path="/var/lib/kubelet/pods/5ccaedd0-63de-4f5b-9106-b556e01fa2b8/volumes"
Dec 12 15:22:25 crc kubenswrapper[5123]: I1212 15:22:25.657824 5123 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bf62556f-373c-41a0-96d4-8f431d629029" path="/var/lib/kubelet/pods/bf62556f-373c-41a0-96d4-8f431d629029/volumes"
Dec 12 15:22:25 crc kubenswrapper[5123]: I1212 15:22:25.686449 5123 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-cnq9c container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Dec 12 15:22:25 crc kubenswrapper[5123]: [-]has-synced failed: reason withheld
Dec 12 15:22:25 crc kubenswrapper[5123]: [+]process-running ok
Dec 12 15:22:25 crc kubenswrapper[5123]: healthz check failed
Dec 12 15:22:25 crc kubenswrapper[5123]: I1212 15:22:25.686734 5123 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-cnq9c" podUID="dd669a9c-af5d-4084-bda4-81a455d4c281" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Dec 12 15:22:25 crc kubenswrapper[5123]: I1212 15:22:25.711184 5123 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Dec 12 15:22:25 crc kubenswrapper[5123]: E1212 15:22:25.711444 5123 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-12 15:22:26.211394419 +0000 UTC m=+175.021346920 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 12 15:22:25 crc kubenswrapper[5123]: I1212 15:22:25.711751 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-ts2mt\" (UID: \"40d826be-27f2-49a2-afa5-a6a9cc8f9bf6\") " pod="openshift-image-registry/image-registry-66587d64c8-ts2mt"
Dec 12 15:22:25 crc kubenswrapper[5123]: I1212 15:22:25.711956 5123 reconciler_common.go:299] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/09029143-1cd7-445a-bcff-2e8cd5d5a8b9-kube-api-access\") on node \"crc\" DevicePath \"\""
Dec 12 15:22:25 crc kubenswrapper[5123]: I1212 15:22:25.711992 5123 reconciler_common.go:299] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/09029143-1cd7-445a-bcff-2e8cd5d5a8b9-kubelet-dir\") on node \"crc\" DevicePath \"\""
Dec 12 15:22:25 crc kubenswrapper[5123]: E1212 15:22:25.712812 5123 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-12 15:22:26.212791243 +0000 UTC m=+175.022743754 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-ts2mt" (UID: "40d826be-27f2-49a2-afa5-a6a9cc8f9bf6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 12 15:22:25 crc kubenswrapper[5123]: I1212 15:22:25.814266 5123 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Dec 12 15:22:25 crc kubenswrapper[5123]: E1212 15:22:25.814749 5123 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-12 15:22:26.314685655 +0000 UTC m=+175.124638166 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 12 15:22:25 crc kubenswrapper[5123]: I1212 15:22:25.815108 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-ts2mt\" (UID: \"40d826be-27f2-49a2-afa5-a6a9cc8f9bf6\") " pod="openshift-image-registry/image-registry-66587d64c8-ts2mt"
Dec 12 15:22:25 crc kubenswrapper[5123]: E1212 15:22:25.815836 5123 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-12 15:22:26.31580433 +0000 UTC m=+175.125756861 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-ts2mt" (UID: "40d826be-27f2-49a2-afa5-a6a9cc8f9bf6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 12 15:22:25 crc kubenswrapper[5123]: I1212 15:22:25.921436 5123 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Dec 12 15:22:25 crc kubenswrapper[5123]: E1212 15:22:25.922048 5123 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-12 15:22:26.422017998 +0000 UTC m=+175.231970519 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 12 15:22:25 crc kubenswrapper[5123]: I1212 15:22:25.940684 5123 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-apiserver/apiserver-9ddfb9f55-bmckw"
Dec 12 15:22:25 crc kubenswrapper[5123]: I1212 15:22:25.941429 5123 generic.go:358] "Generic (PLEG): container finished" podID="4ee797cc-c90f-4f08-ad20-2dcbdd16ab4f" containerID="b403c0a969caf6bcd8e5eba1fbf835dfe54c0840cefd7e0be46dd52e5f32a859" exitCode=0
Dec 12 15:22:25 crc kubenswrapper[5123]: I1212 15:22:25.941665 5123 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-11-crc" event={"ID":"4ee797cc-c90f-4f08-ad20-2dcbdd16ab4f","Type":"ContainerDied","Data":"b403c0a969caf6bcd8e5eba1fbf835dfe54c0840cefd7e0be46dd52e5f32a859"}
Dec 12 15:22:25 crc kubenswrapper[5123]: I1212 15:22:25.947169 5123 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-apiserver/apiserver-9ddfb9f55-bmckw"
Dec 12 15:22:25 crc kubenswrapper[5123]: I1212 15:22:25.949870 5123 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/revision-pruner-6-crc"
Dec 12 15:22:25 crc kubenswrapper[5123]: I1212 15:22:25.949896 5123 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-649d957586-ms9dj"
Dec 12 15:22:25 crc kubenswrapper[5123]: I1212 15:22:25.949943 5123 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/revision-pruner-6-crc" event={"ID":"09029143-1cd7-445a-bcff-2e8cd5d5a8b9","Type":"ContainerDied","Data":"9e270e7b5f59bce45ac70f9b6e74446ac499511192820eb24524bc5c2b9590bb"}
Dec 12 15:22:25 crc kubenswrapper[5123]: I1212 15:22:25.949996 5123 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9e270e7b5f59bce45ac70f9b6e74446ac499511192820eb24524bc5c2b9590bb"
Dec 12 15:22:26 crc kubenswrapper[5123]: I1212 15:22:26.023402 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-ts2mt\" (UID: \"40d826be-27f2-49a2-afa5-a6a9cc8f9bf6\") " pod="openshift-image-registry/image-registry-66587d64c8-ts2mt"
Dec 12 15:22:26 crc kubenswrapper[5123]: E1212 15:22:26.025402 5123 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-12 15:22:26.525376996 +0000 UTC m=+175.335329507 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-ts2mt" (UID: "40d826be-27f2-49a2-afa5-a6a9cc8f9bf6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 12 15:22:26 crc kubenswrapper[5123]: I1212 15:22:26.133518 5123 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Dec 12 15:22:26 crc kubenswrapper[5123]: E1212 15:22:26.134160 5123 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-12 15:22:26.634132254 +0000 UTC m=+175.444084775 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 12 15:22:26 crc kubenswrapper[5123]: I1212 15:22:26.139515 5123 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-5d49859f95-pcm7k"]
Dec 12 15:22:26 crc kubenswrapper[5123]: I1212 15:22:26.140410 5123 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="09029143-1cd7-445a-bcff-2e8cd5d5a8b9" containerName="pruner"
Dec 12 15:22:26 crc kubenswrapper[5123]: I1212 15:22:26.140440 5123 state_mem.go:107] "Deleted CPUSet assignment" podUID="09029143-1cd7-445a-bcff-2e8cd5d5a8b9" containerName="pruner"
Dec 12 15:22:26 crc kubenswrapper[5123]: I1212 15:22:26.140602 5123 memory_manager.go:356] "RemoveStaleState removing state" podUID="09029143-1cd7-445a-bcff-2e8cd5d5a8b9" containerName="pruner"
Dec 12 15:22:26 crc kubenswrapper[5123]: I1212 15:22:26.274142 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-ts2mt\" (UID: \"40d826be-27f2-49a2-afa5-a6a9cc8f9bf6\") " pod="openshift-image-registry/image-registry-66587d64c8-ts2mt"
Dec 12 15:22:26 crc kubenswrapper[5123]: E1212 15:22:26.274619 5123 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-12 15:22:26.774599338 +0000 UTC m=+175.584551849 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-ts2mt" (UID: "40d826be-27f2-49a2-afa5-a6a9cc8f9bf6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 12 15:22:26 crc kubenswrapper[5123]: I1212 15:22:26.420281 5123 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Dec 12 15:22:26 crc kubenswrapper[5123]: E1212 15:22:26.420681 5123 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-12 15:22:26.920638448 +0000 UTC m=+175.730590959 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 12 15:22:26 crc kubenswrapper[5123]: I1212 15:22:26.421333 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-ts2mt\" (UID: \"40d826be-27f2-49a2-afa5-a6a9cc8f9bf6\") " pod="openshift-image-registry/image-registry-66587d64c8-ts2mt"
Dec 12 15:22:26 crc kubenswrapper[5123]: E1212 15:22:26.421910 5123 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-12 15:22:26.921897028 +0000 UTC m=+175.731849539 (durationBeforeRetry 500ms).
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-ts2mt" (UID: "40d826be-27f2-49a2-afa5-a6a9cc8f9bf6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 15:22:26 crc kubenswrapper[5123]: I1212 15:22:26.467706 5123 patch_prober.go:28] interesting pod/downloads-747b44746d-xhd9t container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.21:8080/\": dial tcp 10.217.0.21:8080: connect: connection refused" start-of-body= Dec 12 15:22:26 crc kubenswrapper[5123]: I1212 15:22:26.467852 5123 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-747b44746d-xhd9t" podUID="09107a60-87da-4e17-9cc0-6dce06396ab6" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.21:8080/\": dial tcp 10.217.0.21:8080: connect: connection refused" Dec 12 15:22:26 crc kubenswrapper[5123]: I1212 15:22:26.522860 5123 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 12 15:22:26 crc kubenswrapper[5123]: E1212 15:22:26.523394 5123 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-12 15:22:27.023361847 +0000 UTC m=+175.833314358 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 15:22:26 crc kubenswrapper[5123]: I1212 15:22:26.624984 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-ts2mt\" (UID: \"40d826be-27f2-49a2-afa5-a6a9cc8f9bf6\") " pod="openshift-image-registry/image-registry-66587d64c8-ts2mt" Dec 12 15:22:26 crc kubenswrapper[5123]: E1212 15:22:26.625912 5123 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-12 15:22:27.125889868 +0000 UTC m=+175.935842379 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-ts2mt" (UID: "40d826be-27f2-49a2-afa5-a6a9cc8f9bf6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 15:22:26 crc kubenswrapper[5123]: I1212 15:22:26.702579 5123 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-649d957586-ms9dj"] Dec 12 15:22:26 crc kubenswrapper[5123]: I1212 15:22:26.702663 5123 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-649d957586-ms9dj"] Dec 12 15:22:26 crc kubenswrapper[5123]: I1212 15:22:26.702709 5123 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-5d49859f95-pcm7k"] Dec 12 15:22:26 crc kubenswrapper[5123]: I1212 15:22:26.702792 5123 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-5d49859f95-pcm7k" Dec 12 15:22:26 crc kubenswrapper[5123]: I1212 15:22:26.782631 5123 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-cnq9c container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 12 15:22:26 crc kubenswrapper[5123]: [-]has-synced failed: reason withheld Dec 12 15:22:26 crc kubenswrapper[5123]: [+]process-running ok Dec 12 15:22:26 crc kubenswrapper[5123]: healthz check failed Dec 12 15:22:26 crc kubenswrapper[5123]: I1212 15:22:26.782778 5123 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-cnq9c" podUID="dd669a9c-af5d-4084-bda4-81a455d4c281" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 12 15:22:26 crc kubenswrapper[5123]: I1212 15:22:26.796932 5123 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager\"/\"kube-root-ca.crt\"" Dec 12 15:22:26 crc kubenswrapper[5123]: I1212 15:22:26.797493 5123 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager\"/\"openshift-service-ca.crt\"" Dec 12 15:22:26 crc kubenswrapper[5123]: I1212 15:22:26.797761 5123 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-controller-manager\"/\"openshift-controller-manager-sa-dockercfg-djmfg\"" Dec 12 15:22:26 crc kubenswrapper[5123]: I1212 15:22:26.798023 5123 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-controller-manager\"/\"serving-cert\"" Dec 12 15:22:26 crc kubenswrapper[5123]: I1212 15:22:26.798252 5123 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager\"/\"config\"" Dec 12 15:22:26 crc kubenswrapper[5123]: I1212 
15:22:26.798493 5123 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager\"/\"client-ca\"" Dec 12 15:22:26 crc kubenswrapper[5123]: I1212 15:22:26.800303 5123 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 12 15:22:26 crc kubenswrapper[5123]: I1212 15:22:26.800723 5123 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/01a8c257-f895-4044-aec0-ea9cb012126e-client-ca\") pod \"controller-manager-5d49859f95-pcm7k\" (UID: \"01a8c257-f895-4044-aec0-ea9cb012126e\") " pod="openshift-controller-manager/controller-manager-5d49859f95-pcm7k" Dec 12 15:22:26 crc kubenswrapper[5123]: I1212 15:22:26.800766 5123 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01a8c257-f895-4044-aec0-ea9cb012126e-serving-cert\") pod \"controller-manager-5d49859f95-pcm7k\" (UID: \"01a8c257-f895-4044-aec0-ea9cb012126e\") " pod="openshift-controller-manager/controller-manager-5d49859f95-pcm7k" Dec 12 15:22:26 crc kubenswrapper[5123]: I1212 15:22:26.800803 5123 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tm7br\" (UniqueName: \"kubernetes.io/projected/01a8c257-f895-4044-aec0-ea9cb012126e-kube-api-access-tm7br\") pod \"controller-manager-5d49859f95-pcm7k\" (UID: \"01a8c257-f895-4044-aec0-ea9cb012126e\") " pod="openshift-controller-manager/controller-manager-5d49859f95-pcm7k" Dec 12 15:22:26 crc kubenswrapper[5123]: I1212 15:22:26.800853 5123 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/01a8c257-f895-4044-aec0-ea9cb012126e-proxy-ca-bundles\") pod \"controller-manager-5d49859f95-pcm7k\" (UID: \"01a8c257-f895-4044-aec0-ea9cb012126e\") " pod="openshift-controller-manager/controller-manager-5d49859f95-pcm7k" Dec 12 15:22:26 crc kubenswrapper[5123]: I1212 15:22:26.800910 5123 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01a8c257-f895-4044-aec0-ea9cb012126e-config\") pod \"controller-manager-5d49859f95-pcm7k\" (UID: \"01a8c257-f895-4044-aec0-ea9cb012126e\") " pod="openshift-controller-manager/controller-manager-5d49859f95-pcm7k" Dec 12 15:22:26 crc kubenswrapper[5123]: I1212 15:22:26.800955 5123 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/01a8c257-f895-4044-aec0-ea9cb012126e-tmp\") pod \"controller-manager-5d49859f95-pcm7k\" (UID: \"01a8c257-f895-4044-aec0-ea9cb012126e\") " pod="openshift-controller-manager/controller-manager-5d49859f95-pcm7k" Dec 12 15:22:26 crc kubenswrapper[5123]: E1212 15:22:26.801273 5123 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-12 15:22:27.301237689 +0000 UTC m=+176.111190200 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 15:22:26 crc kubenswrapper[5123]: I1212 15:22:26.824355 5123 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager\"/\"openshift-global-ca\"" Dec 12 15:22:26 crc kubenswrapper[5123]: I1212 15:22:26.902196 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01a8c257-f895-4044-aec0-ea9cb012126e-config\") pod \"controller-manager-5d49859f95-pcm7k\" (UID: \"01a8c257-f895-4044-aec0-ea9cb012126e\") " pod="openshift-controller-manager/controller-manager-5d49859f95-pcm7k" Dec 12 15:22:26 crc kubenswrapper[5123]: I1212 15:22:26.902272 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/01a8c257-f895-4044-aec0-ea9cb012126e-tmp\") pod \"controller-manager-5d49859f95-pcm7k\" (UID: \"01a8c257-f895-4044-aec0-ea9cb012126e\") " pod="openshift-controller-manager/controller-manager-5d49859f95-pcm7k" Dec 12 15:22:26 crc kubenswrapper[5123]: I1212 15:22:26.902358 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-ts2mt\" (UID: \"40d826be-27f2-49a2-afa5-a6a9cc8f9bf6\") " pod="openshift-image-registry/image-registry-66587d64c8-ts2mt" Dec 12 15:22:26 crc kubenswrapper[5123]: I1212 15:22:26.902379 5123 
reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/01a8c257-f895-4044-aec0-ea9cb012126e-client-ca\") pod \"controller-manager-5d49859f95-pcm7k\" (UID: \"01a8c257-f895-4044-aec0-ea9cb012126e\") " pod="openshift-controller-manager/controller-manager-5d49859f95-pcm7k" Dec 12 15:22:26 crc kubenswrapper[5123]: I1212 15:22:26.902397 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01a8c257-f895-4044-aec0-ea9cb012126e-serving-cert\") pod \"controller-manager-5d49859f95-pcm7k\" (UID: \"01a8c257-f895-4044-aec0-ea9cb012126e\") " pod="openshift-controller-manager/controller-manager-5d49859f95-pcm7k" Dec 12 15:22:26 crc kubenswrapper[5123]: I1212 15:22:26.902433 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-tm7br\" (UniqueName: \"kubernetes.io/projected/01a8c257-f895-4044-aec0-ea9cb012126e-kube-api-access-tm7br\") pod \"controller-manager-5d49859f95-pcm7k\" (UID: \"01a8c257-f895-4044-aec0-ea9cb012126e\") " pod="openshift-controller-manager/controller-manager-5d49859f95-pcm7k" Dec 12 15:22:26 crc kubenswrapper[5123]: I1212 15:22:26.902468 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/01a8c257-f895-4044-aec0-ea9cb012126e-proxy-ca-bundles\") pod \"controller-manager-5d49859f95-pcm7k\" (UID: \"01a8c257-f895-4044-aec0-ea9cb012126e\") " pod="openshift-controller-manager/controller-manager-5d49859f95-pcm7k" Dec 12 15:22:26 crc kubenswrapper[5123]: E1212 15:22:26.903599 5123 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-12 15:22:27.403581435 +0000 UTC m=+176.213533946 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-ts2mt" (UID: "40d826be-27f2-49a2-afa5-a6a9cc8f9bf6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 15:22:26 crc kubenswrapper[5123]: I1212 15:22:26.904146 5123 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/01a8c257-f895-4044-aec0-ea9cb012126e-tmp\") pod \"controller-manager-5d49859f95-pcm7k\" (UID: \"01a8c257-f895-4044-aec0-ea9cb012126e\") " pod="openshift-controller-manager/controller-manager-5d49859f95-pcm7k" Dec 12 15:22:26 crc kubenswrapper[5123]: I1212 15:22:26.905175 5123 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/01a8c257-f895-4044-aec0-ea9cb012126e-client-ca\") pod \"controller-manager-5d49859f95-pcm7k\" (UID: \"01a8c257-f895-4044-aec0-ea9cb012126e\") " pod="openshift-controller-manager/controller-manager-5d49859f95-pcm7k" Dec 12 15:22:26 crc kubenswrapper[5123]: I1212 15:22:26.905201 5123 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/01a8c257-f895-4044-aec0-ea9cb012126e-proxy-ca-bundles\") pod \"controller-manager-5d49859f95-pcm7k\" (UID: \"01a8c257-f895-4044-aec0-ea9cb012126e\") " pod="openshift-controller-manager/controller-manager-5d49859f95-pcm7k" Dec 12 15:22:26 crc kubenswrapper[5123]: I1212 15:22:26.905185 5123 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01a8c257-f895-4044-aec0-ea9cb012126e-config\") pod \"controller-manager-5d49859f95-pcm7k\" (UID: \"01a8c257-f895-4044-aec0-ea9cb012126e\") " 
pod="openshift-controller-manager/controller-manager-5d49859f95-pcm7k" Dec 12 15:22:26 crc kubenswrapper[5123]: I1212 15:22:26.918927 5123 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01a8c257-f895-4044-aec0-ea9cb012126e-serving-cert\") pod \"controller-manager-5d49859f95-pcm7k\" (UID: \"01a8c257-f895-4044-aec0-ea9cb012126e\") " pod="openshift-controller-manager/controller-manager-5d49859f95-pcm7k" Dec 12 15:22:26 crc kubenswrapper[5123]: I1212 15:22:26.927313 5123 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-tm7br\" (UniqueName: \"kubernetes.io/projected/01a8c257-f895-4044-aec0-ea9cb012126e-kube-api-access-tm7br\") pod \"controller-manager-5d49859f95-pcm7k\" (UID: \"01a8c257-f895-4044-aec0-ea9cb012126e\") " pod="openshift-controller-manager/controller-manager-5d49859f95-pcm7k" Dec 12 15:22:26 crc kubenswrapper[5123]: I1212 15:22:26.968180 5123 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-dd69b4f99-vs9nj" event={"ID":"12c7a0a2-e5fd-411f-806a-d230792a9422","Type":"ContainerStarted","Data":"5ea995ab714d74db5475f0686a93d97b3714d92e036d47e083ab0e8517fa506e"} Dec 12 15:22:27 crc kubenswrapper[5123]: I1212 15:22:27.067976 5123 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 12 15:22:27 crc kubenswrapper[5123]: E1212 15:22:27.068155 5123 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. 
No retries permitted until 2025-12-12 15:22:27.568119356 +0000 UTC m=+176.378071867 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 15:22:27 crc kubenswrapper[5123]: I1212 15:22:27.068626 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-ts2mt\" (UID: \"40d826be-27f2-49a2-afa5-a6a9cc8f9bf6\") " pod="openshift-image-registry/image-registry-66587d64c8-ts2mt" Dec 12 15:22:27 crc kubenswrapper[5123]: E1212 15:22:27.069050 5123 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-12 15:22:27.569041595 +0000 UTC m=+176.378994106 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-ts2mt" (UID: "40d826be-27f2-49a2-afa5-a6a9cc8f9bf6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 15:22:27 crc kubenswrapper[5123]: I1212 15:22:27.300578 5123 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-5d49859f95-pcm7k" Dec 12 15:22:27 crc kubenswrapper[5123]: I1212 15:22:27.301472 5123 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 12 15:22:27 crc kubenswrapper[5123]: E1212 15:22:27.302304 5123 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-12 15:22:27.802206622 +0000 UTC m=+176.612159133 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 15:22:27 crc kubenswrapper[5123]: I1212 15:22:27.302650 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-ts2mt\" (UID: \"40d826be-27f2-49a2-afa5-a6a9cc8f9bf6\") " pod="openshift-image-registry/image-registry-66587d64c8-ts2mt" Dec 12 15:22:27 crc kubenswrapper[5123]: E1212 15:22:27.303280 5123 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-12 15:22:27.803267636 +0000 UTC m=+176.613220147 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-ts2mt" (UID: "40d826be-27f2-49a2-afa5-a6a9cc8f9bf6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 15:22:27 crc kubenswrapper[5123]: I1212 15:22:27.416466 5123 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 12 15:22:27 crc kubenswrapper[5123]: E1212 15:22:27.417638 5123 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-12 15:22:27.917610229 +0000 UTC m=+176.727562740 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 15:22:27 crc kubenswrapper[5123]: I1212 15:22:27.519376 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-ts2mt\" (UID: \"40d826be-27f2-49a2-afa5-a6a9cc8f9bf6\") " pod="openshift-image-registry/image-registry-66587d64c8-ts2mt" Dec 12 15:22:27 crc kubenswrapper[5123]: E1212 15:22:27.519827 5123 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-12 15:22:28.019808902 +0000 UTC m=+176.829761403 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-ts2mt" (UID: "40d826be-27f2-49a2-afa5-a6a9cc8f9bf6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 15:22:27 crc kubenswrapper[5123]: I1212 15:22:27.631377 5123 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 12 15:22:27 crc kubenswrapper[5123]: E1212 15:22:27.633361 5123 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-12 15:22:28.133327648 +0000 UTC m=+176.943280159 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 15:22:27 crc kubenswrapper[5123]: I1212 15:22:27.775246 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-ts2mt\" (UID: \"40d826be-27f2-49a2-afa5-a6a9cc8f9bf6\") " pod="openshift-image-registry/image-registry-66587d64c8-ts2mt" Dec 12 15:22:27 crc kubenswrapper[5123]: E1212 15:22:27.776079 5123 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-12 15:22:28.276058864 +0000 UTC m=+177.086011375 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-ts2mt" (UID: "40d826be-27f2-49a2-afa5-a6a9cc8f9bf6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 15:22:27 crc kubenswrapper[5123]: I1212 15:22:27.777325 5123 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-cnq9c container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 12 15:22:27 crc kubenswrapper[5123]: [-]has-synced failed: reason withheld Dec 12 15:22:27 crc kubenswrapper[5123]: [+]process-running ok Dec 12 15:22:27 crc kubenswrapper[5123]: healthz check failed Dec 12 15:22:27 crc kubenswrapper[5123]: I1212 15:22:27.777388 5123 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-cnq9c" podUID="dd669a9c-af5d-4084-bda4-81a455d4c281" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 12 15:22:28 crc kubenswrapper[5123]: I1212 15:22:27.879322 5123 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 12 15:22:28 crc kubenswrapper[5123]: E1212 15:22:27.881350 5123 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. 
No retries permitted until 2025-12-12 15:22:28.381312323 +0000 UTC m=+177.191264834 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 15:22:28 crc kubenswrapper[5123]: I1212 15:22:28.158604 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-ts2mt\" (UID: \"40d826be-27f2-49a2-afa5-a6a9cc8f9bf6\") " pod="openshift-image-registry/image-registry-66587d64c8-ts2mt" Dec 12 15:22:28 crc kubenswrapper[5123]: E1212 15:22:28.159116 5123 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-12 15:22:28.659092461 +0000 UTC m=+177.469044972 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-ts2mt" (UID: "40d826be-27f2-49a2-afa5-a6a9cc8f9bf6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 15:22:28 crc kubenswrapper[5123]: I1212 15:22:28.260108 5123 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 12 15:22:28 crc kubenswrapper[5123]: E1212 15:22:28.260827 5123 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-12 15:22:28.760783808 +0000 UTC m=+177.570736339 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 15:22:28 crc kubenswrapper[5123]: I1212 15:22:28.365695 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-ts2mt\" (UID: \"40d826be-27f2-49a2-afa5-a6a9cc8f9bf6\") " pod="openshift-image-registry/image-registry-66587d64c8-ts2mt" Dec 12 15:22:28 crc kubenswrapper[5123]: E1212 15:22:28.366987 5123 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-12 15:22:28.86620735 +0000 UTC m=+177.676159861 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-ts2mt" (UID: "40d826be-27f2-49a2-afa5-a6a9cc8f9bf6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 15:22:28 crc kubenswrapper[5123]: I1212 15:22:28.600008 5123 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 12 15:22:28 crc kubenswrapper[5123]: E1212 15:22:28.600515 5123 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-12 15:22:29.100485901 +0000 UTC m=+177.910438412 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 15:22:28 crc kubenswrapper[5123]: I1212 15:22:28.613322 5123 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-dd69b4f99-vs9nj" podUID="12c7a0a2-e5fd-411f-806a-d230792a9422" containerName="route-controller-manager" containerID="cri-o://5ea995ab714d74db5475f0686a93d97b3714d92e036d47e083ab0e8517fa506e" gracePeriod=30 Dec 12 15:22:28 crc kubenswrapper[5123]: I1212 15:22:28.631674 5123 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a7c04d65-b256-41f8-ad71-a599942be2fc" path="/var/lib/kubelet/pods/a7c04d65-b256-41f8-ad71-a599942be2fc/volumes" Dec 12 15:22:28 crc kubenswrapper[5123]: I1212 15:22:28.632635 5123 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-route-controller-manager/route-controller-manager-dd69b4f99-vs9nj" Dec 12 15:22:28 crc kubenswrapper[5123]: I1212 15:22:28.731892 5123 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-dd69b4f99-vs9nj" Dec 12 15:22:28 crc kubenswrapper[5123]: I1212 15:22:28.733002 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-ts2mt\" (UID: \"40d826be-27f2-49a2-afa5-a6a9cc8f9bf6\") " 
pod="openshift-image-registry/image-registry-66587d64c8-ts2mt" Dec 12 15:22:28 crc kubenswrapper[5123]: E1212 15:22:28.733525 5123 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-12 15:22:29.233506523 +0000 UTC m=+178.043459044 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-ts2mt" (UID: "40d826be-27f2-49a2-afa5-a6a9cc8f9bf6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 15:22:28 crc kubenswrapper[5123]: I1212 15:22:28.738925 5123 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-dd69b4f99-vs9nj" podStartSLOduration=6.738909293 podStartE2EDuration="6.738909293s" podCreationTimestamp="2025-12-12 15:22:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 15:22:28.736463016 +0000 UTC m=+177.546415537" watchObservedRunningTime="2025-12-12 15:22:28.738909293 +0000 UTC m=+177.548861804" Dec 12 15:22:28 crc kubenswrapper[5123]: I1212 15:22:28.741574 5123 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-cnq9c container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 12 15:22:28 crc kubenswrapper[5123]: [-]has-synced failed: reason withheld Dec 12 15:22:28 crc kubenswrapper[5123]: [+]process-running ok Dec 12 15:22:28 crc kubenswrapper[5123]: healthz check 
failed Dec 12 15:22:28 crc kubenswrapper[5123]: I1212 15:22:28.741630 5123 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-cnq9c" podUID="dd669a9c-af5d-4084-bda4-81a455d4c281" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 12 15:22:28 crc kubenswrapper[5123]: I1212 15:22:28.834978 5123 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 12 15:22:28 crc kubenswrapper[5123]: E1212 15:22:28.835539 5123 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-12 15:22:29.33551092 +0000 UTC m=+178.145463431 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 15:22:28 crc kubenswrapper[5123]: I1212 15:22:28.962179 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-ts2mt\" (UID: \"40d826be-27f2-49a2-afa5-a6a9cc8f9bf6\") " pod="openshift-image-registry/image-registry-66587d64c8-ts2mt" Dec 12 15:22:28 crc kubenswrapper[5123]: E1212 15:22:28.962784 5123 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-12 15:22:29.462764618 +0000 UTC m=+178.272717129 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-ts2mt" (UID: "40d826be-27f2-49a2-afa5-a6a9cc8f9bf6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 15:22:29 crc kubenswrapper[5123]: I1212 15:22:29.063469 5123 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 12 15:22:29 crc kubenswrapper[5123]: E1212 15:22:29.063877 5123 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-12 15:22:29.563852505 +0000 UTC m=+178.373805016 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 15:22:29 crc kubenswrapper[5123]: I1212 15:22:29.165086 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-ts2mt\" (UID: \"40d826be-27f2-49a2-afa5-a6a9cc8f9bf6\") " pod="openshift-image-registry/image-registry-66587d64c8-ts2mt" Dec 12 15:22:29 crc kubenswrapper[5123]: E1212 15:22:29.165843 5123 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-12 15:22:29.66581861 +0000 UTC m=+178.475771121 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-ts2mt" (UID: "40d826be-27f2-49a2-afa5-a6a9cc8f9bf6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 15:22:29 crc kubenswrapper[5123]: I1212 15:22:29.243832 5123 patch_prober.go:28] interesting pod/downloads-747b44746d-xhd9t container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.21:8080/\": dial tcp 10.217.0.21:8080: connect: connection refused" start-of-body= Dec 12 15:22:29 crc kubenswrapper[5123]: I1212 15:22:29.243926 5123 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-747b44746d-xhd9t" podUID="09107a60-87da-4e17-9cc0-6dce06396ab6" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.21:8080/\": dial tcp 10.217.0.21:8080: connect: connection refused" Dec 12 15:22:29 crc kubenswrapper[5123]: I1212 15:22:29.266588 5123 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 12 15:22:29 crc kubenswrapper[5123]: E1212 15:22:29.266830 5123 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-12 15:22:29.766792283 +0000 UTC m=+178.576744794 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 15:22:29 crc kubenswrapper[5123]: I1212 15:22:29.267003 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-ts2mt\" (UID: \"40d826be-27f2-49a2-afa5-a6a9cc8f9bf6\") " pod="openshift-image-registry/image-registry-66587d64c8-ts2mt" Dec 12 15:22:29 crc kubenswrapper[5123]: E1212 15:22:29.267854 5123 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-12 15:22:29.767842426 +0000 UTC m=+178.577794947 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-ts2mt" (UID: "40d826be-27f2-49a2-afa5-a6a9cc8f9bf6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 15:22:29 crc kubenswrapper[5123]: I1212 15:22:29.618899 5123 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 12 15:22:29 crc kubenswrapper[5123]: E1212 15:22:29.619805 5123 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-12 15:22:30.119732463 +0000 UTC m=+178.929684974 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 15:22:29 crc kubenswrapper[5123]: I1212 15:22:29.622788 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-ts2mt\" (UID: \"40d826be-27f2-49a2-afa5-a6a9cc8f9bf6\") " pod="openshift-image-registry/image-registry-66587d64c8-ts2mt" Dec 12 15:22:29 crc kubenswrapper[5123]: E1212 15:22:29.623851 5123 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-12 15:22:30.123826651 +0000 UTC m=+178.933779172 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-ts2mt" (UID: "40d826be-27f2-49a2-afa5-a6a9cc8f9bf6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 15:22:29 crc kubenswrapper[5123]: I1212 15:22:29.689718 5123 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-cnq9c container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 12 15:22:29 crc kubenswrapper[5123]: [-]has-synced failed: reason withheld Dec 12 15:22:29 crc kubenswrapper[5123]: [+]process-running ok Dec 12 15:22:29 crc kubenswrapper[5123]: healthz check failed Dec 12 15:22:29 crc kubenswrapper[5123]: I1212 15:22:29.689804 5123 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-cnq9c" podUID="dd669a9c-af5d-4084-bda4-81a455d4c281" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 12 15:22:29 crc kubenswrapper[5123]: I1212 15:22:29.710664 5123 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-5d49859f95-pcm7k"] Dec 12 15:22:29 crc kubenswrapper[5123]: I1212 15:22:29.726010 5123 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 12 15:22:29 crc kubenswrapper[5123]: E1212 15:22:29.726273 5123 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-12 15:22:30.226225954 +0000 UTC m=+179.036178465 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 15:22:29 crc kubenswrapper[5123]: I1212 15:22:29.727207 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-ts2mt\" (UID: \"40d826be-27f2-49a2-afa5-a6a9cc8f9bf6\") " pod="openshift-image-registry/image-registry-66587d64c8-ts2mt" Dec 12 15:22:29 crc kubenswrapper[5123]: E1212 15:22:29.727728 5123 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-12 15:22:30.227710131 +0000 UTC m=+179.037662662 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-ts2mt" (UID: "40d826be-27f2-49a2-afa5-a6a9cc8f9bf6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 15:22:29 crc kubenswrapper[5123]: I1212 15:22:29.843918 5123 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 12 15:22:29 crc kubenswrapper[5123]: E1212 15:22:29.844099 5123 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-12 15:22:30.344065262 +0000 UTC m=+179.154017773 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 15:22:29 crc kubenswrapper[5123]: I1212 15:22:29.844878 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-ts2mt\" (UID: \"40d826be-27f2-49a2-afa5-a6a9cc8f9bf6\") " pod="openshift-image-registry/image-registry-66587d64c8-ts2mt" Dec 12 15:22:29 crc kubenswrapper[5123]: E1212 15:22:29.845414 5123 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-12 15:22:30.345390324 +0000 UTC m=+179.155342835 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-ts2mt" (UID: "40d826be-27f2-49a2-afa5-a6a9cc8f9bf6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 15:22:29 crc kubenswrapper[5123]: I1212 15:22:29.946173 5123 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 12 15:22:29 crc kubenswrapper[5123]: E1212 15:22:29.946448 5123 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-12 15:22:30.446413513 +0000 UTC m=+179.256366024 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 15:22:29 crc kubenswrapper[5123]: I1212 15:22:29.946738 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-ts2mt\" (UID: \"40d826be-27f2-49a2-afa5-a6a9cc8f9bf6\") " pod="openshift-image-registry/image-registry-66587d64c8-ts2mt" Dec 12 15:22:29 crc kubenswrapper[5123]: E1212 15:22:29.947397 5123 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-12 15:22:30.447388884 +0000 UTC m=+179.257341395 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-ts2mt" (UID: "40d826be-27f2-49a2-afa5-a6a9cc8f9bf6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 15:22:30 crc kubenswrapper[5123]: I1212 15:22:30.049982 5123 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 12 15:22:30 crc kubenswrapper[5123]: E1212 15:22:30.050493 5123 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-12 15:22:30.550459048 +0000 UTC m=+179.360411559 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 15:22:30 crc kubenswrapper[5123]: I1212 15:22:30.194264 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-ts2mt\" (UID: \"40d826be-27f2-49a2-afa5-a6a9cc8f9bf6\") " pod="openshift-image-registry/image-registry-66587d64c8-ts2mt" Dec 12 15:22:30 crc kubenswrapper[5123]: E1212 15:22:30.194761 5123 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-12 15:22:30.694743705 +0000 UTC m=+179.504696216 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-ts2mt" (UID: "40d826be-27f2-49a2-afa5-a6a9cc8f9bf6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 15:22:30 crc kubenswrapper[5123]: I1212 15:22:30.242683 5123 patch_prober.go:28] interesting pod/console-64d44f6ddf-96rdx container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.8:8443/health\": dial tcp 10.217.0.8:8443: connect: connection refused" start-of-body= Dec 12 15:22:30 crc kubenswrapper[5123]: I1212 15:22:30.242777 5123 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-console/console-64d44f6ddf-96rdx" podUID="7ff811e4-3864-456b-8e00-b9e2d1c49ed8" containerName="console" probeResult="failure" output="Get \"https://10.217.0.8:8443/health\": dial tcp 10.217.0.8:8443: connect: connection refused" Dec 12 15:22:30 crc kubenswrapper[5123]: I1212 15:22:30.295339 5123 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 12 15:22:30 crc kubenswrapper[5123]: E1212 15:22:30.295383 5123 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-12 15:22:30.795351317 +0000 UTC m=+179.605303828 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 15:22:30 crc kubenswrapper[5123]: I1212 15:22:30.295625 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-ts2mt\" (UID: \"40d826be-27f2-49a2-afa5-a6a9cc8f9bf6\") " pod="openshift-image-registry/image-registry-66587d64c8-ts2mt" Dec 12 15:22:30 crc kubenswrapper[5123]: E1212 15:22:30.296004 5123 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-12 15:22:30.795995937 +0000 UTC m=+179.605948448 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-ts2mt" (UID: "40d826be-27f2-49a2-afa5-a6a9cc8f9bf6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 15:22:30 crc kubenswrapper[5123]: I1212 15:22:30.413597 5123 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 12 15:22:30 crc kubenswrapper[5123]: E1212 15:22:30.413995 5123 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-12 15:22:30.913969215 +0000 UTC m=+179.723921726 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 15:22:30 crc kubenswrapper[5123]: I1212 15:22:30.515050 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-ts2mt\" (UID: \"40d826be-27f2-49a2-afa5-a6a9cc8f9bf6\") " pod="openshift-image-registry/image-registry-66587d64c8-ts2mt" Dec 12 15:22:30 crc kubenswrapper[5123]: E1212 15:22:30.515596 5123 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-12 15:22:31.015574039 +0000 UTC m=+179.825526550 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-ts2mt" (UID: "40d826be-27f2-49a2-afa5-a6a9cc8f9bf6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 15:22:30 crc kubenswrapper[5123]: I1212 15:22:30.616325 5123 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 12 15:22:30 crc kubenswrapper[5123]: E1212 15:22:30.616819 5123 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-12 15:22:31.116777008 +0000 UTC m=+179.926729519 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 15:22:30 crc kubenswrapper[5123]: I1212 15:22:30.718194 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-ts2mt\" (UID: \"40d826be-27f2-49a2-afa5-a6a9cc8f9bf6\") " pod="openshift-image-registry/image-registry-66587d64c8-ts2mt" Dec 12 15:22:30 crc kubenswrapper[5123]: E1212 15:22:30.718815 5123 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-12 15:22:31.218771724 +0000 UTC m=+180.028724235 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-ts2mt" (UID: "40d826be-27f2-49a2-afa5-a6a9cc8f9bf6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 15:22:30 crc kubenswrapper[5123]: I1212 15:22:30.819643 5123 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 12 15:22:30 crc kubenswrapper[5123]: E1212 15:22:30.820072 5123 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-12 15:22:31.320043847 +0000 UTC m=+180.129996358 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 15:22:30 crc kubenswrapper[5123]: I1212 15:22:30.836335 5123 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-cnq9c container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 12 15:22:30 crc kubenswrapper[5123]: [-]has-synced failed: reason withheld Dec 12 15:22:30 crc kubenswrapper[5123]: [+]process-running ok Dec 12 15:22:30 crc kubenswrapper[5123]: healthz check failed Dec 12 15:22:30 crc kubenswrapper[5123]: I1212 15:22:30.836404 5123 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-cnq9c" podUID="dd669a9c-af5d-4084-bda4-81a455d4c281" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 12 15:22:30 crc kubenswrapper[5123]: I1212 15:22:30.857642 5123 generic.go:358] "Generic (PLEG): container finished" podID="12c7a0a2-e5fd-411f-806a-d230792a9422" containerID="5ea995ab714d74db5475f0686a93d97b3714d92e036d47e083ab0e8517fa506e" exitCode=0 Dec 12 15:22:30 crc kubenswrapper[5123]: I1212 15:22:30.857811 5123 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-dd69b4f99-vs9nj" event={"ID":"12c7a0a2-e5fd-411f-806a-d230792a9422","Type":"ContainerDied","Data":"5ea995ab714d74db5475f0686a93d97b3714d92e036d47e083ab0e8517fa506e"} Dec 12 15:22:30 crc kubenswrapper[5123]: I1212 15:22:30.922675 5123 reconciler_common.go:224] 
"operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-ts2mt\" (UID: \"40d826be-27f2-49a2-afa5-a6a9cc8f9bf6\") " pod="openshift-image-registry/image-registry-66587d64c8-ts2mt" Dec 12 15:22:30 crc kubenswrapper[5123]: E1212 15:22:30.924078 5123 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-12 15:22:31.424053716 +0000 UTC m=+180.234006227 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-ts2mt" (UID: "40d826be-27f2-49a2-afa5-a6a9cc8f9bf6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 15:22:31 crc kubenswrapper[5123]: I1212 15:22:31.026002 5123 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 12 15:22:31 crc kubenswrapper[5123]: E1212 15:22:31.026476 5123 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-12 15:22:31.526442013 +0000 UTC m=+180.336394524 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 15:22:31 crc kubenswrapper[5123]: I1212 15:22:31.128010 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-ts2mt\" (UID: \"40d826be-27f2-49a2-afa5-a6a9cc8f9bf6\") " pod="openshift-image-registry/image-registry-66587d64c8-ts2mt" Dec 12 15:22:31 crc kubenswrapper[5123]: E1212 15:22:31.130438 5123 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-12 15:22:31.630368299 +0000 UTC m=+180.440320810 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-ts2mt" (UID: "40d826be-27f2-49a2-afa5-a6a9cc8f9bf6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 15:22:31 crc kubenswrapper[5123]: I1212 15:22:31.229920 5123 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 12 15:22:31 crc kubenswrapper[5123]: E1212 15:22:31.230824 5123 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-12 15:22:31.730773294 +0000 UTC m=+180.540725815 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 15:22:31 crc kubenswrapper[5123]: I1212 15:22:31.230930 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-ts2mt\" (UID: \"40d826be-27f2-49a2-afa5-a6a9cc8f9bf6\") " pod="openshift-image-registry/image-registry-66587d64c8-ts2mt" Dec 12 15:22:31 crc kubenswrapper[5123]: E1212 15:22:31.231488 5123 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-12 15:22:31.731465226 +0000 UTC m=+180.541417737 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-ts2mt" (UID: "40d826be-27f2-49a2-afa5-a6a9cc8f9bf6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 15:22:31 crc kubenswrapper[5123]: I1212 15:22:31.332758 5123 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 12 15:22:31 crc kubenswrapper[5123]: E1212 15:22:31.333431 5123 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-12 15:22:31.833403429 +0000 UTC m=+180.643355950 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 15:22:31 crc kubenswrapper[5123]: I1212 15:22:31.434642 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-ts2mt\" (UID: \"40d826be-27f2-49a2-afa5-a6a9cc8f9bf6\") " pod="openshift-image-registry/image-registry-66587d64c8-ts2mt" Dec 12 15:22:31 crc kubenswrapper[5123]: E1212 15:22:31.435506 5123 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-12 15:22:31.935474168 +0000 UTC m=+180.745426689 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-ts2mt" (UID: "40d826be-27f2-49a2-afa5-a6a9cc8f9bf6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 15:22:31 crc kubenswrapper[5123]: I1212 15:22:31.536701 5123 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 12 15:22:31 crc kubenswrapper[5123]: E1212 15:22:31.537005 5123 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-12 15:22:32.036956487 +0000 UTC m=+180.846908998 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 15:22:31 crc kubenswrapper[5123]: I1212 15:22:31.537338 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-ts2mt\" (UID: \"40d826be-27f2-49a2-afa5-a6a9cc8f9bf6\") " pod="openshift-image-registry/image-registry-66587d64c8-ts2mt" Dec 12 15:22:31 crc kubenswrapper[5123]: E1212 15:22:31.537943 5123 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-12 15:22:32.037919847 +0000 UTC m=+180.847872358 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-ts2mt" (UID: "40d826be-27f2-49a2-afa5-a6a9cc8f9bf6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 15:22:31 crc kubenswrapper[5123]: I1212 15:22:31.639390 5123 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 12 15:22:31 crc kubenswrapper[5123]: E1212 15:22:31.639544 5123 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-12 15:22:32.13951336 +0000 UTC m=+180.949465881 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 15:22:31 crc kubenswrapper[5123]: I1212 15:22:31.639827 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-ts2mt\" (UID: \"40d826be-27f2-49a2-afa5-a6a9cc8f9bf6\") " pod="openshift-image-registry/image-registry-66587d64c8-ts2mt" Dec 12 15:22:31 crc kubenswrapper[5123]: E1212 15:22:31.640279 5123 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-12 15:22:32.140265404 +0000 UTC m=+180.950217925 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-ts2mt" (UID: "40d826be-27f2-49a2-afa5-a6a9cc8f9bf6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 15:22:31 crc kubenswrapper[5123]: I1212 15:22:31.685049 5123 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-cnq9c container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 12 15:22:31 crc kubenswrapper[5123]: [-]has-synced failed: reason withheld Dec 12 15:22:31 crc kubenswrapper[5123]: [+]process-running ok Dec 12 15:22:31 crc kubenswrapper[5123]: healthz check failed Dec 12 15:22:31 crc kubenswrapper[5123]: I1212 15:22:31.685517 5123 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-cnq9c" podUID="dd669a9c-af5d-4084-bda4-81a455d4c281" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 12 15:22:31 crc kubenswrapper[5123]: I1212 15:22:31.740783 5123 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 12 15:22:31 crc kubenswrapper[5123]: E1212 15:22:31.740979 5123 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. 
No retries permitted until 2025-12-12 15:22:32.240942847 +0000 UTC m=+181.050895348 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 15:22:31 crc kubenswrapper[5123]: I1212 15:22:31.741652 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-ts2mt\" (UID: \"40d826be-27f2-49a2-afa5-a6a9cc8f9bf6\") " pod="openshift-image-registry/image-registry-66587d64c8-ts2mt" Dec 12 15:22:31 crc kubenswrapper[5123]: E1212 15:22:31.742677 5123 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-12 15:22:32.242661091 +0000 UTC m=+181.052613602 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-ts2mt" (UID: "40d826be-27f2-49a2-afa5-a6a9cc8f9bf6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 15:22:31 crc kubenswrapper[5123]: I1212 15:22:31.843283 5123 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 12 15:22:31 crc kubenswrapper[5123]: E1212 15:22:31.843596 5123 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-12 15:22:32.343558272 +0000 UTC m=+181.153510783 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 15:22:31 crc kubenswrapper[5123]: I1212 15:22:31.844049 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-ts2mt\" (UID: \"40d826be-27f2-49a2-afa5-a6a9cc8f9bf6\") " pod="openshift-image-registry/image-registry-66587d64c8-ts2mt" Dec 12 15:22:31 crc kubenswrapper[5123]: E1212 15:22:31.844523 5123 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-12 15:22:32.344506762 +0000 UTC m=+181.154459273 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-ts2mt" (UID: "40d826be-27f2-49a2-afa5-a6a9cc8f9bf6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 15:22:31 crc kubenswrapper[5123]: I1212 15:22:31.945447 5123 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 12 15:22:31 crc kubenswrapper[5123]: E1212 15:22:31.945690 5123 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-12 15:22:32.445647031 +0000 UTC m=+181.255599552 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 15:22:31 crc kubenswrapper[5123]: I1212 15:22:31.946322 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-ts2mt\" (UID: \"40d826be-27f2-49a2-afa5-a6a9cc8f9bf6\") " pod="openshift-image-registry/image-registry-66587d64c8-ts2mt" Dec 12 15:22:31 crc kubenswrapper[5123]: E1212 15:22:31.947094 5123 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-12 15:22:32.447079575 +0000 UTC m=+181.257032086 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-ts2mt" (UID: "40d826be-27f2-49a2-afa5-a6a9cc8f9bf6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 15:22:32 crc kubenswrapper[5123]: I1212 15:22:32.048563 5123 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 12 15:22:32 crc kubenswrapper[5123]: E1212 15:22:32.048761 5123 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-12 15:22:32.54873176 +0000 UTC m=+181.358684281 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 15:22:32 crc kubenswrapper[5123]: I1212 15:22:32.048863 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-ts2mt\" (UID: \"40d826be-27f2-49a2-afa5-a6a9cc8f9bf6\") " pod="openshift-image-registry/image-registry-66587d64c8-ts2mt" Dec 12 15:22:32 crc kubenswrapper[5123]: E1212 15:22:32.049406 5123 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-12 15:22:32.549385841 +0000 UTC m=+181.359338352 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-ts2mt" (UID: "40d826be-27f2-49a2-afa5-a6a9cc8f9bf6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 15:22:32 crc kubenswrapper[5123]: I1212 15:22:32.150486 5123 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 12 15:22:32 crc kubenswrapper[5123]: E1212 15:22:32.151067 5123 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-12 15:22:32.651039085 +0000 UTC m=+181.460991596 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 15:22:32 crc kubenswrapper[5123]: I1212 15:22:32.252873 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-ts2mt\" (UID: \"40d826be-27f2-49a2-afa5-a6a9cc8f9bf6\") " pod="openshift-image-registry/image-registry-66587d64c8-ts2mt" Dec 12 15:22:32 crc kubenswrapper[5123]: E1212 15:22:32.253417 5123 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-12 15:22:32.753387772 +0000 UTC m=+181.563340283 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-ts2mt" (UID: "40d826be-27f2-49a2-afa5-a6a9cc8f9bf6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 15:22:32 crc kubenswrapper[5123]: I1212 15:22:32.395142 5123 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 12 15:22:32 crc kubenswrapper[5123]: E1212 15:22:32.395536 5123 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-12 15:22:32.895496358 +0000 UTC m=+181.705448869 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 15:22:32 crc kubenswrapper[5123]: I1212 15:22:32.498015 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-ts2mt\" (UID: \"40d826be-27f2-49a2-afa5-a6a9cc8f9bf6\") " pod="openshift-image-registry/image-registry-66587d64c8-ts2mt" Dec 12 15:22:32 crc kubenswrapper[5123]: E1212 15:22:32.498535 5123 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-12 15:22:32.998516895 +0000 UTC m=+181.808469406 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-ts2mt" (UID: "40d826be-27f2-49a2-afa5-a6a9cc8f9bf6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 15:22:32 crc kubenswrapper[5123]: I1212 15:22:32.599865 5123 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 12 15:22:32 crc kubenswrapper[5123]: E1212 15:22:32.600447 5123 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-12 15:22:33.100418948 +0000 UTC m=+181.910371459 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 15:22:32 crc kubenswrapper[5123]: I1212 15:22:32.686354 5123 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-cnq9c container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 12 15:22:32 crc kubenswrapper[5123]: [-]has-synced failed: reason withheld Dec 12 15:22:32 crc kubenswrapper[5123]: [+]process-running ok Dec 12 15:22:32 crc kubenswrapper[5123]: healthz check failed Dec 12 15:22:32 crc kubenswrapper[5123]: I1212 15:22:32.686446 5123 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-cnq9c" podUID="dd669a9c-af5d-4084-bda4-81a455d4c281" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 12 15:22:32 crc kubenswrapper[5123]: I1212 15:22:32.702074 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-ts2mt\" (UID: \"40d826be-27f2-49a2-afa5-a6a9cc8f9bf6\") " pod="openshift-image-registry/image-registry-66587d64c8-ts2mt" Dec 12 15:22:32 crc kubenswrapper[5123]: E1212 15:22:32.702679 5123 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. 
No retries permitted until 2025-12-12 15:22:33.2026393 +0000 UTC m=+182.012591851 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-ts2mt" (UID: "40d826be-27f2-49a2-afa5-a6a9cc8f9bf6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 15:22:32 crc kubenswrapper[5123]: I1212 15:22:32.804245 5123 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 12 15:22:32 crc kubenswrapper[5123]: E1212 15:22:32.805010 5123 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-12 15:22:33.304945626 +0000 UTC m=+182.114898137 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 15:22:32 crc kubenswrapper[5123]: I1212 15:22:32.887257 5123 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-g9nc4" event={"ID":"68ef1469-eefc-4e7d-b8a5-bf0550b84694","Type":"ContainerStarted","Data":"2a5d832dcb972cdb03556f2907cf9c8f0cb6288ea9d935164df2025a11a0570e"} Dec 12 15:22:32 crc kubenswrapper[5123]: I1212 15:22:32.906431 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-ts2mt\" (UID: \"40d826be-27f2-49a2-afa5-a6a9cc8f9bf6\") " pod="openshift-image-registry/image-registry-66587d64c8-ts2mt" Dec 12 15:22:32 crc kubenswrapper[5123]: E1212 15:22:32.906949 5123 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-12 15:22:33.40692931 +0000 UTC m=+182.216881821 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-ts2mt" (UID: "40d826be-27f2-49a2-afa5-a6a9cc8f9bf6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 15:22:32 crc kubenswrapper[5123]: I1212 15:22:32.916539 5123 ???:1] "http: TLS handshake error from 192.168.126.11:53160: no serving certificate available for the kubelet" Dec 12 15:22:33 crc kubenswrapper[5123]: I1212 15:22:33.007449 5123 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 12 15:22:33 crc kubenswrapper[5123]: E1212 15:22:33.007795 5123 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-12 15:22:33.507741618 +0000 UTC m=+182.317694129 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 15:22:33 crc kubenswrapper[5123]: I1212 15:22:33.008941 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-ts2mt\" (UID: \"40d826be-27f2-49a2-afa5-a6a9cc8f9bf6\") " pod="openshift-image-registry/image-registry-66587d64c8-ts2mt" Dec 12 15:22:33 crc kubenswrapper[5123]: E1212 15:22:33.009437 5123 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-12 15:22:33.509416321 +0000 UTC m=+182.319368842 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-ts2mt" (UID: "40d826be-27f2-49a2-afa5-a6a9cc8f9bf6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 15:22:33 crc kubenswrapper[5123]: I1212 15:22:33.111049 5123 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 12 15:22:33 crc kubenswrapper[5123]: E1212 15:22:33.111748 5123 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-12 15:22:33.611717356 +0000 UTC m=+182.421669867 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 15:22:33 crc kubenswrapper[5123]: I1212 15:22:33.212844 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-ts2mt\" (UID: \"40d826be-27f2-49a2-afa5-a6a9cc8f9bf6\") " pod="openshift-image-registry/image-registry-66587d64c8-ts2mt" Dec 12 15:22:33 crc kubenswrapper[5123]: E1212 15:22:33.213590 5123 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-12 15:22:33.713571557 +0000 UTC m=+182.523524068 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-ts2mt" (UID: "40d826be-27f2-49a2-afa5-a6a9cc8f9bf6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 15:22:33 crc kubenswrapper[5123]: I1212 15:22:33.298379 5123 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-11-crc" Dec 12 15:22:33 crc kubenswrapper[5123]: I1212 15:22:33.320001 5123 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 12 15:22:33 crc kubenswrapper[5123]: E1212 15:22:33.320308 5123 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-12 15:22:33.82026355 +0000 UTC m=+182.630216061 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 15:22:33 crc kubenswrapper[5123]: I1212 15:22:33.320647 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-ts2mt\" (UID: \"40d826be-27f2-49a2-afa5-a6a9cc8f9bf6\") " pod="openshift-image-registry/image-registry-66587d64c8-ts2mt" Dec 12 15:22:33 crc kubenswrapper[5123]: E1212 15:22:33.321283 5123 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: 
nodeName:}" failed. No retries permitted until 2025-12-12 15:22:33.821269051 +0000 UTC m=+182.631221592 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-ts2mt" (UID: "40d826be-27f2-49a2-afa5-a6a9cc8f9bf6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 15:22:33 crc kubenswrapper[5123]: I1212 15:22:33.422349 5123 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/4ee797cc-c90f-4f08-ad20-2dcbdd16ab4f-kubelet-dir\") pod \"4ee797cc-c90f-4f08-ad20-2dcbdd16ab4f\" (UID: \"4ee797cc-c90f-4f08-ad20-2dcbdd16ab4f\") " Dec 12 15:22:33 crc kubenswrapper[5123]: I1212 15:22:33.422441 5123 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/4ee797cc-c90f-4f08-ad20-2dcbdd16ab4f-kube-api-access\") pod \"4ee797cc-c90f-4f08-ad20-2dcbdd16ab4f\" (UID: \"4ee797cc-c90f-4f08-ad20-2dcbdd16ab4f\") " Dec 12 15:22:33 crc kubenswrapper[5123]: I1212 15:22:33.422635 5123 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 12 15:22:33 crc kubenswrapper[5123]: E1212 15:22:33.423144 5123 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. 
No retries permitted until 2025-12-12 15:22:33.923121362 +0000 UTC m=+182.733073873 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 15:22:33 crc kubenswrapper[5123]: I1212 15:22:33.423131 5123 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4ee797cc-c90f-4f08-ad20-2dcbdd16ab4f-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "4ee797cc-c90f-4f08-ad20-2dcbdd16ab4f" (UID: "4ee797cc-c90f-4f08-ad20-2dcbdd16ab4f"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Dec 12 15:22:33 crc kubenswrapper[5123]: I1212 15:22:33.491887 5123 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4ee797cc-c90f-4f08-ad20-2dcbdd16ab4f-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "4ee797cc-c90f-4f08-ad20-2dcbdd16ab4f" (UID: "4ee797cc-c90f-4f08-ad20-2dcbdd16ab4f"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 12 15:22:33 crc kubenswrapper[5123]: I1212 15:22:33.570435 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-ts2mt\" (UID: \"40d826be-27f2-49a2-afa5-a6a9cc8f9bf6\") " pod="openshift-image-registry/image-registry-66587d64c8-ts2mt" Dec 12 15:22:33 crc kubenswrapper[5123]: I1212 15:22:33.570782 5123 reconciler_common.go:299] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/4ee797cc-c90f-4f08-ad20-2dcbdd16ab4f-kubelet-dir\") on node \"crc\" DevicePath \"\"" Dec 12 15:22:33 crc kubenswrapper[5123]: I1212 15:22:33.571863 5123 reconciler_common.go:299] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/4ee797cc-c90f-4f08-ad20-2dcbdd16ab4f-kube-api-access\") on node \"crc\" DevicePath \"\"" Dec 12 15:22:33 crc kubenswrapper[5123]: E1212 15:22:33.572400 5123 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-12 15:22:34.072378104 +0000 UTC m=+182.882330615 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-ts2mt" (UID: "40d826be-27f2-49a2-afa5-a6a9cc8f9bf6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 15:22:33 crc kubenswrapper[5123]: I1212 15:22:33.673307 5123 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 12 15:22:33 crc kubenswrapper[5123]: E1212 15:22:33.673649 5123 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-12 15:22:34.173588154 +0000 UTC m=+182.983540665 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 15:22:33 crc kubenswrapper[5123]: I1212 15:22:33.674018 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-ts2mt\" (UID: \"40d826be-27f2-49a2-afa5-a6a9cc8f9bf6\") " pod="openshift-image-registry/image-registry-66587d64c8-ts2mt" Dec 12 15:22:33 crc kubenswrapper[5123]: E1212 15:22:33.674740 5123 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-12 15:22:34.174706429 +0000 UTC m=+182.984658940 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-ts2mt" (UID: "40d826be-27f2-49a2-afa5-a6a9cc8f9bf6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 15:22:33 crc kubenswrapper[5123]: I1212 15:22:33.689515 5123 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-cnq9c container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 12 15:22:33 crc kubenswrapper[5123]: [-]has-synced failed: reason withheld Dec 12 15:22:33 crc kubenswrapper[5123]: [+]process-running ok Dec 12 15:22:33 crc kubenswrapper[5123]: healthz check failed Dec 12 15:22:33 crc kubenswrapper[5123]: I1212 15:22:33.689703 5123 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-cnq9c" podUID="dd669a9c-af5d-4084-bda4-81a455d4c281" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 12 15:22:33 crc kubenswrapper[5123]: I1212 15:22:33.775727 5123 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 12 15:22:33 crc kubenswrapper[5123]: E1212 15:22:33.776164 5123 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. 
No retries permitted until 2025-12-12 15:22:34.276137757 +0000 UTC m=+183.086090268 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 15:22:33 crc kubenswrapper[5123]: I1212 15:22:33.878123 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-ts2mt\" (UID: \"40d826be-27f2-49a2-afa5-a6a9cc8f9bf6\") " pod="openshift-image-registry/image-registry-66587d64c8-ts2mt" Dec 12 15:22:33 crc kubenswrapper[5123]: E1212 15:22:33.879695 5123 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-12 15:22:34.37966097 +0000 UTC m=+183.189613491 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-ts2mt" (UID: "40d826be-27f2-49a2-afa5-a6a9cc8f9bf6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 15:22:33 crc kubenswrapper[5123]: I1212 15:22:33.899293 5123 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-11-crc" event={"ID":"4ee797cc-c90f-4f08-ad20-2dcbdd16ab4f","Type":"ContainerDied","Data":"40e73eb80cb2006d443c6ae71201796bf001ac67610d543fa39fa6cd8d434517"} Dec 12 15:22:33 crc kubenswrapper[5123]: I1212 15:22:33.899346 5123 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-11-crc" Dec 12 15:22:33 crc kubenswrapper[5123]: I1212 15:22:33.899372 5123 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="40e73eb80cb2006d443c6ae71201796bf001ac67610d543fa39fa6cd8d434517" Dec 12 15:22:33 crc kubenswrapper[5123]: I1212 15:22:33.981650 5123 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 12 15:22:33 crc kubenswrapper[5123]: E1212 15:22:33.981958 5123 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-12 15:22:34.481935204 +0000 UTC m=+183.291887715 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 15:22:34 crc kubenswrapper[5123]: I1212 15:22:34.083564 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-ts2mt\" (UID: \"40d826be-27f2-49a2-afa5-a6a9cc8f9bf6\") " pod="openshift-image-registry/image-registry-66587d64c8-ts2mt" Dec 12 15:22:34 crc kubenswrapper[5123]: E1212 15:22:34.084183 5123 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-12 15:22:34.584149106 +0000 UTC m=+183.394101617 (durationBeforeRetry 500ms). 
[... repeated retry cycles elided: between 15:22:34.084 and 15:22:34.594 the kubelet re-attempted MountVolume.MountDevice for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (pod "image-registry-66587d64c8-ts2mt") and UnmountVolume.TearDown for the same volume (pod UID "9e9b5059-1b3e-4067-a63d-2952cbe863af") roughly every 100ms, each attempt failing with "driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers" and being requeued with a 500ms backoff ...]
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-ts2mt" (UID: "40d826be-27f2-49a2-afa5-a6a9cc8f9bf6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 15:22:34 crc kubenswrapper[5123]: I1212 15:22:34.635774 5123 plugin_watcher.go:194] "Adding socket path or updating timestamp to desired state cache" path="/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock" Dec 12 15:22:34 crc kubenswrapper[5123]: I1212 15:22:34.687028 5123 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-cnq9c container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 12 15:22:34 crc kubenswrapper[5123]: [-]has-synced failed: reason withheld Dec 12 15:22:34 crc kubenswrapper[5123]: [+]process-running ok Dec 12 15:22:34 crc kubenswrapper[5123]: healthz check failed Dec 12 15:22:34 crc kubenswrapper[5123]: I1212 15:22:34.687127 5123 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-cnq9c" podUID="dd669a9c-af5d-4084-bda4-81a455d4c281" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 12 15:22:34 crc kubenswrapper[5123]: I1212 15:22:34.694873 5123 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 12 15:22:34 crc kubenswrapper[5123]: E1212 15:22:34.695084 5123 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-12 15:22:35.195044755 +0000 UTC m=+184.004997266 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 15:22:34 crc kubenswrapper[5123]: I1212 15:22:34.695583 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-ts2mt\" (UID: \"40d826be-27f2-49a2-afa5-a6a9cc8f9bf6\") " pod="openshift-image-registry/image-registry-66587d64c8-ts2mt" Dec 12 15:22:34 crc kubenswrapper[5123]: E1212 15:22:34.696033 5123 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-12 15:22:35.196021115 +0000 UTC m=+184.005973626 (durationBeforeRetry 500ms). 
[... further identical retry cycles elided: between 15:22:34.930 and 15:22:35.544 the same MountVolume.MountDevice and UnmountVolume.TearDown failures for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" repeated with 500ms backoff, until the kubevirt.io.hostpath-provisioner CSI driver finished registering ...]
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-ts2mt" (UID: "40d826be-27f2-49a2-afa5-a6a9cc8f9bf6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 15:22:35 crc kubenswrapper[5123]: I1212 15:22:35.626030 5123 reconciler.go:161] "OperationExecutor.RegisterPlugin started" plugin={"SocketPath":"/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock","Timestamp":"2025-12-12T15:22:34.635814923Z","UUID":"fbaf9ab2-6a1a-4389-872a-c69036eb7260","Handler":null,"Name":"","Endpoint":""} Dec 12 15:22:35 crc kubenswrapper[5123]: I1212 15:22:35.635330 5123 csi_plugin.go:106] kubernetes.io/csi: Trying to validate a new CSI Driver with name: kubevirt.io.hostpath-provisioner endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock versions: 1.0.0 Dec 12 15:22:35 crc kubenswrapper[5123]: I1212 15:22:35.635387 5123 csi_plugin.go:119] kubernetes.io/csi: Register new plugin with name: kubevirt.io.hostpath-provisioner at endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock Dec 12 15:22:35 crc kubenswrapper[5123]: I1212 15:22:35.649067 5123 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 12 15:22:35 crc kubenswrapper[5123]: I1212 15:22:35.682678 5123 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (OuterVolumeSpecName: 
"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af"). InnerVolumeSpecName "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2". PluginName "kubernetes.io/csi", VolumeGIDValue "" Dec 12 15:22:35 crc kubenswrapper[5123]: I1212 15:22:35.700371 5123 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-cnq9c container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 12 15:22:35 crc kubenswrapper[5123]: [-]has-synced failed: reason withheld Dec 12 15:22:35 crc kubenswrapper[5123]: [+]process-running ok Dec 12 15:22:35 crc kubenswrapper[5123]: healthz check failed Dec 12 15:22:35 crc kubenswrapper[5123]: I1212 15:22:35.700461 5123 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-cnq9c" podUID="dd669a9c-af5d-4084-bda4-81a455d4c281" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 12 15:22:35 crc kubenswrapper[5123]: I1212 15:22:35.705439 5123 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9e9b5059-1b3e-4067-a63d-2952cbe863af" path="/var/lib/kubelet/pods/9e9b5059-1b3e-4067-a63d-2952cbe863af/volumes" Dec 12 15:22:35 crc kubenswrapper[5123]: I1212 15:22:35.751536 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-ts2mt\" (UID: \"40d826be-27f2-49a2-afa5-a6a9cc8f9bf6\") " pod="openshift-image-registry/image-registry-66587d64c8-ts2mt" Dec 12 15:22:35 crc kubenswrapper[5123]: I1212 15:22:35.834043 5123 csi_attacher.go:373] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Dec 12 15:22:35 crc kubenswrapper[5123]: I1212 15:22:35.834114 5123 operation_generator.go:557] "MountVolume.MountDevice succeeded for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-ts2mt\" (UID: \"40d826be-27f2-49a2-afa5-a6a9cc8f9bf6\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/b1264ac67579ad07e7e9003054d44fe40dd55285a4b2f7dc74e48be1aee0868a/globalmount\"" pod="openshift-image-registry/image-registry-66587d64c8-ts2mt" Dec 12 15:22:35 crc kubenswrapper[5123]: I1212 15:22:35.973435 5123 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-ts2mt\" (UID: \"40d826be-27f2-49a2-afa5-a6a9cc8f9bf6\") " pod="openshift-image-registry/image-registry-66587d64c8-ts2mt" Dec 12 15:22:36 crc kubenswrapper[5123]: I1212 15:22:36.246511 5123 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"registry-dockercfg-6w67b\"" Dec 12 15:22:36 crc kubenswrapper[5123]: I1212 15:22:36.254914 5123 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-66587d64c8-ts2mt" Dec 12 15:22:36 crc kubenswrapper[5123]: I1212 15:22:36.463650 5123 patch_prober.go:28] interesting pod/downloads-747b44746d-xhd9t container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.21:8080/\": dial tcp 10.217.0.21:8080: connect: connection refused" start-of-body= Dec 12 15:22:36 crc kubenswrapper[5123]: I1212 15:22:36.463739 5123 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-747b44746d-xhd9t" podUID="09107a60-87da-4e17-9cc0-6dce06396ab6" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.21:8080/\": dial tcp 10.217.0.21:8080: connect: connection refused" Dec 12 15:22:36 crc kubenswrapper[5123]: I1212 15:22:36.686614 5123 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-cnq9c container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 12 15:22:36 crc kubenswrapper[5123]: [-]has-synced failed: reason withheld Dec 12 15:22:36 crc kubenswrapper[5123]: [+]process-running ok Dec 12 15:22:36 crc kubenswrapper[5123]: healthz check failed Dec 12 15:22:36 crc kubenswrapper[5123]: I1212 15:22:36.686712 5123 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-cnq9c" podUID="dd669a9c-af5d-4084-bda4-81a455d4c281" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 12 15:22:37 crc kubenswrapper[5123]: I1212 15:22:37.688112 5123 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-cnq9c container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 12 15:22:37 crc kubenswrapper[5123]: [-]has-synced failed: reason 
withheld Dec 12 15:22:37 crc kubenswrapper[5123]: [+]process-running ok Dec 12 15:22:37 crc kubenswrapper[5123]: healthz check failed Dec 12 15:22:37 crc kubenswrapper[5123]: I1212 15:22:37.698121 5123 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-cnq9c" podUID="dd669a9c-af5d-4084-bda4-81a455d4c281" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 12 15:22:38 crc kubenswrapper[5123]: I1212 15:22:38.001206 5123 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5d49859f95-pcm7k" event={"ID":"01a8c257-f895-4044-aec0-ea9cb012126e","Type":"ContainerStarted","Data":"387d633388ff76cbb1e462b982d1faacba77ac90a9766935db55a4dbd9c54c86"} Dec 12 15:22:38 crc kubenswrapper[5123]: I1212 15:22:38.473735 5123 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-network-diagnostics/network-check-target-fhkjl" Dec 12 15:22:38 crc kubenswrapper[5123]: I1212 15:22:38.764096 5123 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-cnq9c container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 12 15:22:38 crc kubenswrapper[5123]: [-]has-synced failed: reason withheld Dec 12 15:22:38 crc kubenswrapper[5123]: [+]process-running ok Dec 12 15:22:38 crc kubenswrapper[5123]: healthz check failed Dec 12 15:22:38 crc kubenswrapper[5123]: I1212 15:22:38.764702 5123 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-cnq9c" podUID="dd669a9c-af5d-4084-bda4-81a455d4c281" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 12 15:22:39 crc kubenswrapper[5123]: I1212 15:22:39.243262 5123 patch_prober.go:28] interesting pod/downloads-747b44746d-xhd9t container/download-server 
namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.21:8080/\": dial tcp 10.217.0.21:8080: connect: connection refused" start-of-body= Dec 12 15:22:39 crc kubenswrapper[5123]: I1212 15:22:39.243621 5123 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-747b44746d-xhd9t" podUID="09107a60-87da-4e17-9cc0-6dce06396ab6" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.21:8080/\": dial tcp 10.217.0.21:8080: connect: connection refused" Dec 12 15:22:39 crc kubenswrapper[5123]: I1212 15:22:39.243724 5123 kubelet.go:2658] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-console/downloads-747b44746d-xhd9t" Dec 12 15:22:39 crc kubenswrapper[5123]: I1212 15:22:39.245712 5123 patch_prober.go:28] interesting pod/downloads-747b44746d-xhd9t container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.21:8080/\": dial tcp 10.217.0.21:8080: connect: connection refused" start-of-body= Dec 12 15:22:39 crc kubenswrapper[5123]: I1212 15:22:39.245867 5123 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-747b44746d-xhd9t" podUID="09107a60-87da-4e17-9cc0-6dce06396ab6" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.21:8080/\": dial tcp 10.217.0.21:8080: connect: connection refused" Dec 12 15:22:39 crc kubenswrapper[5123]: I1212 15:22:39.248357 5123 kuberuntime_manager.go:1107] "Message for Container of pod" containerName="download-server" containerStatusID={"Type":"cri-o","ID":"8c5617ac20c35a29d82dc82e0862bfeca794ccb2aa8a303b5c55da1094a7bf3b"} pod="openshift-console/downloads-747b44746d-xhd9t" containerMessage="Container download-server failed liveness probe, will be restarted" Dec 12 15:22:39 crc kubenswrapper[5123]: I1212 15:22:39.248501 5123 kuberuntime_container.go:858] "Killing container with a grace period" 
pod="openshift-console/downloads-747b44746d-xhd9t" podUID="09107a60-87da-4e17-9cc0-6dce06396ab6" containerName="download-server" containerID="cri-o://8c5617ac20c35a29d82dc82e0862bfeca794ccb2aa8a303b5c55da1094a7bf3b" gracePeriod=2 Dec 12 15:22:39 crc kubenswrapper[5123]: I1212 15:22:39.616188 5123 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-9qnbt" Dec 12 15:22:39 crc kubenswrapper[5123]: I1212 15:22:39.693638 5123 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-cnq9c container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 12 15:22:39 crc kubenswrapper[5123]: [-]has-synced failed: reason withheld Dec 12 15:22:39 crc kubenswrapper[5123]: [+]process-running ok Dec 12 15:22:39 crc kubenswrapper[5123]: healthz check failed Dec 12 15:22:39 crc kubenswrapper[5123]: I1212 15:22:39.693813 5123 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-cnq9c" podUID="dd669a9c-af5d-4084-bda4-81a455d4c281" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 12 15:22:39 crc kubenswrapper[5123]: I1212 15:22:39.809563 5123 patch_prober.go:28] interesting pod/route-controller-manager-dd69b4f99-vs9nj container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.54:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Dec 12 15:22:39 crc kubenswrapper[5123]: I1212 15:22:39.809730 5123 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-dd69b4f99-vs9nj" podUID="12c7a0a2-e5fd-411f-806a-d230792a9422" containerName="route-controller-manager" 
probeResult="failure" output="Get \"https://10.217.0.54:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Dec 12 15:22:40 crc kubenswrapper[5123]: I1212 15:22:40.027496 5123 generic.go:358] "Generic (PLEG): container finished" podID="09107a60-87da-4e17-9cc0-6dce06396ab6" containerID="8c5617ac20c35a29d82dc82e0862bfeca794ccb2aa8a303b5c55da1094a7bf3b" exitCode=0 Dec 12 15:22:40 crc kubenswrapper[5123]: I1212 15:22:40.027739 5123 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-747b44746d-xhd9t" event={"ID":"09107a60-87da-4e17-9cc0-6dce06396ab6","Type":"ContainerDied","Data":"8c5617ac20c35a29d82dc82e0862bfeca794ccb2aa8a303b5c55da1094a7bf3b"} Dec 12 15:22:40 crc kubenswrapper[5123]: I1212 15:22:40.027802 5123 scope.go:117] "RemoveContainer" containerID="9f50991963d4d04bcf2e4c9451b3fae1c9ded45ced042c68035628b937492228" Dec 12 15:22:40 crc kubenswrapper[5123]: I1212 15:22:40.236345 5123 patch_prober.go:28] interesting pod/console-64d44f6ddf-96rdx container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.8:8443/health\": dial tcp 10.217.0.8:8443: connect: connection refused" start-of-body= Dec 12 15:22:40 crc kubenswrapper[5123]: I1212 15:22:40.236447 5123 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-console/console-64d44f6ddf-96rdx" podUID="7ff811e4-3864-456b-8e00-b9e2d1c49ed8" containerName="console" probeResult="failure" output="Get \"https://10.217.0.8:8443/health\": dial tcp 10.217.0.8:8443: connect: connection refused" Dec 12 15:22:40 crc kubenswrapper[5123]: I1212 15:22:40.686763 5123 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-cnq9c container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 12 15:22:40 crc kubenswrapper[5123]: [-]has-synced failed: reason 
withheld Dec 12 15:22:40 crc kubenswrapper[5123]: [+]process-running ok Dec 12 15:22:40 crc kubenswrapper[5123]: healthz check failed Dec 12 15:22:40 crc kubenswrapper[5123]: I1212 15:22:40.686911 5123 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-cnq9c" podUID="dd669a9c-af5d-4084-bda4-81a455d4c281" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 12 15:22:41 crc kubenswrapper[5123]: I1212 15:22:41.688736 5123 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-cnq9c container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 12 15:22:41 crc kubenswrapper[5123]: [-]has-synced failed: reason withheld Dec 12 15:22:41 crc kubenswrapper[5123]: [+]process-running ok Dec 12 15:22:41 crc kubenswrapper[5123]: healthz check failed Dec 12 15:22:41 crc kubenswrapper[5123]: I1212 15:22:41.689412 5123 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-cnq9c" podUID="dd669a9c-af5d-4084-bda4-81a455d4c281" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 12 15:22:42 crc kubenswrapper[5123]: I1212 15:22:42.685910 5123 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-cnq9c container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 12 15:22:42 crc kubenswrapper[5123]: [-]has-synced failed: reason withheld Dec 12 15:22:42 crc kubenswrapper[5123]: [+]process-running ok Dec 12 15:22:42 crc kubenswrapper[5123]: healthz check failed Dec 12 15:22:42 crc kubenswrapper[5123]: I1212 15:22:42.686089 5123 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-cnq9c" 
podUID="dd669a9c-af5d-4084-bda4-81a455d4c281" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 12 15:22:43 crc kubenswrapper[5123]: I1212 15:22:43.690968 5123 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-cnq9c container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 12 15:22:43 crc kubenswrapper[5123]: [-]has-synced failed: reason withheld Dec 12 15:22:43 crc kubenswrapper[5123]: [+]process-running ok Dec 12 15:22:43 crc kubenswrapper[5123]: healthz check failed Dec 12 15:22:43 crc kubenswrapper[5123]: I1212 15:22:43.691153 5123 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-cnq9c" podUID="dd669a9c-af5d-4084-bda4-81a455d4c281" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 12 15:22:44 crc kubenswrapper[5123]: I1212 15:22:44.690242 5123 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-cnq9c container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 12 15:22:44 crc kubenswrapper[5123]: [-]has-synced failed: reason withheld Dec 12 15:22:44 crc kubenswrapper[5123]: [+]process-running ok Dec 12 15:22:44 crc kubenswrapper[5123]: healthz check failed Dec 12 15:22:44 crc kubenswrapper[5123]: I1212 15:22:44.690661 5123 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-cnq9c" podUID="dd669a9c-af5d-4084-bda4-81a455d4c281" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 12 15:22:45 crc kubenswrapper[5123]: I1212 15:22:45.687981 5123 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-cnq9c container/router namespace/openshift-ingress: Startup probe 
status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 12 15:22:45 crc kubenswrapper[5123]: [-]has-synced failed: reason withheld Dec 12 15:22:45 crc kubenswrapper[5123]: [+]process-running ok Dec 12 15:22:45 crc kubenswrapper[5123]: healthz check failed Dec 12 15:22:45 crc kubenswrapper[5123]: I1212 15:22:45.688088 5123 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-cnq9c" podUID="dd669a9c-af5d-4084-bda4-81a455d4c281" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 12 15:22:46 crc kubenswrapper[5123]: I1212 15:22:46.685026 5123 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-cnq9c container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 12 15:22:46 crc kubenswrapper[5123]: [-]has-synced failed: reason withheld Dec 12 15:22:46 crc kubenswrapper[5123]: [+]process-running ok Dec 12 15:22:46 crc kubenswrapper[5123]: healthz check failed Dec 12 15:22:46 crc kubenswrapper[5123]: I1212 15:22:46.685103 5123 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-cnq9c" podUID="dd669a9c-af5d-4084-bda4-81a455d4c281" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 12 15:22:47 crc kubenswrapper[5123]: I1212 15:22:47.686133 5123 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-cnq9c container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 12 15:22:47 crc kubenswrapper[5123]: [-]has-synced failed: reason withheld Dec 12 15:22:47 crc kubenswrapper[5123]: [+]process-running ok Dec 12 15:22:47 crc kubenswrapper[5123]: healthz check failed Dec 12 15:22:47 
crc kubenswrapper[5123]: I1212 15:22:47.686258 5123 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-cnq9c" podUID="dd669a9c-af5d-4084-bda4-81a455d4c281" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 12 15:22:49 crc kubenswrapper[5123]: I1212 15:22:49.119739 5123 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-cnq9c container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 12 15:22:49 crc kubenswrapper[5123]: [-]has-synced failed: reason withheld Dec 12 15:22:49 crc kubenswrapper[5123]: [+]process-running ok Dec 12 15:22:49 crc kubenswrapper[5123]: healthz check failed Dec 12 15:22:49 crc kubenswrapper[5123]: I1212 15:22:49.121267 5123 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-cnq9c" podUID="dd669a9c-af5d-4084-bda4-81a455d4c281" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 12 15:22:49 crc kubenswrapper[5123]: I1212 15:22:49.245757 5123 patch_prober.go:28] interesting pod/downloads-747b44746d-xhd9t container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.21:8080/\": dial tcp 10.217.0.21:8080: connect: connection refused" start-of-body= Dec 12 15:22:49 crc kubenswrapper[5123]: I1212 15:22:49.245870 5123 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-747b44746d-xhd9t" podUID="09107a60-87da-4e17-9cc0-6dce06396ab6" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.21:8080/\": dial tcp 10.217.0.21:8080: connect: connection refused" Dec 12 15:22:49 crc kubenswrapper[5123]: I1212 15:22:49.616637 5123 patch_prober.go:28] interesting pod/route-controller-manager-dd69b4f99-vs9nj 
container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.54:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Dec 12 15:22:49 crc kubenswrapper[5123]: I1212 15:22:49.616809 5123 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-dd69b4f99-vs9nj" podUID="12c7a0a2-e5fd-411f-806a-d230792a9422" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.54:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Dec 12 15:22:49 crc kubenswrapper[5123]: I1212 15:22:49.685805 5123 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-cnq9c container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 12 15:22:49 crc kubenswrapper[5123]: [-]has-synced failed: reason withheld Dec 12 15:22:49 crc kubenswrapper[5123]: [+]process-running ok Dec 12 15:22:49 crc kubenswrapper[5123]: healthz check failed Dec 12 15:22:49 crc kubenswrapper[5123]: I1212 15:22:49.685892 5123 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-cnq9c" podUID="dd669a9c-af5d-4084-bda4-81a455d4c281" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 12 15:22:50 crc kubenswrapper[5123]: I1212 15:22:50.253605 5123 patch_prober.go:28] interesting pod/console-64d44f6ddf-96rdx container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.8:8443/health\": dial tcp 10.217.0.8:8443: connect: connection refused" start-of-body= Dec 12 15:22:50 crc kubenswrapper[5123]: I1212 15:22:50.253713 5123 prober.go:120] "Probe 
failed" probeType="Startup" pod="openshift-console/console-64d44f6ddf-96rdx" podUID="7ff811e4-3864-456b-8e00-b9e2d1c49ed8" containerName="console" probeResult="failure" output="Get \"https://10.217.0.8:8443/health\": dial tcp 10.217.0.8:8443: connect: connection refused" Dec 12 15:22:50 crc kubenswrapper[5123]: I1212 15:22:50.689656 5123 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-cnq9c container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 12 15:22:50 crc kubenswrapper[5123]: [-]has-synced failed: reason withheld Dec 12 15:22:50 crc kubenswrapper[5123]: [+]process-running ok Dec 12 15:22:50 crc kubenswrapper[5123]: healthz check failed Dec 12 15:22:50 crc kubenswrapper[5123]: I1212 15:22:50.689759 5123 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-cnq9c" podUID="dd669a9c-af5d-4084-bda4-81a455d4c281" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 12 15:22:51 crc kubenswrapper[5123]: I1212 15:22:51.884587 5123 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-cnq9c container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 12 15:22:51 crc kubenswrapper[5123]: [+]has-synced ok Dec 12 15:22:51 crc kubenswrapper[5123]: [+]process-running ok Dec 12 15:22:51 crc kubenswrapper[5123]: healthz check failed Dec 12 15:22:51 crc kubenswrapper[5123]: I1212 15:22:51.884912 5123 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-cnq9c" podUID="dd669a9c-af5d-4084-bda4-81a455d4c281" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 12 15:22:52 crc kubenswrapper[5123]: I1212 15:22:52.687162 5123 kubelet.go:2658] 
"SyncLoop (probe)" probe="startup" status="started" pod="openshift-ingress/router-default-68cf44c8b8-cnq9c" Dec 12 15:22:52 crc kubenswrapper[5123]: I1212 15:22:52.690105 5123 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ingress/router-default-68cf44c8b8-cnq9c" Dec 12 15:22:53 crc kubenswrapper[5123]: I1212 15:22:53.379841 5123 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-12-crc"] Dec 12 15:22:53 crc kubenswrapper[5123]: I1212 15:22:53.381047 5123 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="4ee797cc-c90f-4f08-ad20-2dcbdd16ab4f" containerName="pruner" Dec 12 15:22:53 crc kubenswrapper[5123]: I1212 15:22:53.381070 5123 state_mem.go:107] "Deleted CPUSet assignment" podUID="4ee797cc-c90f-4f08-ad20-2dcbdd16ab4f" containerName="pruner" Dec 12 15:22:53 crc kubenswrapper[5123]: I1212 15:22:53.381206 5123 memory_manager.go:356] "RemoveStaleState removing state" podUID="4ee797cc-c90f-4f08-ad20-2dcbdd16ab4f" containerName="pruner" Dec 12 15:22:54 crc kubenswrapper[5123]: I1212 15:22:54.913948 5123 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-12-crc"] Dec 12 15:22:54 crc kubenswrapper[5123]: I1212 15:22:54.914540 5123 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-12-crc" Dec 12 15:22:54 crc kubenswrapper[5123]: I1212 15:22:54.919495 5123 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-apiserver\"/\"installer-sa-dockercfg-bqqnb\"" Dec 12 15:22:54 crc kubenswrapper[5123]: I1212 15:22:54.919814 5123 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-apiserver\"/\"kube-root-ca.crt\"" Dec 12 15:22:54 crc kubenswrapper[5123]: I1212 15:22:54.932445 5123 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0fe093c7-c5cf-4e61-be4d-5c44545546d7-kube-api-access\") pod \"revision-pruner-12-crc\" (UID: \"0fe093c7-c5cf-4e61-be4d-5c44545546d7\") " pod="openshift-kube-apiserver/revision-pruner-12-crc" Dec 12 15:22:54 crc kubenswrapper[5123]: I1212 15:22:54.932558 5123 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/0fe093c7-c5cf-4e61-be4d-5c44545546d7-kubelet-dir\") pod \"revision-pruner-12-crc\" (UID: \"0fe093c7-c5cf-4e61-be4d-5c44545546d7\") " pod="openshift-kube-apiserver/revision-pruner-12-crc" Dec 12 15:22:55 crc kubenswrapper[5123]: I1212 15:22:55.033849 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0fe093c7-c5cf-4e61-be4d-5c44545546d7-kube-api-access\") pod \"revision-pruner-12-crc\" (UID: \"0fe093c7-c5cf-4e61-be4d-5c44545546d7\") " pod="openshift-kube-apiserver/revision-pruner-12-crc" Dec 12 15:22:55 crc kubenswrapper[5123]: I1212 15:22:55.033917 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/0fe093c7-c5cf-4e61-be4d-5c44545546d7-kubelet-dir\") pod \"revision-pruner-12-crc\" (UID: 
\"0fe093c7-c5cf-4e61-be4d-5c44545546d7\") " pod="openshift-kube-apiserver/revision-pruner-12-crc" Dec 12 15:22:55 crc kubenswrapper[5123]: I1212 15:22:55.034012 5123 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/0fe093c7-c5cf-4e61-be4d-5c44545546d7-kubelet-dir\") pod \"revision-pruner-12-crc\" (UID: \"0fe093c7-c5cf-4e61-be4d-5c44545546d7\") " pod="openshift-kube-apiserver/revision-pruner-12-crc" Dec 12 15:22:55 crc kubenswrapper[5123]: I1212 15:22:55.058370 5123 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0fe093c7-c5cf-4e61-be4d-5c44545546d7-kube-api-access\") pod \"revision-pruner-12-crc\" (UID: \"0fe093c7-c5cf-4e61-be4d-5c44545546d7\") " pod="openshift-kube-apiserver/revision-pruner-12-crc" Dec 12 15:22:55 crc kubenswrapper[5123]: I1212 15:22:55.245017 5123 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-12-crc" Dec 12 15:22:59 crc kubenswrapper[5123]: I1212 15:22:59.245689 5123 patch_prober.go:28] interesting pod/downloads-747b44746d-xhd9t container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.21:8080/\": dial tcp 10.217.0.21:8080: connect: connection refused" start-of-body= Dec 12 15:22:59 crc kubenswrapper[5123]: I1212 15:22:59.246691 5123 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-747b44746d-xhd9t" podUID="09107a60-87da-4e17-9cc0-6dce06396ab6" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.21:8080/\": dial tcp 10.217.0.21:8080: connect: connection refused" Dec 12 15:22:59 crc kubenswrapper[5123]: I1212 15:22:59.584927 5123 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/installer-12-crc"] Dec 12 15:22:59 crc kubenswrapper[5123]: I1212 15:22:59.621923 5123 
patch_prober.go:28] interesting pod/route-controller-manager-dd69b4f99-vs9nj container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.54:8443/healthz\": context deadline exceeded" start-of-body= Dec 12 15:22:59 crc kubenswrapper[5123]: I1212 15:22:59.622021 5123 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-dd69b4f99-vs9nj" podUID="12c7a0a2-e5fd-411f-806a-d230792a9422" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.54:8443/healthz\": context deadline exceeded" Dec 12 15:22:59 crc kubenswrapper[5123]: I1212 15:22:59.671304 5123 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-12-crc" Dec 12 15:22:59 crc kubenswrapper[5123]: I1212 15:22:59.693441 5123 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-12-crc"] Dec 12 15:22:59 crc kubenswrapper[5123]: I1212 15:22:59.737923 5123 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/688fe19e-6c1b-42c8-8245-da6b56af433f-kubelet-dir\") pod \"installer-12-crc\" (UID: \"688fe19e-6c1b-42c8-8245-da6b56af433f\") " pod="openshift-kube-apiserver/installer-12-crc" Dec 12 15:22:59 crc kubenswrapper[5123]: I1212 15:22:59.738235 5123 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/688fe19e-6c1b-42c8-8245-da6b56af433f-var-lock\") pod \"installer-12-crc\" (UID: \"688fe19e-6c1b-42c8-8245-da6b56af433f\") " pod="openshift-kube-apiserver/installer-12-crc" Dec 12 15:22:59 crc kubenswrapper[5123]: I1212 15:22:59.738324 5123 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: 
\"kubernetes.io/projected/688fe19e-6c1b-42c8-8245-da6b56af433f-kube-api-access\") pod \"installer-12-crc\" (UID: \"688fe19e-6c1b-42c8-8245-da6b56af433f\") " pod="openshift-kube-apiserver/installer-12-crc" Dec 12 15:22:59 crc kubenswrapper[5123]: I1212 15:22:59.840481 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/688fe19e-6c1b-42c8-8245-da6b56af433f-kubelet-dir\") pod \"installer-12-crc\" (UID: \"688fe19e-6c1b-42c8-8245-da6b56af433f\") " pod="openshift-kube-apiserver/installer-12-crc" Dec 12 15:22:59 crc kubenswrapper[5123]: I1212 15:22:59.840657 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/688fe19e-6c1b-42c8-8245-da6b56af433f-var-lock\") pod \"installer-12-crc\" (UID: \"688fe19e-6c1b-42c8-8245-da6b56af433f\") " pod="openshift-kube-apiserver/installer-12-crc" Dec 12 15:22:59 crc kubenswrapper[5123]: I1212 15:22:59.840703 5123 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/688fe19e-6c1b-42c8-8245-da6b56af433f-kubelet-dir\") pod \"installer-12-crc\" (UID: \"688fe19e-6c1b-42c8-8245-da6b56af433f\") " pod="openshift-kube-apiserver/installer-12-crc" Dec 12 15:22:59 crc kubenswrapper[5123]: I1212 15:22:59.840830 5123 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/688fe19e-6c1b-42c8-8245-da6b56af433f-var-lock\") pod \"installer-12-crc\" (UID: \"688fe19e-6c1b-42c8-8245-da6b56af433f\") " pod="openshift-kube-apiserver/installer-12-crc" Dec 12 15:22:59 crc kubenswrapper[5123]: I1212 15:22:59.840986 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/688fe19e-6c1b-42c8-8245-da6b56af433f-kube-api-access\") pod \"installer-12-crc\" (UID: 
\"688fe19e-6c1b-42c8-8245-da6b56af433f\") " pod="openshift-kube-apiserver/installer-12-crc" Dec 12 15:22:59 crc kubenswrapper[5123]: I1212 15:22:59.934908 5123 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/688fe19e-6c1b-42c8-8245-da6b56af433f-kube-api-access\") pod \"installer-12-crc\" (UID: \"688fe19e-6c1b-42c8-8245-da6b56af433f\") " pod="openshift-kube-apiserver/installer-12-crc" Dec 12 15:23:00 crc kubenswrapper[5123]: I1212 15:23:00.008234 5123 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-12-crc" Dec 12 15:23:00 crc kubenswrapper[5123]: I1212 15:23:00.243328 5123 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-64d44f6ddf-96rdx" Dec 12 15:23:00 crc kubenswrapper[5123]: I1212 15:23:00.253087 5123 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-64d44f6ddf-96rdx" Dec 12 15:23:03 crc kubenswrapper[5123]: I1212 15:23:03.594973 5123 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-dd69b4f99-vs9nj" Dec 12 15:23:03 crc kubenswrapper[5123]: I1212 15:23:03.623882 5123 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/12c7a0a2-e5fd-411f-806a-d230792a9422-client-ca\") pod \"12c7a0a2-e5fd-411f-806a-d230792a9422\" (UID: \"12c7a0a2-e5fd-411f-806a-d230792a9422\") " Dec 12 15:23:03 crc kubenswrapper[5123]: I1212 15:23:03.624000 5123 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/12c7a0a2-e5fd-411f-806a-d230792a9422-tmp\") pod \"12c7a0a2-e5fd-411f-806a-d230792a9422\" (UID: \"12c7a0a2-e5fd-411f-806a-d230792a9422\") " Dec 12 15:23:03 crc kubenswrapper[5123]: I1212 15:23:03.624263 5123 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/12c7a0a2-e5fd-411f-806a-d230792a9422-config\") pod \"12c7a0a2-e5fd-411f-806a-d230792a9422\" (UID: \"12c7a0a2-e5fd-411f-806a-d230792a9422\") " Dec 12 15:23:03 crc kubenswrapper[5123]: I1212 15:23:03.624339 5123 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gt9d9\" (UniqueName: \"kubernetes.io/projected/12c7a0a2-e5fd-411f-806a-d230792a9422-kube-api-access-gt9d9\") pod \"12c7a0a2-e5fd-411f-806a-d230792a9422\" (UID: \"12c7a0a2-e5fd-411f-806a-d230792a9422\") " Dec 12 15:23:03 crc kubenswrapper[5123]: I1212 15:23:03.624371 5123 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/12c7a0a2-e5fd-411f-806a-d230792a9422-serving-cert\") pod \"12c7a0a2-e5fd-411f-806a-d230792a9422\" (UID: \"12c7a0a2-e5fd-411f-806a-d230792a9422\") " Dec 12 15:23:03 crc kubenswrapper[5123]: I1212 15:23:03.624850 5123 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/12c7a0a2-e5fd-411f-806a-d230792a9422-tmp" (OuterVolumeSpecName: "tmp") pod "12c7a0a2-e5fd-411f-806a-d230792a9422" (UID: "12c7a0a2-e5fd-411f-806a-d230792a9422"). InnerVolumeSpecName "tmp". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 12 15:23:03 crc kubenswrapper[5123]: I1212 15:23:03.625281 5123 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/12c7a0a2-e5fd-411f-806a-d230792a9422-client-ca" (OuterVolumeSpecName: "client-ca") pod "12c7a0a2-e5fd-411f-806a-d230792a9422" (UID: "12c7a0a2-e5fd-411f-806a-d230792a9422"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 12 15:23:03 crc kubenswrapper[5123]: I1212 15:23:03.627122 5123 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/12c7a0a2-e5fd-411f-806a-d230792a9422-config" (OuterVolumeSpecName: "config") pod "12c7a0a2-e5fd-411f-806a-d230792a9422" (UID: "12c7a0a2-e5fd-411f-806a-d230792a9422"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 12 15:23:03 crc kubenswrapper[5123]: I1212 15:23:03.689833 5123 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-dd69b4f99-vs9nj" event={"ID":"12c7a0a2-e5fd-411f-806a-d230792a9422","Type":"ContainerDied","Data":"c39cb63d53e15af5d95a4cc2fd82631e186e8d2ec3a2048e747d4878738fc9c0"} Dec 12 15:23:03 crc kubenswrapper[5123]: I1212 15:23:03.689954 5123 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-dd69b4f99-vs9nj" Dec 12 15:23:03 crc kubenswrapper[5123]: I1212 15:23:03.726312 5123 reconciler_common.go:299] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/12c7a0a2-e5fd-411f-806a-d230792a9422-client-ca\") on node \"crc\" DevicePath \"\"" Dec 12 15:23:03 crc kubenswrapper[5123]: I1212 15:23:03.726360 5123 reconciler_common.go:299] "Volume detached for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/12c7a0a2-e5fd-411f-806a-d230792a9422-tmp\") on node \"crc\" DevicePath \"\"" Dec 12 15:23:03 crc kubenswrapper[5123]: I1212 15:23:03.726370 5123 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/12c7a0a2-e5fd-411f-806a-d230792a9422-config\") on node \"crc\" DevicePath \"\"" Dec 12 15:23:04 crc kubenswrapper[5123]: I1212 15:23:04.301867 5123 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/12c7a0a2-e5fd-411f-806a-d230792a9422-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "12c7a0a2-e5fd-411f-806a-d230792a9422" (UID: "12c7a0a2-e5fd-411f-806a-d230792a9422"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 12 15:23:04 crc kubenswrapper[5123]: I1212 15:23:04.324399 5123 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/12c7a0a2-e5fd-411f-806a-d230792a9422-kube-api-access-gt9d9" (OuterVolumeSpecName: "kube-api-access-gt9d9") pod "12c7a0a2-e5fd-411f-806a-d230792a9422" (UID: "12c7a0a2-e5fd-411f-806a-d230792a9422"). InnerVolumeSpecName "kube-api-access-gt9d9". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 12 15:23:04 crc kubenswrapper[5123]: I1212 15:23:04.332309 5123 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7bc9d579c5-4pc84"] Dec 12 15:23:04 crc kubenswrapper[5123]: I1212 15:23:04.333530 5123 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="12c7a0a2-e5fd-411f-806a-d230792a9422" containerName="route-controller-manager" Dec 12 15:23:04 crc kubenswrapper[5123]: I1212 15:23:04.333605 5123 state_mem.go:107] "Deleted CPUSet assignment" podUID="12c7a0a2-e5fd-411f-806a-d230792a9422" containerName="route-controller-manager" Dec 12 15:23:04 crc kubenswrapper[5123]: I1212 15:23:04.333847 5123 memory_manager.go:356] "RemoveStaleState removing state" podUID="12c7a0a2-e5fd-411f-806a-d230792a9422" containerName="route-controller-manager" Dec 12 15:23:04 crc kubenswrapper[5123]: I1212 15:23:04.392095 5123 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-gt9d9\" (UniqueName: \"kubernetes.io/projected/12c7a0a2-e5fd-411f-806a-d230792a9422-kube-api-access-gt9d9\") on node \"crc\" DevicePath \"\"" Dec 12 15:23:04 crc kubenswrapper[5123]: I1212 15:23:04.392142 5123 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/12c7a0a2-e5fd-411f-806a-d230792a9422-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 12 15:23:04 crc kubenswrapper[5123]: I1212 15:23:04.408913 5123 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7bc9d579c5-4pc84"] Dec 12 15:23:04 crc kubenswrapper[5123]: I1212 15:23:04.409029 5123 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7bc9d579c5-4pc84" Dec 12 15:23:04 crc kubenswrapper[5123]: I1212 15:23:04.494390 5123 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/3ef6d6fb-460e-4015-8298-ec1d5a47e5f5-tmp\") pod \"route-controller-manager-7bc9d579c5-4pc84\" (UID: \"3ef6d6fb-460e-4015-8298-ec1d5a47e5f5\") " pod="openshift-route-controller-manager/route-controller-manager-7bc9d579c5-4pc84" Dec 12 15:23:04 crc kubenswrapper[5123]: I1212 15:23:04.494856 5123 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3ef6d6fb-460e-4015-8298-ec1d5a47e5f5-config\") pod \"route-controller-manager-7bc9d579c5-4pc84\" (UID: \"3ef6d6fb-460e-4015-8298-ec1d5a47e5f5\") " pod="openshift-route-controller-manager/route-controller-manager-7bc9d579c5-4pc84" Dec 12 15:23:04 crc kubenswrapper[5123]: I1212 15:23:04.494985 5123 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/3ef6d6fb-460e-4015-8298-ec1d5a47e5f5-client-ca\") pod \"route-controller-manager-7bc9d579c5-4pc84\" (UID: \"3ef6d6fb-460e-4015-8298-ec1d5a47e5f5\") " pod="openshift-route-controller-manager/route-controller-manager-7bc9d579c5-4pc84" Dec 12 15:23:04 crc kubenswrapper[5123]: I1212 15:23:04.495088 5123 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3ef6d6fb-460e-4015-8298-ec1d5a47e5f5-serving-cert\") pod \"route-controller-manager-7bc9d579c5-4pc84\" (UID: \"3ef6d6fb-460e-4015-8298-ec1d5a47e5f5\") " pod="openshift-route-controller-manager/route-controller-manager-7bc9d579c5-4pc84" Dec 12 15:23:04 crc kubenswrapper[5123]: I1212 15:23:04.495132 5123 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g8qp6\" (UniqueName: \"kubernetes.io/projected/3ef6d6fb-460e-4015-8298-ec1d5a47e5f5-kube-api-access-g8qp6\") pod \"route-controller-manager-7bc9d579c5-4pc84\" (UID: \"3ef6d6fb-460e-4015-8298-ec1d5a47e5f5\") " pod="openshift-route-controller-manager/route-controller-manager-7bc9d579c5-4pc84" Dec 12 15:23:04 crc kubenswrapper[5123]: I1212 15:23:04.538759 5123 scope.go:117] "RemoveContainer" containerID="5ea995ab714d74db5475f0686a93d97b3714d92e036d47e083ab0e8517fa506e" Dec 12 15:23:04 crc kubenswrapper[5123]: I1212 15:23:04.596509 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/3ef6d6fb-460e-4015-8298-ec1d5a47e5f5-tmp\") pod \"route-controller-manager-7bc9d579c5-4pc84\" (UID: \"3ef6d6fb-460e-4015-8298-ec1d5a47e5f5\") " pod="openshift-route-controller-manager/route-controller-manager-7bc9d579c5-4pc84" Dec 12 15:23:04 crc kubenswrapper[5123]: I1212 15:23:04.596575 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3ef6d6fb-460e-4015-8298-ec1d5a47e5f5-config\") pod \"route-controller-manager-7bc9d579c5-4pc84\" (UID: \"3ef6d6fb-460e-4015-8298-ec1d5a47e5f5\") " pod="openshift-route-controller-manager/route-controller-manager-7bc9d579c5-4pc84" Dec 12 15:23:04 crc kubenswrapper[5123]: I1212 15:23:04.596637 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/3ef6d6fb-460e-4015-8298-ec1d5a47e5f5-client-ca\") pod \"route-controller-manager-7bc9d579c5-4pc84\" (UID: \"3ef6d6fb-460e-4015-8298-ec1d5a47e5f5\") " pod="openshift-route-controller-manager/route-controller-manager-7bc9d579c5-4pc84" Dec 12 15:23:04 crc kubenswrapper[5123]: I1212 15:23:04.596692 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" 
(UniqueName: \"kubernetes.io/secret/3ef6d6fb-460e-4015-8298-ec1d5a47e5f5-serving-cert\") pod \"route-controller-manager-7bc9d579c5-4pc84\" (UID: \"3ef6d6fb-460e-4015-8298-ec1d5a47e5f5\") " pod="openshift-route-controller-manager/route-controller-manager-7bc9d579c5-4pc84" Dec 12 15:23:04 crc kubenswrapper[5123]: I1212 15:23:04.596725 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-g8qp6\" (UniqueName: \"kubernetes.io/projected/3ef6d6fb-460e-4015-8298-ec1d5a47e5f5-kube-api-access-g8qp6\") pod \"route-controller-manager-7bc9d579c5-4pc84\" (UID: \"3ef6d6fb-460e-4015-8298-ec1d5a47e5f5\") " pod="openshift-route-controller-manager/route-controller-manager-7bc9d579c5-4pc84" Dec 12 15:23:04 crc kubenswrapper[5123]: I1212 15:23:04.599407 5123 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/3ef6d6fb-460e-4015-8298-ec1d5a47e5f5-client-ca\") pod \"route-controller-manager-7bc9d579c5-4pc84\" (UID: \"3ef6d6fb-460e-4015-8298-ec1d5a47e5f5\") " pod="openshift-route-controller-manager/route-controller-manager-7bc9d579c5-4pc84" Dec 12 15:23:04 crc kubenswrapper[5123]: I1212 15:23:04.599655 5123 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/3ef6d6fb-460e-4015-8298-ec1d5a47e5f5-tmp\") pod \"route-controller-manager-7bc9d579c5-4pc84\" (UID: \"3ef6d6fb-460e-4015-8298-ec1d5a47e5f5\") " pod="openshift-route-controller-manager/route-controller-manager-7bc9d579c5-4pc84" Dec 12 15:23:04 crc kubenswrapper[5123]: I1212 15:23:04.600054 5123 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3ef6d6fb-460e-4015-8298-ec1d5a47e5f5-config\") pod \"route-controller-manager-7bc9d579c5-4pc84\" (UID: \"3ef6d6fb-460e-4015-8298-ec1d5a47e5f5\") " pod="openshift-route-controller-manager/route-controller-manager-7bc9d579c5-4pc84" Dec 12 15:23:04 crc 
kubenswrapper[5123]: I1212 15:23:04.613405 5123 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3ef6d6fb-460e-4015-8298-ec1d5a47e5f5-serving-cert\") pod \"route-controller-manager-7bc9d579c5-4pc84\" (UID: \"3ef6d6fb-460e-4015-8298-ec1d5a47e5f5\") " pod="openshift-route-controller-manager/route-controller-manager-7bc9d579c5-4pc84" Dec 12 15:23:04 crc kubenswrapper[5123]: I1212 15:23:04.618239 5123 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66587d64c8-ts2mt"] Dec 12 15:23:04 crc kubenswrapper[5123]: I1212 15:23:04.618213 5123 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-g8qp6\" (UniqueName: \"kubernetes.io/projected/3ef6d6fb-460e-4015-8298-ec1d5a47e5f5-kube-api-access-g8qp6\") pod \"route-controller-manager-7bc9d579c5-4pc84\" (UID: \"3ef6d6fb-460e-4015-8298-ec1d5a47e5f5\") " pod="openshift-route-controller-manager/route-controller-manager-7bc9d579c5-4pc84" Dec 12 15:23:04 crc kubenswrapper[5123]: I1212 15:23:04.730480 5123 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7bc9d579c5-4pc84" Dec 12 15:23:04 crc kubenswrapper[5123]: I1212 15:23:04.750168 5123 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66587d64c8-ts2mt" event={"ID":"40d826be-27f2-49a2-afa5-a6a9cc8f9bf6","Type":"ContainerStarted","Data":"b1af30fc90582061b29e0e25741473074ca9bdb2bf34ac2107c10748b3b3460c"} Dec 12 15:23:05 crc kubenswrapper[5123]: I1212 15:23:05.128155 5123 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-12-crc"] Dec 12 15:23:05 crc kubenswrapper[5123]: I1212 15:23:05.134107 5123 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-dd69b4f99-vs9nj"] Dec 12 15:23:05 crc kubenswrapper[5123]: I1212 15:23:05.147144 5123 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-dd69b4f99-vs9nj"] Dec 12 15:23:05 crc kubenswrapper[5123]: I1212 15:23:05.338572 5123 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-12-crc"] Dec 12 15:23:05 crc kubenswrapper[5123]: I1212 15:23:05.338906 5123 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7bc9d579c5-4pc84"] Dec 12 15:23:05 crc kubenswrapper[5123]: I1212 15:23:05.670207 5123 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="12c7a0a2-e5fd-411f-806a-d230792a9422" path="/var/lib/kubelet/pods/12c7a0a2-e5fd-411f-806a-d230792a9422/volumes" Dec 12 15:23:06 crc kubenswrapper[5123]: I1212 15:23:05.962497 5123 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-sbt5r" event={"ID":"402bc75d-15b2-46d8-9455-d2d8c8c7c47a","Type":"ContainerStarted","Data":"ac1b5fd11f35eb3141c2a83a94e8eb7a9cfa0c5f6e02a4f2f12082950305fe86"} Dec 12 15:23:06 crc kubenswrapper[5123]: 
I1212 15:23:05.975463 5123 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rqdb6" event={"ID":"fb848e09-5c56-451f-a83b-d2e794432b47","Type":"ContainerStarted","Data":"a908cc8c6181698a68068ba9c81b79d7aebb3baa0de66d0fd4bb4f3b2d783250"} Dec 12 15:23:06 crc kubenswrapper[5123]: I1212 15:23:05.977903 5123 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-g9nc4" event={"ID":"68ef1469-eefc-4e7d-b8a5-bf0550b84694","Type":"ContainerStarted","Data":"1cd0e56617316aee9668c794abe0642e1bb156654724498a1c656e1f2071cf9a"} Dec 12 15:23:06 crc kubenswrapper[5123]: I1212 15:23:05.980411 5123 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-747b44746d-xhd9t" event={"ID":"09107a60-87da-4e17-9cc0-6dce06396ab6","Type":"ContainerStarted","Data":"1cb7ed849b0b323ebbde3ae601d9377223b207f869b11bb3be930864ab7d45cb"} Dec 12 15:23:06 crc kubenswrapper[5123]: I1212 15:23:05.982608 5123 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-pkqnl" event={"ID":"320bf855-399c-4de0-bbbd-8dcdcb5d9e2a","Type":"ContainerStarted","Data":"77c0ddd8665197f028c94c2ed7caea3ceec2a8a57efac9660fb59b3c6ef98fe5"} Dec 12 15:23:06 crc kubenswrapper[5123]: I1212 15:23:05.983779 5123 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5d49859f95-pcm7k" event={"ID":"01a8c257-f895-4044-aec0-ea9cb012126e","Type":"ContainerStarted","Data":"3ce61d3628184b4371570c7bfee551c47ad928ce8167aefb50b6088777c6202a"} Dec 12 15:23:06 crc kubenswrapper[5123]: I1212 15:23:05.984563 5123 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-12-crc" event={"ID":"688fe19e-6c1b-42c8-8245-da6b56af433f","Type":"ContainerStarted","Data":"7378fde10932bed8068b6b45ea17ba4b6b24565a4520b141df436583ec8aab77"} Dec 12 15:23:06 crc kubenswrapper[5123]: I1212 15:23:05.985931 5123 generic.go:358] "Generic 
(PLEG): container finished" podID="f8cdb4da-d02c-42f7-9f61-cb5e162d26a7" containerID="e666e99973507b0e82a9bcc9dc23c6459b482cd949330b0548860064f0ceaff1" exitCode=0 Dec 12 15:23:06 crc kubenswrapper[5123]: I1212 15:23:05.986002 5123 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-shltm" event={"ID":"f8cdb4da-d02c-42f7-9f61-cb5e162d26a7","Type":"ContainerDied","Data":"e666e99973507b0e82a9bcc9dc23c6459b482cd949330b0548860064f0ceaff1"} Dec 12 15:23:06 crc kubenswrapper[5123]: I1212 15:23:05.989859 5123 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7bc9d579c5-4pc84" event={"ID":"3ef6d6fb-460e-4015-8298-ec1d5a47e5f5","Type":"ContainerStarted","Data":"37058c615e68f8124c64f7a1e5ffa26d077baed2c8ea9c6246e011c1e2a66551"} Dec 12 15:23:06 crc kubenswrapper[5123]: I1212 15:23:05.991755 5123 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8rkl4" event={"ID":"e501d3fb-0bf6-4f90-bafb-521b5f6c8b9e","Type":"ContainerStarted","Data":"8837f0b3cc20380deb38e08eda5e23b508a5ad0f56c716bc8dfc0aa2f663c249"} Dec 12 15:23:06 crc kubenswrapper[5123]: I1212 15:23:05.995326 5123 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-12-crc" event={"ID":"0fe093c7-c5cf-4e61-be4d-5c44545546d7","Type":"ContainerStarted","Data":"b4cdc6810de5a66a23a41a3882d040cdfd3a2ca22d893b17c30ba2574bb4b85a"} Dec 12 15:23:06 crc kubenswrapper[5123]: I1212 15:23:06.101489 5123 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-console/downloads-747b44746d-xhd9t" Dec 12 15:23:06 crc kubenswrapper[5123]: I1212 15:23:06.101550 5123 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-controller-manager/controller-manager-5d49859f95-pcm7k" Dec 12 15:23:06 crc kubenswrapper[5123]: I1212 15:23:06.101621 5123 patch_prober.go:28] interesting 
pod/downloads-747b44746d-xhd9t container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.21:8080/\": dial tcp 10.217.0.21:8080: connect: connection refused" start-of-body= Dec 12 15:23:06 crc kubenswrapper[5123]: I1212 15:23:06.101672 5123 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-747b44746d-xhd9t" podUID="09107a60-87da-4e17-9cc0-6dce06396ab6" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.21:8080/\": dial tcp 10.217.0.21:8080: connect: connection refused" Dec 12 15:23:06 crc kubenswrapper[5123]: I1212 15:23:06.239248 5123 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-5d49859f95-pcm7k" podStartSLOduration=43.239227485 podStartE2EDuration="43.239227485s" podCreationTimestamp="2025-12-12 15:22:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 15:23:06.238257475 +0000 UTC m=+215.048209986" watchObservedRunningTime="2025-12-12 15:23:06.239227485 +0000 UTC m=+215.049179996" Dec 12 15:23:07 crc kubenswrapper[5123]: I1212 15:23:07.091680 5123 generic.go:358] "Generic (PLEG): container finished" podID="fb848e09-5c56-451f-a83b-d2e794432b47" containerID="a908cc8c6181698a68068ba9c81b79d7aebb3baa0de66d0fd4bb4f3b2d783250" exitCode=0 Dec 12 15:23:07 crc kubenswrapper[5123]: I1212 15:23:07.093946 5123 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rqdb6" event={"ID":"fb848e09-5c56-451f-a83b-d2e794432b47","Type":"ContainerDied","Data":"a908cc8c6181698a68068ba9c81b79d7aebb3baa0de66d0fd4bb4f3b2d783250"} Dec 12 15:23:07 crc kubenswrapper[5123]: I1212 15:23:07.101069 5123 patch_prober.go:28] interesting pod/controller-manager-5d49859f95-pcm7k container/controller-manager namespace/openshift-controller-manager: Readiness probe 
status=failure output="Get \"https://10.217.0.56:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Dec 12 15:23:07 crc kubenswrapper[5123]: I1212 15:23:07.101165 5123 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-5d49859f95-pcm7k" podUID="01a8c257-f895-4044-aec0-ea9cb012126e" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.56:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Dec 12 15:23:07 crc kubenswrapper[5123]: I1212 15:23:07.114606 5123 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-d4cwn" event={"ID":"c772c7c7-2e1a-46a6-9b7d-e07aa2522d56","Type":"ContainerStarted","Data":"ae80e3d77527f320397e045bb37bbe940cbb6dcccd786c6ae233e1b7ab83ecab"} Dec 12 15:23:07 crc kubenswrapper[5123]: I1212 15:23:07.141720 5123 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66587d64c8-ts2mt" event={"ID":"40d826be-27f2-49a2-afa5-a6a9cc8f9bf6","Type":"ContainerStarted","Data":"6185e29ddf1a26fa821be58425f349b04d6f0bcd319ae8026df7f81fed1e5e3f"} Dec 12 15:23:07 crc kubenswrapper[5123]: I1212 15:23:07.148325 5123 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-gbdrq" event={"ID":"a077f03f-9a73-4019-912b-e2ebdf5308a5","Type":"ContainerStarted","Data":"88b3d44364f7f615e45d147078aee3317b0b602004fbb28493f7947bfc434284"} Dec 12 15:23:07 crc kubenswrapper[5123]: I1212 15:23:07.382134 5123 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-fjqk7" event={"ID":"78a70363-f10e-4d12-8279-c7f7f3b8402b","Type":"ContainerStarted","Data":"31d34288be22fda87b5b38e3694e1fdc9f7cd37d6cc800d3e50607d9a7cd9733"} Dec 12 15:23:07 crc kubenswrapper[5123]: I1212 
15:23:07.440825 5123 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7bc9d579c5-4pc84" event={"ID":"3ef6d6fb-460e-4015-8298-ec1d5a47e5f5","Type":"ContainerStarted","Data":"58cbbb1b00f8f0dd1a7148bd3e3781f3886425883069c8053780639e3ac39e14"} Dec 12 15:23:07 crc kubenswrapper[5123]: I1212 15:23:07.444941 5123 generic.go:358] "Generic (PLEG): container finished" podID="e501d3fb-0bf6-4f90-bafb-521b5f6c8b9e" containerID="8837f0b3cc20380deb38e08eda5e23b508a5ad0f56c716bc8dfc0aa2f663c249" exitCode=0 Dec 12 15:23:07 crc kubenswrapper[5123]: I1212 15:23:07.445028 5123 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8rkl4" event={"ID":"e501d3fb-0bf6-4f90-bafb-521b5f6c8b9e","Type":"ContainerDied","Data":"8837f0b3cc20380deb38e08eda5e23b508a5ad0f56c716bc8dfc0aa2f663c249"} Dec 12 15:23:07 crc kubenswrapper[5123]: I1212 15:23:07.453001 5123 generic.go:358] "Generic (PLEG): container finished" podID="402bc75d-15b2-46d8-9455-d2d8c8c7c47a" containerID="ac1b5fd11f35eb3141c2a83a94e8eb7a9cfa0c5f6e02a4f2f12082950305fe86" exitCode=0 Dec 12 15:23:07 crc kubenswrapper[5123]: I1212 15:23:07.453096 5123 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-sbt5r" event={"ID":"402bc75d-15b2-46d8-9455-d2d8c8c7c47a","Type":"ContainerDied","Data":"ac1b5fd11f35eb3141c2a83a94e8eb7a9cfa0c5f6e02a4f2f12082950305fe86"} Dec 12 15:23:08 crc kubenswrapper[5123]: I1212 15:23:08.580561 5123 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-route-controller-manager/route-controller-manager-7bc9d579c5-4pc84" Dec 12 15:23:08 crc kubenswrapper[5123]: I1212 15:23:08.583791 5123 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-image-registry/image-registry-66587d64c8-ts2mt" Dec 12 15:23:08 crc kubenswrapper[5123]: I1212 15:23:08.583929 5123 patch_prober.go:28] interesting 
pod/downloads-747b44746d-xhd9t container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.21:8080/\": dial tcp 10.217.0.21:8080: connect: connection refused" start-of-body= Dec 12 15:23:08 crc kubenswrapper[5123]: I1212 15:23:08.583987 5123 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-747b44746d-xhd9t" podUID="09107a60-87da-4e17-9cc0-6dce06396ab6" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.21:8080/\": dial tcp 10.217.0.21:8080: connect: connection refused" Dec 12 15:23:08 crc kubenswrapper[5123]: I1212 15:23:08.626054 5123 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-5d49859f95-pcm7k" Dec 12 15:23:08 crc kubenswrapper[5123]: I1212 15:23:08.688682 5123 generic.go:358] "Generic (PLEG): container finished" podID="c772c7c7-2e1a-46a6-9b7d-e07aa2522d56" containerID="ae80e3d77527f320397e045bb37bbe940cbb6dcccd786c6ae233e1b7ab83ecab" exitCode=0 Dec 12 15:23:08 crc kubenswrapper[5123]: I1212 15:23:08.690327 5123 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-d4cwn" event={"ID":"c772c7c7-2e1a-46a6-9b7d-e07aa2522d56","Type":"ContainerDied","Data":"ae80e3d77527f320397e045bb37bbe940cbb6dcccd786c6ae233e1b7ab83ecab"} Dec 12 15:23:08 crc kubenswrapper[5123]: I1212 15:23:08.816702 5123 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-7bc9d579c5-4pc84" podStartSLOduration=45.816676409 podStartE2EDuration="45.816676409s" podCreationTimestamp="2025-12-12 15:22:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 15:23:08.815374058 +0000 UTC m=+217.625326579" watchObservedRunningTime="2025-12-12 15:23:08.816676409 +0000 UTC m=+217.626628920" Dec 12 
15:23:08 crc kubenswrapper[5123]: I1212 15:23:08.866598 5123 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-66587d64c8-ts2mt" podStartSLOduration=184.866563563 podStartE2EDuration="3m4.866563563s" podCreationTimestamp="2025-12-12 15:20:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 15:23:08.86553436 +0000 UTC m=+217.675486901" watchObservedRunningTime="2025-12-12 15:23:08.866563563 +0000 UTC m=+217.676516074"
Dec 12 15:23:09 crc kubenswrapper[5123]: I1212 15:23:09.141053 5123 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-7bc9d579c5-4pc84"
Dec 12 15:23:09 crc kubenswrapper[5123]: I1212 15:23:09.243067 5123 patch_prober.go:28] interesting pod/downloads-747b44746d-xhd9t container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.21:8080/\": dial tcp 10.217.0.21:8080: connect: connection refused" start-of-body=
Dec 12 15:23:09 crc kubenswrapper[5123]: I1212 15:23:09.243487 5123 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-747b44746d-xhd9t" podUID="09107a60-87da-4e17-9cc0-6dce06396ab6" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.21:8080/\": dial tcp 10.217.0.21:8080: connect: connection refused"
Dec 12 15:23:09 crc kubenswrapper[5123]: I1212 15:23:09.783134 5123 generic.go:358] "Generic (PLEG): container finished" podID="78a70363-f10e-4d12-8279-c7f7f3b8402b" containerID="31d34288be22fda87b5b38e3694e1fdc9f7cd37d6cc800d3e50607d9a7cd9733" exitCode=0
Dec 12 15:23:09 crc kubenswrapper[5123]: I1212 15:23:09.783243 5123 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-fjqk7" event={"ID":"78a70363-f10e-4d12-8279-c7f7f3b8402b","Type":"ContainerDied","Data":"31d34288be22fda87b5b38e3694e1fdc9f7cd37d6cc800d3e50607d9a7cd9733"}
Dec 12 15:23:09 crc kubenswrapper[5123]: I1212 15:23:09.803915 5123 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-12-crc" event={"ID":"688fe19e-6c1b-42c8-8245-da6b56af433f","Type":"ContainerStarted","Data":"b6659eb7ba57bf7fabad5d3481ec97cd40ed8cf64c70ce559e7474327d5a709f"}
Dec 12 15:23:09 crc kubenswrapper[5123]: I1212 15:23:09.817329 5123 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-shltm" event={"ID":"f8cdb4da-d02c-42f7-9f61-cb5e162d26a7","Type":"ContainerStarted","Data":"0fb530f9580a642bd8a5f0f8c69b353f2240d6cc25437ea1ef705c382ba87876"}
Dec 12 15:23:09 crc kubenswrapper[5123]: I1212 15:23:09.832825 5123 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8rkl4" event={"ID":"e501d3fb-0bf6-4f90-bafb-521b5f6c8b9e","Type":"ContainerStarted","Data":"4157de456a6422c5986f61239ff216e3ee850e4f7ac535c66dc521c8a3bd3dd6"}
Dec 12 15:23:09 crc kubenswrapper[5123]: I1212 15:23:09.839106 5123 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-12-crc" event={"ID":"0fe093c7-c5cf-4e61-be4d-5c44545546d7","Type":"ContainerStarted","Data":"63e78be8500a2f53c3ca45a03b6886e8cb5ba1864e54e9a23289ec894e397b1e"}
Dec 12 15:23:09 crc kubenswrapper[5123]: I1212 15:23:09.843073 5123 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-sbt5r" event={"ID":"402bc75d-15b2-46d8-9455-d2d8c8c7c47a","Type":"ContainerStarted","Data":"713b5c76cf2f495bde0301d542706edf1e2bbd20471abca2c9be318cf900a8a6"}
Dec 12 15:23:09 crc kubenswrapper[5123]: I1212 15:23:09.846360 5123 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rqdb6" event={"ID":"fb848e09-5c56-451f-a83b-d2e794432b47","Type":"ContainerStarted","Data":"2d952e519ba41bdb60267067cfd94ca88342fbfd5a2bfda54f3ae64f2d76b39b"}
Dec 12 15:23:09 crc kubenswrapper[5123]: I1212 15:23:09.849825 5123 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-g9nc4" event={"ID":"68ef1469-eefc-4e7d-b8a5-bf0550b84694","Type":"ContainerStarted","Data":"34f3bb5b9136b2680a9241f86faafa8083db2a8280f940a23f1b14d84b990e63"}
Dec 12 15:23:09 crc kubenswrapper[5123]: I1212 15:23:09.963051 5123 patch_prober.go:28] interesting pod/downloads-747b44746d-xhd9t container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.21:8080/\": dial tcp 10.217.0.21:8080: connect: connection refused" start-of-body=
Dec 12 15:23:09 crc kubenswrapper[5123]: I1212 15:23:09.963147 5123 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-747b44746d-xhd9t" podUID="09107a60-87da-4e17-9cc0-6dce06396ab6" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.21:8080/\": dial tcp 10.217.0.21:8080: connect: connection refused"
Dec 12 15:23:10 crc kubenswrapper[5123]: I1212 15:23:10.043408 5123 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-8rkl4" podStartSLOduration=19.906698439 podStartE2EDuration="1m6.043382817s" podCreationTimestamp="2025-12-12 15:22:04 +0000 UTC" firstStartedPulling="2025-12-12 15:22:17.684670667 +0000 UTC m=+166.494623178" lastFinishedPulling="2025-12-12 15:23:03.821355045 +0000 UTC m=+212.631307556" observedRunningTime="2025-12-12 15:23:10.009511872 +0000 UTC m=+218.819464393" watchObservedRunningTime="2025-12-12 15:23:10.043382817 +0000 UTC m=+218.853335328"
Dec 12 15:23:10 crc kubenswrapper[5123]: I1212 15:23:10.079844 5123 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-shltm" podStartSLOduration=21.086788249 podStartE2EDuration="1m7.079822112s" podCreationTimestamp="2025-12-12 15:22:03 +0000 UTC" firstStartedPulling="2025-12-12 15:22:17.680569498 +0000 UTC m=+166.490522019" lastFinishedPulling="2025-12-12 15:23:03.673603371 +0000 UTC m=+212.483555882" observedRunningTime="2025-12-12 15:23:10.044302076 +0000 UTC m=+218.854254597" watchObservedRunningTime="2025-12-12 15:23:10.079822112 +0000 UTC m=+218.889774623"
Dec 12 15:23:10 crc kubenswrapper[5123]: I1212 15:23:10.082045 5123 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="hostpath-provisioner/csi-hostpathplugin-g9nc4" podStartSLOduration=103.082026492 podStartE2EDuration="1m43.082026492s" podCreationTimestamp="2025-12-12 15:21:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 15:23:10.07812868 +0000 UTC m=+218.888081221" watchObservedRunningTime="2025-12-12 15:23:10.082026492 +0000 UTC m=+218.891979023"
Dec 12 15:23:10 crc kubenswrapper[5123]: I1212 15:23:10.131171 5123 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-sbt5r" podStartSLOduration=19.39248131 podStartE2EDuration="1m8.131136831s" podCreationTimestamp="2025-12-12 15:22:02 +0000 UTC" firstStartedPulling="2025-12-12 15:22:15.074278821 +0000 UTC m=+163.884231332" lastFinishedPulling="2025-12-12 15:23:03.812934352 +0000 UTC m=+212.622886853" observedRunningTime="2025-12-12 15:23:10.125733793 +0000 UTC m=+218.935686304" watchObservedRunningTime="2025-12-12 15:23:10.131136831 +0000 UTC m=+218.941089342"
Dec 12 15:23:10 crc kubenswrapper[5123]: I1212 15:23:10.159072 5123 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/revision-pruner-12-crc" podStartSLOduration=17.159042901 podStartE2EDuration="17.159042901s" podCreationTimestamp="2025-12-12 15:22:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 15:23:10.150378351 +0000 UTC m=+218.960330872" watchObservedRunningTime="2025-12-12 15:23:10.159042901 +0000 UTC m=+218.968995412"
Dec 12 15:23:10 crc kubenswrapper[5123]: I1212 15:23:10.182771 5123 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/installer-12-crc" podStartSLOduration=11.18274745 podStartE2EDuration="11.18274745s" podCreationTimestamp="2025-12-12 15:22:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 15:23:10.181906003 +0000 UTC m=+218.991858514" watchObservedRunningTime="2025-12-12 15:23:10.18274745 +0000 UTC m=+218.992699961"
Dec 12 15:23:10 crc kubenswrapper[5123]: I1212 15:23:10.862085 5123 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-d4cwn" event={"ID":"c772c7c7-2e1a-46a6-9b7d-e07aa2522d56","Type":"ContainerStarted","Data":"e033fb96dfd5dae3340ee0a11a0a89e1693a1941427d6956150b8c62c9446774"}
Dec 12 15:23:10 crc kubenswrapper[5123]: I1212 15:23:10.864249 5123 generic.go:358] "Generic (PLEG): container finished" podID="320bf855-399c-4de0-bbbd-8dcdcb5d9e2a" containerID="77c0ddd8665197f028c94c2ed7caea3ceec2a8a57efac9660fb59b3c6ef98fe5" exitCode=0
Dec 12 15:23:10 crc kubenswrapper[5123]: I1212 15:23:10.864392 5123 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-pkqnl" event={"ID":"320bf855-399c-4de0-bbbd-8dcdcb5d9e2a","Type":"ContainerDied","Data":"77c0ddd8665197f028c94c2ed7caea3ceec2a8a57efac9660fb59b3c6ef98fe5"}
Dec 12 15:23:10 crc kubenswrapper[5123]: I1212 15:23:10.872256 5123 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-fjqk7" event={"ID":"78a70363-f10e-4d12-8279-c7f7f3b8402b","Type":"ContainerStarted","Data":"38f4fb98b77c25f9aa70e039d23e646c91822eb7615f1898a508806e389cbccc"}
Dec 12 15:23:10 crc kubenswrapper[5123]: I1212 15:23:10.957782 5123 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-d4cwn" podStartSLOduration=20.712071668 podStartE2EDuration="1m8.957725575s" podCreationTimestamp="2025-12-12 15:22:02 +0000 UTC" firstStartedPulling="2025-12-12 15:22:16.377654152 +0000 UTC m=+165.187606673" lastFinishedPulling="2025-12-12 15:23:04.623308069 +0000 UTC m=+213.433260580" observedRunningTime="2025-12-12 15:23:10.893589096 +0000 UTC m=+219.703541607" watchObservedRunningTime="2025-12-12 15:23:10.957725575 +0000 UTC m=+219.767678086"
Dec 12 15:23:11 crc kubenswrapper[5123]: I1212 15:23:11.039079 5123 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-fjqk7" podStartSLOduration=21.686695615 podStartE2EDuration="1m9.039020567s" podCreationTimestamp="2025-12-12 15:22:02 +0000 UTC" firstStartedPulling="2025-12-12 15:22:16.464555763 +0000 UTC m=+165.274508274" lastFinishedPulling="2025-12-12 15:23:03.816880725 +0000 UTC m=+212.626833226" observedRunningTime="2025-12-12 15:23:10.959868891 +0000 UTC m=+219.769821422" watchObservedRunningTime="2025-12-12 15:23:11.039020567 +0000 UTC m=+219.848973078"
Dec 12 15:23:11 crc kubenswrapper[5123]: I1212 15:23:11.892176 5123 generic.go:358] "Generic (PLEG): container finished" podID="a077f03f-9a73-4019-912b-e2ebdf5308a5" containerID="88b3d44364f7f615e45d147078aee3317b0b602004fbb28493f7947bfc434284" exitCode=0
Dec 12 15:23:11 crc kubenswrapper[5123]: I1212 15:23:11.892279 5123 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-gbdrq" event={"ID":"a077f03f-9a73-4019-912b-e2ebdf5308a5","Type":"ContainerDied","Data":"88b3d44364f7f615e45d147078aee3317b0b602004fbb28493f7947bfc434284"}
Dec 12 15:23:11 crc kubenswrapper[5123]: I1212 15:23:11.921633 5123 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-rqdb6" podStartSLOduration=23.339890522 podStartE2EDuration="1m9.921607804s" podCreationTimestamp="2025-12-12 15:22:02 +0000 UTC" firstStartedPulling="2025-12-12 15:22:17.762453011 +0000 UTC m=+166.572405522" lastFinishedPulling="2025-12-12 15:23:04.344170303 +0000 UTC m=+213.154122804" observedRunningTime="2025-12-12 15:23:11.058485854 +0000 UTC m=+219.868438375" watchObservedRunningTime="2025-12-12 15:23:11.921607804 +0000 UTC m=+220.731560315"
Dec 12 15:23:14 crc kubenswrapper[5123]: I1212 15:23:14.075053 5123 ???:1] "http: TLS handshake error from 192.168.126.11:46744: no serving certificate available for the kubelet"
Dec 12 15:23:14 crc kubenswrapper[5123]: I1212 15:23:14.545429 5123 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-sbt5r"
Dec 12 15:23:14 crc kubenswrapper[5123]: I1212 15:23:14.545920 5123 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/community-operators-sbt5r"
Dec 12 15:23:14 crc kubenswrapper[5123]: I1212 15:23:14.926649 5123 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-pkqnl" event={"ID":"320bf855-399c-4de0-bbbd-8dcdcb5d9e2a","Type":"ContainerStarted","Data":"892567596a2ee965736c8fe6691412873ba2da4ef6b77abea638997fed993de4"}
Dec 12 15:23:14 crc kubenswrapper[5123]: I1212 15:23:14.930184 5123 generic.go:358] "Generic (PLEG): container finished" podID="0fe093c7-c5cf-4e61-be4d-5c44545546d7" containerID="63e78be8500a2f53c3ca45a03b6886e8cb5ba1864e54e9a23289ec894e397b1e" exitCode=0
Dec 12 15:23:14 crc kubenswrapper[5123]: I1212 15:23:14.930301 5123 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-12-crc" event={"ID":"0fe093c7-c5cf-4e61-be4d-5c44545546d7","Type":"ContainerDied","Data":"63e78be8500a2f53c3ca45a03b6886e8cb5ba1864e54e9a23289ec894e397b1e"}
Dec 12 15:23:15 crc kubenswrapper[5123]: I1212 15:23:15.986542 5123 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-pkqnl" podStartSLOduration=25.589798162 podStartE2EDuration="1m12.98650437s" podCreationTimestamp="2025-12-12 15:22:03 +0000 UTC" firstStartedPulling="2025-12-12 15:22:16.416153902 +0000 UTC m=+165.226106413" lastFinishedPulling="2025-12-12 15:23:03.81286012 +0000 UTC m=+212.622812621" observedRunningTime="2025-12-12 15:23:15.982098113 +0000 UTC m=+224.792050624" watchObservedRunningTime="2025-12-12 15:23:15.98650437 +0000 UTC m=+224.796457111"
Dec 12 15:23:16 crc kubenswrapper[5123]: I1212 15:23:16.122213 5123 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-d4cwn"
Dec 12 15:23:16 crc kubenswrapper[5123]: I1212 15:23:16.122318 5123 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/certified-operators-d4cwn"
Dec 12 15:23:16 crc kubenswrapper[5123]: I1212 15:23:16.294561 5123 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/certified-operators-fjqk7"
Dec 12 15:23:16 crc kubenswrapper[5123]: I1212 15:23:16.294660 5123 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-fjqk7"
Dec 12 15:23:16 crc kubenswrapper[5123]: I1212 15:23:16.577658 5123 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-sbt5r" podUID="402bc75d-15b2-46d8-9455-d2d8c8c7c47a" containerName="registry-server" probeResult="failure" output=<
Dec 12 15:23:16 crc kubenswrapper[5123]: timeout: failed to connect service ":50051" within 1s
Dec 12 15:23:16 crc kubenswrapper[5123]: >
Dec 12 15:23:16 crc kubenswrapper[5123]: I1212 15:23:16.825542 5123 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-12-crc"
Dec 12 15:23:16 crc kubenswrapper[5123]: I1212 15:23:16.930968 5123 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/0fe093c7-c5cf-4e61-be4d-5c44545546d7-kubelet-dir\") pod \"0fe093c7-c5cf-4e61-be4d-5c44545546d7\" (UID: \"0fe093c7-c5cf-4e61-be4d-5c44545546d7\") "
Dec 12 15:23:16 crc kubenswrapper[5123]: I1212 15:23:16.931199 5123 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0fe093c7-c5cf-4e61-be4d-5c44545546d7-kube-api-access\") pod \"0fe093c7-c5cf-4e61-be4d-5c44545546d7\" (UID: \"0fe093c7-c5cf-4e61-be4d-5c44545546d7\") "
Dec 12 15:23:16 crc kubenswrapper[5123]: I1212 15:23:16.931874 5123 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0fe093c7-c5cf-4e61-be4d-5c44545546d7-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "0fe093c7-c5cf-4e61-be4d-5c44545546d7" (UID: "0fe093c7-c5cf-4e61-be4d-5c44545546d7"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Dec 12 15:23:16 crc kubenswrapper[5123]: I1212 15:23:16.972795 5123 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-gbdrq" event={"ID":"a077f03f-9a73-4019-912b-e2ebdf5308a5","Type":"ContainerStarted","Data":"8718bebb6bf009e7a8b5fa121fb7e6e2a87a73f30ad42a88445f4a9921822805"}
Dec 12 15:23:16 crc kubenswrapper[5123]: I1212 15:23:16.975238 5123 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-d4cwn"
Dec 12 15:23:16 crc kubenswrapper[5123]: I1212 15:23:16.975293 5123 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-12-crc"
Dec 12 15:23:16 crc kubenswrapper[5123]: I1212 15:23:16.975302 5123 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-12-crc" event={"ID":"0fe093c7-c5cf-4e61-be4d-5c44545546d7","Type":"ContainerDied","Data":"b4cdc6810de5a66a23a41a3882d040cdfd3a2ca22d893b17c30ba2574bb4b85a"}
Dec 12 15:23:16 crc kubenswrapper[5123]: I1212 15:23:16.975343 5123 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b4cdc6810de5a66a23a41a3882d040cdfd3a2ca22d893b17c30ba2574bb4b85a"
Dec 12 15:23:17 crc kubenswrapper[5123]: I1212 15:23:17.033387 5123 reconciler_common.go:299] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/0fe093c7-c5cf-4e61-be4d-5c44545546d7-kubelet-dir\") on node \"crc\" DevicePath \"\""
Dec 12 15:23:17 crc kubenswrapper[5123]: I1212 15:23:17.088631 5123 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0fe093c7-c5cf-4e61-be4d-5c44545546d7-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "0fe093c7-c5cf-4e61-be4d-5c44545546d7" (UID: "0fe093c7-c5cf-4e61-be4d-5c44545546d7"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Dec 12 15:23:17 crc kubenswrapper[5123]: I1212 15:23:17.091633 5123 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-fjqk7"
Dec 12 15:23:17 crc kubenswrapper[5123]: I1212 15:23:17.116269 5123 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-d4cwn"
Dec 12 15:23:17 crc kubenswrapper[5123]: I1212 15:23:17.134605 5123 reconciler_common.go:299] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0fe093c7-c5cf-4e61-be4d-5c44545546d7-kube-api-access\") on node \"crc\" DevicePath \"\""
Dec 12 15:23:17 crc kubenswrapper[5123]: I1212 15:23:17.146182 5123 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-fjqk7"
Dec 12 15:23:17 crc kubenswrapper[5123]: I1212 15:23:17.712015 5123 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/redhat-marketplace-8rkl4"
Dec 12 15:23:17 crc kubenswrapper[5123]: I1212 15:23:17.712113 5123 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-8rkl4"
Dec 12 15:23:17 crc kubenswrapper[5123]: I1212 15:23:17.753043 5123 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/redhat-operators-pkqnl"
Dec 12 15:23:17 crc kubenswrapper[5123]: I1212 15:23:17.753146 5123 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-pkqnl"
Dec 12 15:23:17 crc kubenswrapper[5123]: I1212 15:23:17.754906 5123 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-rqdb6"
Dec 12 15:23:17 crc kubenswrapper[5123]: I1212 15:23:17.754975 5123 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/community-operators-rqdb6"
Dec 12 15:23:17 crc kubenswrapper[5123]: I1212 15:23:17.820926 5123 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-8rkl4"
Dec 12 15:23:17 crc kubenswrapper[5123]: I1212 15:23:17.821549 5123 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-rqdb6"
Dec 12 15:23:18 crc kubenswrapper[5123]: I1212 15:23:18.039018 5123 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-shltm"
Dec 12 15:23:18 crc kubenswrapper[5123]: I1212 15:23:18.039558 5123 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/redhat-marketplace-shltm"
Dec 12 15:23:18 crc kubenswrapper[5123]: I1212 15:23:18.045166 5123 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-gbdrq" podStartSLOduration=26.959172822 podStartE2EDuration="1m14.04512344s" podCreationTimestamp="2025-12-12 15:22:04 +0000 UTC" firstStartedPulling="2025-12-12 15:22:17.516727999 +0000 UTC m=+166.326680510" lastFinishedPulling="2025-12-12 15:23:04.602678617 +0000 UTC m=+213.412631128" observedRunningTime="2025-12-12 15:23:18.038540824 +0000 UTC m=+226.848493355" watchObservedRunningTime="2025-12-12 15:23:18.04512344 +0000 UTC m=+226.855075961"
Dec 12 15:23:18 crc kubenswrapper[5123]: I1212 15:23:18.068282 5123 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-8rkl4"
Dec 12 15:23:18 crc kubenswrapper[5123]: I1212 15:23:18.176213 5123 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-shltm"
Dec 12 15:23:18 crc kubenswrapper[5123]: I1212 15:23:18.184285 5123 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-rqdb6"
Dec 12 15:23:18 crc kubenswrapper[5123]: I1212 15:23:18.792487 5123 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-d4cwn"]
Dec 12 15:23:18 crc kubenswrapper[5123]: I1212 15:23:18.849633 5123 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-pkqnl" podUID="320bf855-399c-4de0-bbbd-8dcdcb5d9e2a" containerName="registry-server" probeResult="failure" output=<
Dec 12 15:23:18 crc kubenswrapper[5123]: timeout: failed to connect service ":50051" within 1s
Dec 12 15:23:18 crc kubenswrapper[5123]: >
Dec 12 15:23:19 crc kubenswrapper[5123]: I1212 15:23:19.215850 5123 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-d4cwn" podUID="c772c7c7-2e1a-46a6-9b7d-e07aa2522d56" containerName="registry-server" containerID="cri-o://e033fb96dfd5dae3340ee0a11a0a89e1693a1941427d6956150b8c62c9446774" gracePeriod=2
Dec 12 15:23:19 crc kubenswrapper[5123]: I1212 15:23:19.908008 5123 patch_prober.go:28] interesting pod/downloads-747b44746d-xhd9t container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.21:8080/\": dial tcp 10.217.0.21:8080: connect: connection refused" start-of-body=
Dec 12 15:23:19 crc kubenswrapper[5123]: I1212 15:23:19.908253 5123 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-747b44746d-xhd9t" podUID="09107a60-87da-4e17-9cc0-6dce06396ab6" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.21:8080/\": dial tcp 10.217.0.21:8080: connect: connection refused"
Dec 12 15:23:19 crc kubenswrapper[5123]: I1212 15:23:19.939492 5123 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-shltm"
Dec 12 15:23:19 crc kubenswrapper[5123]: I1212 15:23:19.969980 5123 patch_prober.go:28] interesting pod/downloads-747b44746d-xhd9t container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.21:8080/\": dial tcp 10.217.0.21:8080: connect: connection refused" start-of-body=
Dec 12 15:23:19 crc kubenswrapper[5123]: I1212 15:23:19.971278 5123 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-747b44746d-xhd9t" podUID="09107a60-87da-4e17-9cc0-6dce06396ab6" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.21:8080/\": dial tcp 10.217.0.21:8080: connect: connection refused"
Dec 12 15:23:20 crc kubenswrapper[5123]: I1212 15:23:20.135155 5123 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-8rkl4"]
Dec 12 15:23:20 crc kubenswrapper[5123]: I1212 15:23:20.216299 5123 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-8rkl4" podUID="e501d3fb-0bf6-4f90-bafb-521b5f6c8b9e" containerName="registry-server" containerID="cri-o://4157de456a6422c5986f61239ff216e3ee850e4f7ac535c66dc521c8a3bd3dd6" gracePeriod=2
Dec 12 15:23:21 crc kubenswrapper[5123]: I1212 15:23:21.153954 5123 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-rqdb6"]
Dec 12 15:23:21 crc kubenswrapper[5123]: I1212 15:23:21.154691 5123 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/community-operators-rqdb6" podUID="fb848e09-5c56-451f-a83b-d2e794432b47" containerName="registry-server" containerID="cri-o://2d952e519ba41bdb60267067cfd94ca88342fbfd5a2bfda54f3ae64f2d76b39b" gracePeriod=2
Dec 12 15:23:21 crc kubenswrapper[5123]: I1212 15:23:21.230803 5123 generic.go:358] "Generic (PLEG): container finished" podID="c772c7c7-2e1a-46a6-9b7d-e07aa2522d56" containerID="e033fb96dfd5dae3340ee0a11a0a89e1693a1941427d6956150b8c62c9446774" exitCode=0
Dec 12 15:23:21 crc kubenswrapper[5123]: I1212 15:23:21.230905 5123 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-d4cwn" event={"ID":"c772c7c7-2e1a-46a6-9b7d-e07aa2522d56","Type":"ContainerDied","Data":"e033fb96dfd5dae3340ee0a11a0a89e1693a1941427d6956150b8c62c9446774"}
Dec 12 15:23:22 crc kubenswrapper[5123]: I1212 15:23:22.157036 5123 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-d4cwn"
Dec 12 15:23:22 crc kubenswrapper[5123]: I1212 15:23:22.241264 5123 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c772c7c7-2e1a-46a6-9b7d-e07aa2522d56-utilities\") pod \"c772c7c7-2e1a-46a6-9b7d-e07aa2522d56\" (UID: \"c772c7c7-2e1a-46a6-9b7d-e07aa2522d56\") "
Dec 12 15:23:22 crc kubenswrapper[5123]: I1212 15:23:22.241463 5123 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-45lwq\" (UniqueName: \"kubernetes.io/projected/c772c7c7-2e1a-46a6-9b7d-e07aa2522d56-kube-api-access-45lwq\") pod \"c772c7c7-2e1a-46a6-9b7d-e07aa2522d56\" (UID: \"c772c7c7-2e1a-46a6-9b7d-e07aa2522d56\") "
Dec 12 15:23:22 crc kubenswrapper[5123]: I1212 15:23:22.241583 5123 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c772c7c7-2e1a-46a6-9b7d-e07aa2522d56-catalog-content\") pod \"c772c7c7-2e1a-46a6-9b7d-e07aa2522d56\" (UID: \"c772c7c7-2e1a-46a6-9b7d-e07aa2522d56\") "
Dec 12 15:23:22 crc kubenswrapper[5123]: I1212 15:23:22.243084 5123 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c772c7c7-2e1a-46a6-9b7d-e07aa2522d56-utilities" (OuterVolumeSpecName: "utilities") pod "c772c7c7-2e1a-46a6-9b7d-e07aa2522d56" (UID: "c772c7c7-2e1a-46a6-9b7d-e07aa2522d56"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Dec 12 15:23:22 crc kubenswrapper[5123]: I1212 15:23:22.304748 5123 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-d4cwn" event={"ID":"c772c7c7-2e1a-46a6-9b7d-e07aa2522d56","Type":"ContainerDied","Data":"0a26bab4e793dea4776bc49dfa5b80efc381a66651e4316acd1b779023977569"}
Dec 12 15:23:22 crc kubenswrapper[5123]: I1212 15:23:22.304851 5123 scope.go:117] "RemoveContainer" containerID="e033fb96dfd5dae3340ee0a11a0a89e1693a1941427d6956150b8c62c9446774"
Dec 12 15:23:22 crc kubenswrapper[5123]: I1212 15:23:22.305065 5123 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-d4cwn"
Dec 12 15:23:22 crc kubenswrapper[5123]: I1212 15:23:22.314879 5123 generic.go:358] "Generic (PLEG): container finished" podID="e501d3fb-0bf6-4f90-bafb-521b5f6c8b9e" containerID="4157de456a6422c5986f61239ff216e3ee850e4f7ac535c66dc521c8a3bd3dd6" exitCode=0
Dec 12 15:23:22 crc kubenswrapper[5123]: I1212 15:23:22.315023 5123 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8rkl4" event={"ID":"e501d3fb-0bf6-4f90-bafb-521b5f6c8b9e","Type":"ContainerDied","Data":"4157de456a6422c5986f61239ff216e3ee850e4f7ac535c66dc521c8a3bd3dd6"}
Dec 12 15:23:22 crc kubenswrapper[5123]: I1212 15:23:22.409584 5123 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c772c7c7-2e1a-46a6-9b7d-e07aa2522d56-utilities\") on node \"crc\" DevicePath \"\""
Dec 12 15:23:22 crc kubenswrapper[5123]: I1212 15:23:22.554409 5123 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c772c7c7-2e1a-46a6-9b7d-e07aa2522d56-kube-api-access-45lwq" (OuterVolumeSpecName: "kube-api-access-45lwq") pod "c772c7c7-2e1a-46a6-9b7d-e07aa2522d56" (UID: "c772c7c7-2e1a-46a6-9b7d-e07aa2522d56"). InnerVolumeSpecName "kube-api-access-45lwq". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Dec 12 15:23:22 crc kubenswrapper[5123]: I1212 15:23:22.575023 5123 scope.go:117] "RemoveContainer" containerID="ae80e3d77527f320397e045bb37bbe940cbb6dcccd786c6ae233e1b7ab83ecab"
Dec 12 15:23:22 crc kubenswrapper[5123]: I1212 15:23:22.625990 5123 scope.go:117] "RemoveContainer" containerID="99ab9f695d43d6110d72eff516a314cdc7a95bed2698616a7362090622b380d5"
Dec 12 15:23:22 crc kubenswrapper[5123]: I1212 15:23:22.655491 5123 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-45lwq\" (UniqueName: \"kubernetes.io/projected/c772c7c7-2e1a-46a6-9b7d-e07aa2522d56-kube-api-access-45lwq\") on node \"crc\" DevicePath \"\""
Dec 12 15:23:22 crc kubenswrapper[5123]: I1212 15:23:22.770565 5123 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8rkl4"
Dec 12 15:23:22 crc kubenswrapper[5123]: I1212 15:23:22.858340 5123 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e501d3fb-0bf6-4f90-bafb-521b5f6c8b9e-utilities\") pod \"e501d3fb-0bf6-4f90-bafb-521b5f6c8b9e\" (UID: \"e501d3fb-0bf6-4f90-bafb-521b5f6c8b9e\") "
Dec 12 15:23:22 crc kubenswrapper[5123]: I1212 15:23:22.858447 5123 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zqtfc\" (UniqueName: \"kubernetes.io/projected/e501d3fb-0bf6-4f90-bafb-521b5f6c8b9e-kube-api-access-zqtfc\") pod \"e501d3fb-0bf6-4f90-bafb-521b5f6c8b9e\" (UID: \"e501d3fb-0bf6-4f90-bafb-521b5f6c8b9e\") "
Dec 12 15:23:22 crc kubenswrapper[5123]: I1212 15:23:22.858521 5123 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e501d3fb-0bf6-4f90-bafb-521b5f6c8b9e-catalog-content\") pod \"e501d3fb-0bf6-4f90-bafb-521b5f6c8b9e\" (UID: \"e501d3fb-0bf6-4f90-bafb-521b5f6c8b9e\") "
Dec 12 15:23:22 crc kubenswrapper[5123]: I1212 15:23:22.862796 5123 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e501d3fb-0bf6-4f90-bafb-521b5f6c8b9e-utilities" (OuterVolumeSpecName: "utilities") pod "e501d3fb-0bf6-4f90-bafb-521b5f6c8b9e" (UID: "e501d3fb-0bf6-4f90-bafb-521b5f6c8b9e"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Dec 12 15:23:22 crc kubenswrapper[5123]: I1212 15:23:22.872069 5123 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e501d3fb-0bf6-4f90-bafb-521b5f6c8b9e-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "e501d3fb-0bf6-4f90-bafb-521b5f6c8b9e" (UID: "e501d3fb-0bf6-4f90-bafb-521b5f6c8b9e"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Dec 12 15:23:22 crc kubenswrapper[5123]: I1212 15:23:22.910903 5123 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e501d3fb-0bf6-4f90-bafb-521b5f6c8b9e-kube-api-access-zqtfc" (OuterVolumeSpecName: "kube-api-access-zqtfc") pod "e501d3fb-0bf6-4f90-bafb-521b5f6c8b9e" (UID: "e501d3fb-0bf6-4f90-bafb-521b5f6c8b9e"). InnerVolumeSpecName "kube-api-access-zqtfc". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Dec 12 15:23:22 crc kubenswrapper[5123]: I1212 15:23:22.961053 5123 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-zqtfc\" (UniqueName: \"kubernetes.io/projected/e501d3fb-0bf6-4f90-bafb-521b5f6c8b9e-kube-api-access-zqtfc\") on node \"crc\" DevicePath \"\""
Dec 12 15:23:22 crc kubenswrapper[5123]: I1212 15:23:22.961107 5123 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e501d3fb-0bf6-4f90-bafb-521b5f6c8b9e-catalog-content\") on node \"crc\" DevicePath \"\""
Dec 12 15:23:22 crc kubenswrapper[5123]: I1212 15:23:22.961124 5123 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e501d3fb-0bf6-4f90-bafb-521b5f6c8b9e-utilities\") on node \"crc\" DevicePath \"\""
Dec 12 15:23:23 crc kubenswrapper[5123]: I1212 15:23:23.324910 5123 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8rkl4"
Dec 12 15:23:23 crc kubenswrapper[5123]: I1212 15:23:23.324910 5123 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8rkl4" event={"ID":"e501d3fb-0bf6-4f90-bafb-521b5f6c8b9e","Type":"ContainerDied","Data":"2d57bc3c969d150a1e42c66401272e08ec36a042f14f42c2448d401aa89747ea"}
Dec 12 15:23:23 crc kubenswrapper[5123]: I1212 15:23:23.325069 5123 scope.go:117] "RemoveContainer" containerID="4157de456a6422c5986f61239ff216e3ee850e4f7ac535c66dc521c8a3bd3dd6"
Dec 12 15:23:23 crc kubenswrapper[5123]: I1212 15:23:23.343602 5123 generic.go:358] "Generic (PLEG): container finished" podID="fb848e09-5c56-451f-a83b-d2e794432b47" containerID="2d952e519ba41bdb60267067cfd94ca88342fbfd5a2bfda54f3ae64f2d76b39b" exitCode=0
Dec 12 15:23:23 crc kubenswrapper[5123]: I1212 15:23:23.343757 5123 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rqdb6" event={"ID":"fb848e09-5c56-451f-a83b-d2e794432b47","Type":"ContainerDied","Data":"2d952e519ba41bdb60267067cfd94ca88342fbfd5a2bfda54f3ae64f2d76b39b"}
Dec 12 15:23:23 crc kubenswrapper[5123]: I1212 15:23:23.356283 5123 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-8rkl4"]
Dec 12 15:23:23 crc kubenswrapper[5123]: I1212 15:23:23.359850 5123 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-8rkl4"]
Dec 12 15:23:23 crc kubenswrapper[5123]: I1212 15:23:23.457691 5123 scope.go:117] "RemoveContainer" containerID="8837f0b3cc20380deb38e08eda5e23b508a5ad0f56c716bc8dfc0aa2f663c249"
Dec 12 15:23:23 crc kubenswrapper[5123]: I1212 15:23:23.509116 5123 scope.go:117] "RemoveContainer" containerID="a99beb32dbeae741e5318257f071a1ec2230f26f292b5b0702ed34040521923d"
Dec 12 15:23:23 crc kubenswrapper[5123]: I1212 15:23:23.535385 5123 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-rqdb6"
Dec 12 15:23:23 crc kubenswrapper[5123]: I1212 15:23:23.584408 5123 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ql4jg\" (UniqueName: \"kubernetes.io/projected/fb848e09-5c56-451f-a83b-d2e794432b47-kube-api-access-ql4jg\") pod \"fb848e09-5c56-451f-a83b-d2e794432b47\" (UID: \"fb848e09-5c56-451f-a83b-d2e794432b47\") "
Dec 12 15:23:23 crc kubenswrapper[5123]: I1212 15:23:23.584578 5123 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fb848e09-5c56-451f-a83b-d2e794432b47-utilities\") pod \"fb848e09-5c56-451f-a83b-d2e794432b47\" (UID: \"fb848e09-5c56-451f-a83b-d2e794432b47\") "
Dec 12 15:23:23 crc kubenswrapper[5123]: I1212 15:23:23.584623 5123 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fb848e09-5c56-451f-a83b-d2e794432b47-catalog-content\") pod \"fb848e09-5c56-451f-a83b-d2e794432b47\" (UID: \"fb848e09-5c56-451f-a83b-d2e794432b47\") "
Dec 12 15:23:23 crc kubenswrapper[5123]: I1212 15:23:23.586315 5123 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fb848e09-5c56-451f-a83b-d2e794432b47-utilities" (OuterVolumeSpecName: "utilities") pod "fb848e09-5c56-451f-a83b-d2e794432b47" (UID: "fb848e09-5c56-451f-a83b-d2e794432b47"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Dec 12 15:23:23 crc kubenswrapper[5123]: I1212 15:23:23.590800 5123 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fb848e09-5c56-451f-a83b-d2e794432b47-kube-api-access-ql4jg" (OuterVolumeSpecName: "kube-api-access-ql4jg") pod "fb848e09-5c56-451f-a83b-d2e794432b47" (UID: "fb848e09-5c56-451f-a83b-d2e794432b47"). InnerVolumeSpecName "kube-api-access-ql4jg".
PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 12 15:23:23 crc kubenswrapper[5123]: I1212 15:23:23.647000 5123 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e501d3fb-0bf6-4f90-bafb-521b5f6c8b9e" path="/var/lib/kubelet/pods/e501d3fb-0bf6-4f90-bafb-521b5f6c8b9e/volumes" Dec 12 15:23:23 crc kubenswrapper[5123]: I1212 15:23:23.686437 5123 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-ql4jg\" (UniqueName: \"kubernetes.io/projected/fb848e09-5c56-451f-a83b-d2e794432b47-kube-api-access-ql4jg\") on node \"crc\" DevicePath \"\"" Dec 12 15:23:23 crc kubenswrapper[5123]: I1212 15:23:23.686489 5123 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fb848e09-5c56-451f-a83b-d2e794432b47-utilities\") on node \"crc\" DevicePath \"\"" Dec 12 15:23:23 crc kubenswrapper[5123]: I1212 15:23:23.996574 5123 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c772c7c7-2e1a-46a6-9b7d-e07aa2522d56-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "c772c7c7-2e1a-46a6-9b7d-e07aa2522d56" (UID: "c772c7c7-2e1a-46a6-9b7d-e07aa2522d56"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 12 15:23:24 crc kubenswrapper[5123]: I1212 15:23:24.017717 5123 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fb848e09-5c56-451f-a83b-d2e794432b47-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "fb848e09-5c56-451f-a83b-d2e794432b47" (UID: "fb848e09-5c56-451f-a83b-d2e794432b47"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 12 15:23:24 crc kubenswrapper[5123]: I1212 15:23:24.092987 5123 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c772c7c7-2e1a-46a6-9b7d-e07aa2522d56-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 12 15:23:24 crc kubenswrapper[5123]: I1212 15:23:24.093028 5123 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fb848e09-5c56-451f-a83b-d2e794432b47-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 12 15:23:24 crc kubenswrapper[5123]: I1212 15:23:24.139560 5123 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-d4cwn"] Dec 12 15:23:24 crc kubenswrapper[5123]: I1212 15:23:24.145659 5123 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-d4cwn"] Dec 12 15:23:24 crc kubenswrapper[5123]: I1212 15:23:24.357273 5123 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rqdb6" event={"ID":"fb848e09-5c56-451f-a83b-d2e794432b47","Type":"ContainerDied","Data":"57e35175250b638d7ffb6699dc5d86241d085fa475cd04fa14368ed90243bcca"} Dec 12 15:23:24 crc kubenswrapper[5123]: I1212 15:23:24.357344 5123 scope.go:117] "RemoveContainer" containerID="2d952e519ba41bdb60267067cfd94ca88342fbfd5a2bfda54f3ae64f2d76b39b" Dec 12 15:23:24 crc kubenswrapper[5123]: I1212 15:23:24.357490 5123 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-rqdb6" Dec 12 15:23:24 crc kubenswrapper[5123]: I1212 15:23:24.384190 5123 scope.go:117] "RemoveContainer" containerID="a908cc8c6181698a68068ba9c81b79d7aebb3baa0de66d0fd4bb4f3b2d783250" Dec 12 15:23:24 crc kubenswrapper[5123]: I1212 15:23:24.390153 5123 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-rqdb6"] Dec 12 15:23:24 crc kubenswrapper[5123]: I1212 15:23:24.394309 5123 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-rqdb6"] Dec 12 15:23:24 crc kubenswrapper[5123]: I1212 15:23:24.414635 5123 scope.go:117] "RemoveContainer" containerID="9873a93e06932c15ba7fcee0b9942311dee2e9920c3ff8c6411ec103090044c0" Dec 12 15:23:24 crc kubenswrapper[5123]: I1212 15:23:24.566781 5123 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-sbt5r" Dec 12 15:23:24 crc kubenswrapper[5123]: I1212 15:23:24.617320 5123 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-sbt5r" Dec 12 15:23:25 crc kubenswrapper[5123]: I1212 15:23:25.659549 5123 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c772c7c7-2e1a-46a6-9b7d-e07aa2522d56" path="/var/lib/kubelet/pods/c772c7c7-2e1a-46a6-9b7d-e07aa2522d56/volumes" Dec 12 15:23:25 crc kubenswrapper[5123]: I1212 15:23:25.660704 5123 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fb848e09-5c56-451f-a83b-d2e794432b47" path="/var/lib/kubelet/pods/fb848e09-5c56-451f-a83b-d2e794432b47/volumes" Dec 12 15:23:26 crc kubenswrapper[5123]: I1212 15:23:26.295413 5123 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/redhat-operators-gbdrq" Dec 12 15:23:26 crc kubenswrapper[5123]: I1212 15:23:26.295479 5123 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" 
pod="openshift-marketplace/redhat-operators-gbdrq" Dec 12 15:23:26 crc kubenswrapper[5123]: I1212 15:23:26.337170 5123 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-gbdrq" Dec 12 15:23:26 crc kubenswrapper[5123]: I1212 15:23:26.423540 5123 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-gbdrq" Dec 12 15:23:27 crc kubenswrapper[5123]: I1212 15:23:27.800017 5123 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-pkqnl" Dec 12 15:23:27 crc kubenswrapper[5123]: I1212 15:23:27.842152 5123 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-pkqnl" Dec 12 15:23:29 crc kubenswrapper[5123]: I1212 15:23:29.244654 5123 patch_prober.go:28] interesting pod/downloads-747b44746d-xhd9t container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.21:8080/\": dial tcp 10.217.0.21:8080: connect: connection refused" start-of-body= Dec 12 15:23:29 crc kubenswrapper[5123]: I1212 15:23:29.245374 5123 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-747b44746d-xhd9t" podUID="09107a60-87da-4e17-9cc0-6dce06396ab6" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.21:8080/\": dial tcp 10.217.0.21:8080: connect: connection refused" Dec 12 15:23:29 crc kubenswrapper[5123]: I1212 15:23:29.245458 5123 kubelet.go:2658] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-console/downloads-747b44746d-xhd9t" Dec 12 15:23:29 crc kubenswrapper[5123]: I1212 15:23:29.246442 5123 kuberuntime_manager.go:1107] "Message for Container of pod" containerName="download-server" containerStatusID={"Type":"cri-o","ID":"1cb7ed849b0b323ebbde3ae601d9377223b207f869b11bb3be930864ab7d45cb"} 
pod="openshift-console/downloads-747b44746d-xhd9t" containerMessage="Container download-server failed liveness probe, will be restarted" Dec 12 15:23:29 crc kubenswrapper[5123]: I1212 15:23:29.246488 5123 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-console/downloads-747b44746d-xhd9t" podUID="09107a60-87da-4e17-9cc0-6dce06396ab6" containerName="download-server" containerID="cri-o://1cb7ed849b0b323ebbde3ae601d9377223b207f869b11bb3be930864ab7d45cb" gracePeriod=2 Dec 12 15:23:29 crc kubenswrapper[5123]: I1212 15:23:29.247837 5123 patch_prober.go:28] interesting pod/downloads-747b44746d-xhd9t container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.21:8080/\": dial tcp 10.217.0.21:8080: connect: connection refused" start-of-body= Dec 12 15:23:29 crc kubenswrapper[5123]: I1212 15:23:29.247874 5123 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-747b44746d-xhd9t" podUID="09107a60-87da-4e17-9cc0-6dce06396ab6" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.21:8080/\": dial tcp 10.217.0.21:8080: connect: connection refused" Dec 12 15:23:29 crc kubenswrapper[5123]: I1212 15:23:29.968947 5123 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-66587d64c8-ts2mt" Dec 12 15:23:30 crc kubenswrapper[5123]: I1212 15:23:30.531060 5123 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-gbdrq"] Dec 12 15:23:30 crc kubenswrapper[5123]: I1212 15:23:30.532317 5123 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-gbdrq" podUID="a077f03f-9a73-4019-912b-e2ebdf5308a5" containerName="registry-server" containerID="cri-o://8718bebb6bf009e7a8b5fa121fb7e6e2a87a73f30ad42a88445f4a9921822805" gracePeriod=2 Dec 12 15:23:30 crc kubenswrapper[5123]: I1212 
15:23:30.902993 5123 patch_prober.go:28] interesting pod/machine-config-daemon-cs4j6 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 12 15:23:30 crc kubenswrapper[5123]: I1212 15:23:30.903104 5123 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-cs4j6" podUID="cc1dd55e-ab31-4be3-acf1-ef01a53c1bb4" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 12 15:23:31 crc kubenswrapper[5123]: I1212 15:23:31.412348 5123 generic.go:358] "Generic (PLEG): container finished" podID="a077f03f-9a73-4019-912b-e2ebdf5308a5" containerID="8718bebb6bf009e7a8b5fa121fb7e6e2a87a73f30ad42a88445f4a9921822805" exitCode=0 Dec 12 15:23:31 crc kubenswrapper[5123]: I1212 15:23:31.412435 5123 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-gbdrq" event={"ID":"a077f03f-9a73-4019-912b-e2ebdf5308a5","Type":"ContainerDied","Data":"8718bebb6bf009e7a8b5fa121fb7e6e2a87a73f30ad42a88445f4a9921822805"} Dec 12 15:23:31 crc kubenswrapper[5123]: I1212 15:23:31.416129 5123 generic.go:358] "Generic (PLEG): container finished" podID="09107a60-87da-4e17-9cc0-6dce06396ab6" containerID="1cb7ed849b0b323ebbde3ae601d9377223b207f869b11bb3be930864ab7d45cb" exitCode=0 Dec 12 15:23:31 crc kubenswrapper[5123]: I1212 15:23:31.416195 5123 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-747b44746d-xhd9t" event={"ID":"09107a60-87da-4e17-9cc0-6dce06396ab6","Type":"ContainerDied","Data":"1cb7ed849b0b323ebbde3ae601d9377223b207f869b11bb3be930864ab7d45cb"} Dec 12 15:23:31 crc kubenswrapper[5123]: I1212 15:23:31.416250 5123 scope.go:117] "RemoveContainer" 
containerID="8c5617ac20c35a29d82dc82e0862bfeca794ccb2aa8a303b5c55da1094a7bf3b" Dec 12 15:23:32 crc kubenswrapper[5123]: I1212 15:23:32.278719 5123 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-gbdrq" Dec 12 15:23:32 crc kubenswrapper[5123]: I1212 15:23:32.395150 5123 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a077f03f-9a73-4019-912b-e2ebdf5308a5-catalog-content\") pod \"a077f03f-9a73-4019-912b-e2ebdf5308a5\" (UID: \"a077f03f-9a73-4019-912b-e2ebdf5308a5\") " Dec 12 15:23:32 crc kubenswrapper[5123]: I1212 15:23:32.395392 5123 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bkd85\" (UniqueName: \"kubernetes.io/projected/a077f03f-9a73-4019-912b-e2ebdf5308a5-kube-api-access-bkd85\") pod \"a077f03f-9a73-4019-912b-e2ebdf5308a5\" (UID: \"a077f03f-9a73-4019-912b-e2ebdf5308a5\") " Dec 12 15:23:32 crc kubenswrapper[5123]: I1212 15:23:32.395427 5123 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a077f03f-9a73-4019-912b-e2ebdf5308a5-utilities\") pod \"a077f03f-9a73-4019-912b-e2ebdf5308a5\" (UID: \"a077f03f-9a73-4019-912b-e2ebdf5308a5\") " Dec 12 15:23:32 crc kubenswrapper[5123]: I1212 15:23:32.397035 5123 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a077f03f-9a73-4019-912b-e2ebdf5308a5-utilities" (OuterVolumeSpecName: "utilities") pod "a077f03f-9a73-4019-912b-e2ebdf5308a5" (UID: "a077f03f-9a73-4019-912b-e2ebdf5308a5"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 12 15:23:32 crc kubenswrapper[5123]: I1212 15:23:32.406458 5123 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a077f03f-9a73-4019-912b-e2ebdf5308a5-kube-api-access-bkd85" (OuterVolumeSpecName: "kube-api-access-bkd85") pod "a077f03f-9a73-4019-912b-e2ebdf5308a5" (UID: "a077f03f-9a73-4019-912b-e2ebdf5308a5"). InnerVolumeSpecName "kube-api-access-bkd85". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 12 15:23:32 crc kubenswrapper[5123]: I1212 15:23:32.426174 5123 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-gbdrq" event={"ID":"a077f03f-9a73-4019-912b-e2ebdf5308a5","Type":"ContainerDied","Data":"c4edcf1dd0128f0a8def4d9b1e0f3faa94c554c18dbc4517cf0bd8202b55c09d"} Dec 12 15:23:32 crc kubenswrapper[5123]: I1212 15:23:32.426240 5123 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-gbdrq" Dec 12 15:23:32 crc kubenswrapper[5123]: I1212 15:23:32.497327 5123 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-bkd85\" (UniqueName: \"kubernetes.io/projected/a077f03f-9a73-4019-912b-e2ebdf5308a5-kube-api-access-bkd85\") on node \"crc\" DevicePath \"\"" Dec 12 15:23:32 crc kubenswrapper[5123]: I1212 15:23:32.497471 5123 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a077f03f-9a73-4019-912b-e2ebdf5308a5-utilities\") on node \"crc\" DevicePath \"\"" Dec 12 15:23:32 crc kubenswrapper[5123]: I1212 15:23:32.499190 5123 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a077f03f-9a73-4019-912b-e2ebdf5308a5-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "a077f03f-9a73-4019-912b-e2ebdf5308a5" (UID: "a077f03f-9a73-4019-912b-e2ebdf5308a5"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 12 15:23:32 crc kubenswrapper[5123]: I1212 15:23:32.599193 5123 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a077f03f-9a73-4019-912b-e2ebdf5308a5-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 12 15:23:32 crc kubenswrapper[5123]: I1212 15:23:32.671295 5123 scope.go:117] "RemoveContainer" containerID="8718bebb6bf009e7a8b5fa121fb7e6e2a87a73f30ad42a88445f4a9921822805" Dec 12 15:23:32 crc kubenswrapper[5123]: I1212 15:23:32.689408 5123 scope.go:117] "RemoveContainer" containerID="88b3d44364f7f615e45d147078aee3317b0b602004fbb28493f7947bfc434284" Dec 12 15:23:32 crc kubenswrapper[5123]: I1212 15:23:32.716253 5123 scope.go:117] "RemoveContainer" containerID="37117c5d3fd92e669520d96463181bb4124525ab951b1ea7731721953cfb212b" Dec 12 15:23:32 crc kubenswrapper[5123]: I1212 15:23:32.784994 5123 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-gbdrq"] Dec 12 15:23:32 crc kubenswrapper[5123]: I1212 15:23:32.792624 5123 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-gbdrq"] Dec 12 15:23:33 crc kubenswrapper[5123]: I1212 15:23:33.557391 5123 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-747b44746d-xhd9t" event={"ID":"09107a60-87da-4e17-9cc0-6dce06396ab6","Type":"ContainerStarted","Data":"bcd38dc7045f19bd0eb5520154fa7d899ca13d7c28b45554480f10e43e4a821e"} Dec 12 15:23:33 crc kubenswrapper[5123]: I1212 15:23:33.558182 5123 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-console/downloads-747b44746d-xhd9t" Dec 12 15:23:33 crc kubenswrapper[5123]: I1212 15:23:33.558332 5123 patch_prober.go:28] interesting pod/downloads-747b44746d-xhd9t container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.21:8080/\": dial tcp 
10.217.0.21:8080: connect: connection refused" start-of-body= Dec 12 15:23:33 crc kubenswrapper[5123]: I1212 15:23:33.558402 5123 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-747b44746d-xhd9t" podUID="09107a60-87da-4e17-9cc0-6dce06396ab6" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.21:8080/\": dial tcp 10.217.0.21:8080: connect: connection refused" Dec 12 15:23:33 crc kubenswrapper[5123]: I1212 15:23:33.650129 5123 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a077f03f-9a73-4019-912b-e2ebdf5308a5" path="/var/lib/kubelet/pods/a077f03f-9a73-4019-912b-e2ebdf5308a5/volumes" Dec 12 15:23:34 crc kubenswrapper[5123]: I1212 15:23:34.567385 5123 patch_prober.go:28] interesting pod/downloads-747b44746d-xhd9t container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.21:8080/\": dial tcp 10.217.0.21:8080: connect: connection refused" start-of-body= Dec 12 15:23:34 crc kubenswrapper[5123]: I1212 15:23:34.567782 5123 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-747b44746d-xhd9t" podUID="09107a60-87da-4e17-9cc0-6dce06396ab6" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.21:8080/\": dial tcp 10.217.0.21:8080: connect: connection refused" Dec 12 15:23:39 crc kubenswrapper[5123]: I1212 15:23:39.243155 5123 patch_prober.go:28] interesting pod/downloads-747b44746d-xhd9t container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.21:8080/\": dial tcp 10.217.0.21:8080: connect: connection refused" start-of-body= Dec 12 15:23:39 crc kubenswrapper[5123]: I1212 15:23:39.244164 5123 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-747b44746d-xhd9t" podUID="09107a60-87da-4e17-9cc0-6dce06396ab6" containerName="download-server" probeResult="failure" output="Get 
\"http://10.217.0.21:8080/\": dial tcp 10.217.0.21:8080: connect: connection refused" Dec 12 15:23:42 crc kubenswrapper[5123]: I1212 15:23:42.563097 5123 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-66458b6674-cqp44"] Dec 12 15:23:43 crc kubenswrapper[5123]: I1212 15:23:43.729526 5123 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-cnq9c container/router namespace/openshift-ingress: Readiness probe status=failure output="Get \"http://localhost:1936/healthz/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Dec 12 15:23:43 crc kubenswrapper[5123]: I1212 15:23:43.730922 5123 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-ingress/router-default-68cf44c8b8-cnq9c" podUID="dd669a9c-af5d-4084-bda4-81a455d4c281" containerName="router" probeResult="failure" output="Get \"http://localhost:1936/healthz/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 12 15:23:44 crc kubenswrapper[5123]: I1212 15:23:44.567761 5123 patch_prober.go:28] interesting pod/downloads-747b44746d-xhd9t container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.21:8080/\": dial tcp 10.217.0.21:8080: connect: connection refused" start-of-body= Dec 12 15:23:44 crc kubenswrapper[5123]: I1212 15:23:44.567864 5123 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-747b44746d-xhd9t" podUID="09107a60-87da-4e17-9cc0-6dce06396ab6" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.21:8080/\": dial tcp 10.217.0.21:8080: connect: connection refused" Dec 12 15:23:48 crc kubenswrapper[5123]: I1212 15:23:48.958594 5123 kubelet.go:2547] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Dec 12 15:23:48 crc kubenswrapper[5123]: I1212 15:23:48.959924 5123 kuberuntime_container.go:858] "Killing 
container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver" containerID="cri-o://6e07f3e2617540c10ad02b1eb35775776e35852dc555f67a34e81beeab3e64fc" gracePeriod=15 Dec 12 15:23:48 crc kubenswrapper[5123]: I1212 15:23:48.959987 5123 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-cert-regeneration-controller" containerID="cri-o://34bae6bd30c1db17488802318dfdb214ad97b12fba2bd2724522387be66bed83" gracePeriod=15 Dec 12 15:23:48 crc kubenswrapper[5123]: I1212 15:23:48.960013 5123 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-cert-syncer" containerID="cri-o://b9095a46d0255140f02bb4949f61fc5120a0d62ccb27ed3e9cb8ce5f430498d9" gracePeriod=15 Dec 12 15:23:48 crc kubenswrapper[5123]: I1212 15:23:48.960070 5123 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-insecure-readyz" containerID="cri-o://d331b50c6c609096973278d778919c9c6ac4e46695aa2e4779ca6f4805332301" gracePeriod=15 Dec 12 15:23:48 crc kubenswrapper[5123]: I1212 15:23:48.960156 5123 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" containerID="cri-o://5895dc0f3ce18a4637c2277717d6ad97d812bee9fefe694b5572bdcc78ae7e13" gracePeriod=15 Dec 12 15:23:48 crc kubenswrapper[5123]: I1212 15:23:48.960632 5123 kubelet.go:2537] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Dec 12 15:23:48 crc kubenswrapper[5123]: I1212 
15:23:48.961694 5123 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="c772c7c7-2e1a-46a6-9b7d-e07aa2522d56" containerName="extract-content" Dec 12 15:23:48 crc kubenswrapper[5123]: I1212 15:23:48.961725 5123 state_mem.go:107] "Deleted CPUSet assignment" podUID="c772c7c7-2e1a-46a6-9b7d-e07aa2522d56" containerName="extract-content" Dec 12 15:23:48 crc kubenswrapper[5123]: I1212 15:23:48.961741 5123 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="a077f03f-9a73-4019-912b-e2ebdf5308a5" containerName="extract-content" Dec 12 15:23:48 crc kubenswrapper[5123]: I1212 15:23:48.961753 5123 state_mem.go:107] "Deleted CPUSet assignment" podUID="a077f03f-9a73-4019-912b-e2ebdf5308a5" containerName="extract-content" Dec 12 15:23:48 crc kubenswrapper[5123]: I1212 15:23:48.961761 5123 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="e501d3fb-0bf6-4f90-bafb-521b5f6c8b9e" containerName="registry-server" Dec 12 15:23:48 crc kubenswrapper[5123]: I1212 15:23:48.961769 5123 state_mem.go:107] "Deleted CPUSet assignment" podUID="e501d3fb-0bf6-4f90-bafb-521b5f6c8b9e" containerName="registry-server" Dec 12 15:23:48 crc kubenswrapper[5123]: I1212 15:23:48.961785 5123 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="fb848e09-5c56-451f-a83b-d2e794432b47" containerName="extract-utilities" Dec 12 15:23:48 crc kubenswrapper[5123]: I1212 15:23:48.961793 5123 state_mem.go:107] "Deleted CPUSet assignment" podUID="fb848e09-5c56-451f-a83b-d2e794432b47" containerName="extract-utilities" Dec 12 15:23:48 crc kubenswrapper[5123]: I1212 15:23:48.961804 5123 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="c772c7c7-2e1a-46a6-9b7d-e07aa2522d56" containerName="registry-server" Dec 12 15:23:48 crc kubenswrapper[5123]: I1212 15:23:48.961811 5123 state_mem.go:107] "Deleted CPUSet assignment" podUID="c772c7c7-2e1a-46a6-9b7d-e07aa2522d56" 
containerName="registry-server" Dec 12 15:23:48 crc kubenswrapper[5123]: I1212 15:23:48.961823 5123 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-insecure-readyz" Dec 12 15:23:48 crc kubenswrapper[5123]: I1212 15:23:48.961831 5123 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-insecure-readyz" Dec 12 15:23:48 crc kubenswrapper[5123]: I1212 15:23:48.961840 5123 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" Dec 12 15:23:48 crc kubenswrapper[5123]: I1212 15:23:48.961846 5123 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" Dec 12 15:23:48 crc kubenswrapper[5123]: I1212 15:23:48.961860 5123 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" Dec 12 15:23:48 crc kubenswrapper[5123]: I1212 15:23:48.961866 5123 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" Dec 12 15:23:48 crc kubenswrapper[5123]: I1212 15:23:48.961876 5123 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="0fe093c7-c5cf-4e61-be4d-5c44545546d7" containerName="pruner" Dec 12 15:23:48 crc kubenswrapper[5123]: I1212 15:23:48.961890 5123 state_mem.go:107] "Deleted CPUSet assignment" podUID="0fe093c7-c5cf-4e61-be4d-5c44545546d7" containerName="pruner" Dec 12 15:23:48 crc kubenswrapper[5123]: I1212 15:23:48.961909 5123 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-cert-regeneration-controller" Dec 12 15:23:48 crc 
kubenswrapper[5123]: I1212 15:23:48.961923 5123 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-cert-regeneration-controller" Dec 12 15:23:48 crc kubenswrapper[5123]: I1212 15:23:48.961933 5123 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="a077f03f-9a73-4019-912b-e2ebdf5308a5" containerName="registry-server" Dec 12 15:23:48 crc kubenswrapper[5123]: I1212 15:23:48.961940 5123 state_mem.go:107] "Deleted CPUSet assignment" podUID="a077f03f-9a73-4019-912b-e2ebdf5308a5" containerName="registry-server" Dec 12 15:23:48 crc kubenswrapper[5123]: I1212 15:23:48.961950 5123 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" Dec 12 15:23:48 crc kubenswrapper[5123]: I1212 15:23:48.961957 5123 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" Dec 12 15:23:48 crc kubenswrapper[5123]: I1212 15:23:48.961971 5123 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="e501d3fb-0bf6-4f90-bafb-521b5f6c8b9e" containerName="extract-content" Dec 12 15:23:48 crc kubenswrapper[5123]: I1212 15:23:48.961977 5123 state_mem.go:107] "Deleted CPUSet assignment" podUID="e501d3fb-0bf6-4f90-bafb-521b5f6c8b9e" containerName="extract-content" Dec 12 15:23:48 crc kubenswrapper[5123]: I1212 15:23:48.961983 5123 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-cert-syncer" Dec 12 15:23:48 crc kubenswrapper[5123]: I1212 15:23:48.961990 5123 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-cert-syncer" Dec 12 15:23:48 crc kubenswrapper[5123]: I1212 15:23:48.962001 5123 cpu_manager.go:401] "RemoveStaleState: containerMap: 
removing container" podUID="a077f03f-9a73-4019-912b-e2ebdf5308a5" containerName="extract-utilities"
Dec 12 15:23:48 crc kubenswrapper[5123]: I1212 15:23:48.962009 5123 state_mem.go:107] "Deleted CPUSet assignment" podUID="a077f03f-9a73-4019-912b-e2ebdf5308a5" containerName="extract-utilities"
Dec 12 15:23:48 crc kubenswrapper[5123]: I1212 15:23:48.962031 5123 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="fb848e09-5c56-451f-a83b-d2e794432b47" containerName="extract-content"
Dec 12 15:23:48 crc kubenswrapper[5123]: I1212 15:23:48.962038 5123 state_mem.go:107] "Deleted CPUSet assignment" podUID="fb848e09-5c56-451f-a83b-d2e794432b47" containerName="extract-content"
Dec 12 15:23:48 crc kubenswrapper[5123]: I1212 15:23:48.962057 5123 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver"
Dec 12 15:23:48 crc kubenswrapper[5123]: I1212 15:23:48.962071 5123 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver"
Dec 12 15:23:48 crc kubenswrapper[5123]: I1212 15:23:48.962084 5123 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="c772c7c7-2e1a-46a6-9b7d-e07aa2522d56" containerName="extract-utilities"
Dec 12 15:23:48 crc kubenswrapper[5123]: I1212 15:23:48.962092 5123 state_mem.go:107] "Deleted CPUSet assignment" podUID="c772c7c7-2e1a-46a6-9b7d-e07aa2522d56" containerName="extract-utilities"
Dec 12 15:23:48 crc kubenswrapper[5123]: I1212 15:23:48.962101 5123 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="e501d3fb-0bf6-4f90-bafb-521b5f6c8b9e" containerName="extract-utilities"
Dec 12 15:23:48 crc kubenswrapper[5123]: I1212 15:23:48.962120 5123 state_mem.go:107] "Deleted CPUSet assignment" podUID="e501d3fb-0bf6-4f90-bafb-521b5f6c8b9e" containerName="extract-utilities"
Dec 12 15:23:48 crc kubenswrapper[5123]: I1212 15:23:48.962134 5123 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="fb848e09-5c56-451f-a83b-d2e794432b47" containerName="registry-server"
Dec 12 15:23:48 crc kubenswrapper[5123]: I1212 15:23:48.962141 5123 state_mem.go:107] "Deleted CPUSet assignment" podUID="fb848e09-5c56-451f-a83b-d2e794432b47" containerName="registry-server"
Dec 12 15:23:48 crc kubenswrapper[5123]: I1212 15:23:48.962152 5123 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="setup"
Dec 12 15:23:48 crc kubenswrapper[5123]: I1212 15:23:48.962158 5123 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="setup"
Dec 12 15:23:48 crc kubenswrapper[5123]: I1212 15:23:48.962322 5123 memory_manager.go:356] "RemoveStaleState removing state" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver"
Dec 12 15:23:48 crc kubenswrapper[5123]: I1212 15:23:48.962357 5123 memory_manager.go:356] "RemoveStaleState removing state" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints"
Dec 12 15:23:48 crc kubenswrapper[5123]: I1212 15:23:48.962366 5123 memory_manager.go:356] "RemoveStaleState removing state" podUID="a077f03f-9a73-4019-912b-e2ebdf5308a5" containerName="registry-server"
Dec 12 15:23:48 crc kubenswrapper[5123]: I1212 15:23:48.962377 5123 memory_manager.go:356] "RemoveStaleState removing state" podUID="c772c7c7-2e1a-46a6-9b7d-e07aa2522d56" containerName="registry-server"
Dec 12 15:23:48 crc kubenswrapper[5123]: I1212 15:23:48.962386 5123 memory_manager.go:356] "RemoveStaleState removing state" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-insecure-readyz"
Dec 12 15:23:48 crc kubenswrapper[5123]: I1212 15:23:48.962395 5123 memory_manager.go:356] "RemoveStaleState removing state" podUID="e501d3fb-0bf6-4f90-bafb-521b5f6c8b9e" containerName="registry-server"
Dec 12 15:23:48 crc kubenswrapper[5123]: I1212 15:23:48.962404 5123 memory_manager.go:356] "RemoveStaleState removing state" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints"
Dec 12 15:23:48 crc kubenswrapper[5123]: I1212 15:23:48.962411 5123 memory_manager.go:356] "RemoveStaleState removing state" podUID="fb848e09-5c56-451f-a83b-d2e794432b47" containerName="registry-server"
Dec 12 15:23:48 crc kubenswrapper[5123]: I1212 15:23:48.962420 5123 memory_manager.go:356] "RemoveStaleState removing state" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints"
Dec 12 15:23:48 crc kubenswrapper[5123]: I1212 15:23:48.962431 5123 memory_manager.go:356] "RemoveStaleState removing state" podUID="0fe093c7-c5cf-4e61-be4d-5c44545546d7" containerName="pruner"
Dec 12 15:23:48 crc kubenswrapper[5123]: I1212 15:23:48.962442 5123 memory_manager.go:356] "RemoveStaleState removing state" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-cert-regeneration-controller"
Dec 12 15:23:48 crc kubenswrapper[5123]: I1212 15:23:48.962451 5123 memory_manager.go:356] "RemoveStaleState removing state" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-cert-syncer"
Dec 12 15:23:48 crc kubenswrapper[5123]: I1212 15:23:48.962619 5123 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints"
Dec 12 15:23:48 crc kubenswrapper[5123]: I1212 15:23:48.962631 5123 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints"
Dec 12 15:23:48 crc kubenswrapper[5123]: I1212 15:23:48.962645 5123 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints"
Dec 12 15:23:48 crc kubenswrapper[5123]: I1212 15:23:48.962664 5123 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints"
Dec 12 15:23:48 crc kubenswrapper[5123]: I1212 15:23:48.962862 5123 memory_manager.go:356] "RemoveStaleState removing state" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints"
Dec 12 15:23:48 crc kubenswrapper[5123]: I1212 15:23:48.962894 5123 memory_manager.go:356] "RemoveStaleState removing state" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints"
Dec 12 15:23:48 crc kubenswrapper[5123]: I1212 15:23:48.975281 5123 kubelet.go:2537] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"]
Dec 12 15:23:48 crc kubenswrapper[5123]: I1212 15:23:48.984681 5123 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Dec 12 15:23:48 crc kubenswrapper[5123]: I1212 15:23:48.989876 5123 status_manager.go:905] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="3a14caf222afb62aaabdc47808b6f944" podUID="57755cc5f99000cc11e193051474d4e2"
Dec 12 15:23:49 crc kubenswrapper[5123]: I1212 15:23:49.028937 5123 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/57755cc5f99000cc11e193051474d4e2-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"57755cc5f99000cc11e193051474d4e2\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Dec 12 15:23:49 crc kubenswrapper[5123]: I1212 15:23:49.028996 5123 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/57755cc5f99000cc11e193051474d4e2-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"57755cc5f99000cc11e193051474d4e2\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Dec 12 15:23:49 crc kubenswrapper[5123]: I1212 15:23:49.029037 5123 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/57755cc5f99000cc11e193051474d4e2-tmp-dir\") pod \"kube-apiserver-crc\" (UID: \"57755cc5f99000cc11e193051474d4e2\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Dec 12 15:23:49 crc kubenswrapper[5123]: I1212 15:23:49.029089 5123 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Dec 12 15:23:49 crc kubenswrapper[5123]: I1212 15:23:49.029153 5123 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Dec 12 15:23:49 crc kubenswrapper[5123]: I1212 15:23:49.029197 5123 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Dec 12 15:23:49 crc kubenswrapper[5123]: I1212 15:23:49.029277 5123 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Dec 12 15:23:49 crc kubenswrapper[5123]: I1212 15:23:49.029302 5123 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Dec 12 15:23:49 crc kubenswrapper[5123]: I1212 15:23:49.029332 5123 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/57755cc5f99000cc11e193051474d4e2-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"57755cc5f99000cc11e193051474d4e2\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Dec 12 15:23:49 crc kubenswrapper[5123]: I1212 15:23:49.029544 5123 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-bundle-dir\" (UniqueName: \"kubernetes.io/empty-dir/57755cc5f99000cc11e193051474d4e2-ca-bundle-dir\") pod \"kube-apiserver-crc\" (UID: \"57755cc5f99000cc11e193051474d4e2\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Dec 12 15:23:49 crc kubenswrapper[5123]: I1212 15:23:49.031091 5123 kubelet.go:3340] "Creating a mirror pod for static pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Dec 12 15:23:49 crc kubenswrapper[5123]: I1212 15:23:49.131334 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ca-bundle-dir\" (UniqueName: \"kubernetes.io/empty-dir/57755cc5f99000cc11e193051474d4e2-ca-bundle-dir\") pod \"kube-apiserver-crc\" (UID: \"57755cc5f99000cc11e193051474d4e2\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Dec 12 15:23:49 crc kubenswrapper[5123]: I1212 15:23:49.131409 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/57755cc5f99000cc11e193051474d4e2-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"57755cc5f99000cc11e193051474d4e2\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Dec 12 15:23:49 crc kubenswrapper[5123]: I1212 15:23:49.131428 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/57755cc5f99000cc11e193051474d4e2-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"57755cc5f99000cc11e193051474d4e2\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Dec 12 15:23:49 crc kubenswrapper[5123]: I1212 15:23:49.131459 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/57755cc5f99000cc11e193051474d4e2-tmp-dir\") pod \"kube-apiserver-crc\" (UID: \"57755cc5f99000cc11e193051474d4e2\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Dec 12 15:23:49 crc kubenswrapper[5123]: I1212 15:23:49.131495 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Dec 12 15:23:49 crc kubenswrapper[5123]: I1212 15:23:49.131549 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Dec 12 15:23:49 crc kubenswrapper[5123]: I1212 15:23:49.131599 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Dec 12 15:23:49 crc kubenswrapper[5123]: I1212 15:23:49.131641 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Dec 12 15:23:49 crc kubenswrapper[5123]: I1212 15:23:49.131658 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Dec 12 15:23:49 crc kubenswrapper[5123]: I1212 15:23:49.131648 5123 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/57755cc5f99000cc11e193051474d4e2-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"57755cc5f99000cc11e193051474d4e2\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Dec 12 15:23:49 crc kubenswrapper[5123]: I1212 15:23:49.131689 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/57755cc5f99000cc11e193051474d4e2-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"57755cc5f99000cc11e193051474d4e2\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Dec 12 15:23:49 crc kubenswrapper[5123]: I1212 15:23:49.131654 5123 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Dec 12 15:23:49 crc kubenswrapper[5123]: I1212 15:23:49.131704 5123 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Dec 12 15:23:49 crc kubenswrapper[5123]: I1212 15:23:49.131648 5123 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Dec 12 15:23:49 crc kubenswrapper[5123]: I1212 15:23:49.131770 5123 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/57755cc5f99000cc11e193051474d4e2-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"57755cc5f99000cc11e193051474d4e2\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Dec 12 15:23:49 crc kubenswrapper[5123]: I1212 15:23:49.131798 5123 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/57755cc5f99000cc11e193051474d4e2-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"57755cc5f99000cc11e193051474d4e2\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Dec 12 15:23:49 crc kubenswrapper[5123]: I1212 15:23:49.131960 5123 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Dec 12 15:23:49 crc kubenswrapper[5123]: I1212 15:23:49.132010 5123 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Dec 12 15:23:49 crc kubenswrapper[5123]: I1212 15:23:49.132746 5123 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/57755cc5f99000cc11e193051474d4e2-tmp-dir\") pod \"kube-apiserver-crc\" (UID: \"57755cc5f99000cc11e193051474d4e2\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Dec 12 15:23:49 crc kubenswrapper[5123]: I1212 15:23:49.132827 5123 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ca-bundle-dir\" (UniqueName: \"kubernetes.io/empty-dir/57755cc5f99000cc11e193051474d4e2-ca-bundle-dir\") pod \"kube-apiserver-crc\" (UID: \"57755cc5f99000cc11e193051474d4e2\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Dec 12 15:23:49 crc kubenswrapper[5123]: I1212 15:23:49.244602 5123 patch_prober.go:28] interesting pod/downloads-747b44746d-xhd9t container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.21:8080/\": dial tcp 10.217.0.21:8080: connect: connection refused" start-of-body=
Dec 12 15:23:49 crc kubenswrapper[5123]: I1212 15:23:49.245022 5123 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-747b44746d-xhd9t" podUID="09107a60-87da-4e17-9cc0-6dce06396ab6" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.21:8080/\": dial tcp 10.217.0.21:8080: connect: connection refused"
Dec 12 15:23:49 crc kubenswrapper[5123]: I1212 15:23:49.834854 5123 generic.go:358] "Generic (PLEG): container finished" podID="688fe19e-6c1b-42c8-8245-da6b56af433f" containerID="b6659eb7ba57bf7fabad5d3481ec97cd40ed8cf64c70ce559e7474327d5a709f" exitCode=0
Dec 12 15:23:49 crc kubenswrapper[5123]: I1212 15:23:49.834965 5123 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-12-crc" event={"ID":"688fe19e-6c1b-42c8-8245-da6b56af433f","Type":"ContainerDied","Data":"b6659eb7ba57bf7fabad5d3481ec97cd40ed8cf64c70ce559e7474327d5a709f"}
Dec 12 15:23:49 crc kubenswrapper[5123]: I1212 15:23:49.835929 5123 status_manager.go:895] "Failed to get status for pod" podUID="688fe19e-6c1b-42c8-8245-da6b56af433f" pod="openshift-kube-apiserver/installer-12-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-12-crc\": dial tcp 38.102.83.234:6443: connect: connection refused"
Dec 12 15:23:49 crc kubenswrapper[5123]: I1212 15:23:49.838858 5123 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-check-endpoints/3.log"
Dec 12 15:23:49 crc kubenswrapper[5123]: I1212 15:23:49.840774 5123 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-cert-syncer/0.log"
Dec 12 15:23:49 crc kubenswrapper[5123]: I1212 15:23:49.843034 5123 generic.go:358] "Generic (PLEG): container finished" podID="3a14caf222afb62aaabdc47808b6f944" containerID="5895dc0f3ce18a4637c2277717d6ad97d812bee9fefe694b5572bdcc78ae7e13" exitCode=0
Dec 12 15:23:49 crc kubenswrapper[5123]: I1212 15:23:49.843055 5123 generic.go:358] "Generic (PLEG): container finished" podID="3a14caf222afb62aaabdc47808b6f944" containerID="d331b50c6c609096973278d778919c9c6ac4e46695aa2e4779ca6f4805332301" exitCode=0
Dec 12 15:23:49 crc kubenswrapper[5123]: I1212 15:23:49.843061 5123 generic.go:358] "Generic (PLEG): container finished" podID="3a14caf222afb62aaabdc47808b6f944" containerID="b9095a46d0255140f02bb4949f61fc5120a0d62ccb27ed3e9cb8ce5f430498d9" exitCode=2
Dec 12 15:23:49 crc kubenswrapper[5123]: I1212 15:23:49.843138 5123 scope.go:117] "RemoveContainer" containerID="4f213fed9087642e2d266cffcd6b09d79db89357a2e593aab2f1f5f5de1625db"
Dec 12 15:23:50 crc kubenswrapper[5123]: I1212 15:23:50.853271 5123 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-cert-syncer/0.log"
Dec 12 15:23:51 crc kubenswrapper[5123]: I1212 15:23:51.476883 5123 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-12-crc"
Dec 12 15:23:51 crc kubenswrapper[5123]: I1212 15:23:51.478856 5123 status_manager.go:895] "Failed to get status for pod" podUID="688fe19e-6c1b-42c8-8245-da6b56af433f" pod="openshift-kube-apiserver/installer-12-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-12-crc\": dial tcp 38.102.83.234:6443: connect: connection refused"
Dec 12 15:23:51 crc kubenswrapper[5123]: I1212 15:23:51.539245 5123 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/688fe19e-6c1b-42c8-8245-da6b56af433f-kube-api-access\") pod \"688fe19e-6c1b-42c8-8245-da6b56af433f\" (UID: \"688fe19e-6c1b-42c8-8245-da6b56af433f\") "
Dec 12 15:23:51 crc kubenswrapper[5123]: I1212 15:23:51.539344 5123 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/688fe19e-6c1b-42c8-8245-da6b56af433f-kubelet-dir\") pod \"688fe19e-6c1b-42c8-8245-da6b56af433f\" (UID: \"688fe19e-6c1b-42c8-8245-da6b56af433f\") "
Dec 12 15:23:51 crc kubenswrapper[5123]: I1212 15:23:51.539467 5123 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/688fe19e-6c1b-42c8-8245-da6b56af433f-var-lock\") pod \"688fe19e-6c1b-42c8-8245-da6b56af433f\" (UID: \"688fe19e-6c1b-42c8-8245-da6b56af433f\") "
Dec 12 15:23:51 crc kubenswrapper[5123]: I1212 15:23:51.539550 5123 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/688fe19e-6c1b-42c8-8245-da6b56af433f-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "688fe19e-6c1b-42c8-8245-da6b56af433f" (UID: "688fe19e-6c1b-42c8-8245-da6b56af433f"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Dec 12 15:23:51 crc kubenswrapper[5123]: I1212 15:23:51.539704 5123 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/688fe19e-6c1b-42c8-8245-da6b56af433f-var-lock" (OuterVolumeSpecName: "var-lock") pod "688fe19e-6c1b-42c8-8245-da6b56af433f" (UID: "688fe19e-6c1b-42c8-8245-da6b56af433f"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Dec 12 15:23:51 crc kubenswrapper[5123]: I1212 15:23:51.540111 5123 reconciler_common.go:299] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/688fe19e-6c1b-42c8-8245-da6b56af433f-kubelet-dir\") on node \"crc\" DevicePath \"\""
Dec 12 15:23:51 crc kubenswrapper[5123]: I1212 15:23:51.540137 5123 reconciler_common.go:299] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/688fe19e-6c1b-42c8-8245-da6b56af433f-var-lock\") on node \"crc\" DevicePath \"\""
Dec 12 15:23:51 crc kubenswrapper[5123]: I1212 15:23:51.548584 5123 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/688fe19e-6c1b-42c8-8245-da6b56af433f-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "688fe19e-6c1b-42c8-8245-da6b56af433f" (UID: "688fe19e-6c1b-42c8-8245-da6b56af433f"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Dec 12 15:23:51 crc kubenswrapper[5123]: I1212 15:23:51.641331 5123 reconciler_common.go:299] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/688fe19e-6c1b-42c8-8245-da6b56af433f-kube-api-access\") on node \"crc\" DevicePath \"\""
Dec 12 15:23:51 crc kubenswrapper[5123]: I1212 15:23:51.644886 5123 status_manager.go:895] "Failed to get status for pod" podUID="688fe19e-6c1b-42c8-8245-da6b56af433f" pod="openshift-kube-apiserver/installer-12-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-12-crc\": dial tcp 38.102.83.234:6443: connect: connection refused"
Dec 12 15:23:51 crc kubenswrapper[5123]: I1212 15:23:51.863850 5123 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-12-crc" event={"ID":"688fe19e-6c1b-42c8-8245-da6b56af433f","Type":"ContainerDied","Data":"7378fde10932bed8068b6b45ea17ba4b6b24565a4520b141df436583ec8aab77"}
Dec 12 15:23:51 crc kubenswrapper[5123]: I1212 15:23:51.863921 5123 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7378fde10932bed8068b6b45ea17ba4b6b24565a4520b141df436583ec8aab77"
Dec 12 15:23:51 crc kubenswrapper[5123]: I1212 15:23:51.863933 5123 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-12-crc"
Dec 12 15:23:51 crc kubenswrapper[5123]: I1212 15:23:51.867431 5123 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-cert-syncer/0.log"
Dec 12 15:23:51 crc kubenswrapper[5123]: I1212 15:23:51.868688 5123 status_manager.go:895] "Failed to get status for pod" podUID="688fe19e-6c1b-42c8-8245-da6b56af433f" pod="openshift-kube-apiserver/installer-12-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-12-crc\": dial tcp 38.102.83.234:6443: connect: connection refused"
Dec 12 15:23:51 crc kubenswrapper[5123]: I1212 15:23:51.868732 5123 generic.go:358] "Generic (PLEG): container finished" podID="3a14caf222afb62aaabdc47808b6f944" containerID="6e07f3e2617540c10ad02b1eb35775776e35852dc555f67a34e81beeab3e64fc" exitCode=0
Dec 12 15:23:54 crc kubenswrapper[5123]: E1212 15:23:54.034944 5123 kubelet.go:3342] "Failed creating a mirror pod" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods\": dial tcp 38.102.83.234:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Dec 12 15:23:54 crc kubenswrapper[5123]: I1212 15:23:54.036284 5123 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Dec 12 15:23:54 crc kubenswrapper[5123]: E1212 15:23:54.071846 5123 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.102.83.234:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.1880812153e8bb26 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f7dbc7e1ee9c187a863ef9b473fad27b,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-12 15:23:54.070899494 +0000 UTC m=+262.880852005,LastTimestamp:2025-12-12 15:23:54.070899494 +0000 UTC m=+262.880852005,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Dec 12 15:23:54 crc kubenswrapper[5123]: I1212 15:23:54.583374 5123 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/downloads-747b44746d-xhd9t"
Dec 12 15:23:54 crc kubenswrapper[5123]: I1212 15:23:54.584344 5123 status_manager.go:895] "Failed to get status for pod" podUID="688fe19e-6c1b-42c8-8245-da6b56af433f" pod="openshift-kube-apiserver/installer-12-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-12-crc\": dial tcp 38.102.83.234:6443: connect: connection refused"
Dec 12 15:23:54 crc kubenswrapper[5123]: I1212 15:23:54.584746 5123 status_manager.go:895] "Failed to get status for pod" podUID="09107a60-87da-4e17-9cc0-6dce06396ab6" pod="openshift-console/downloads-747b44746d-xhd9t" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console/pods/downloads-747b44746d-xhd9t\": dial tcp 38.102.83.234:6443: connect: connection refused"
Dec 12 15:23:54 crc kubenswrapper[5123]: I1212 15:23:54.887512 5123 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f7dbc7e1ee9c187a863ef9b473fad27b","Type":"ContainerStarted","Data":"fb01ee586fca3c9b11b49556b6ec62c91e67b651383dc7405484472289be5882"}
Dec 12 15:23:55 crc kubenswrapper[5123]: I1212 15:23:55.356349 5123 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-cert-syncer/0.log"
Dec 12 15:23:55 crc kubenswrapper[5123]: I1212 15:23:55.358454 5123 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc"
Dec 12 15:23:55 crc kubenswrapper[5123]: I1212 15:23:55.361483 5123 status_manager.go:895] "Failed to get status for pod" podUID="3a14caf222afb62aaabdc47808b6f944" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.234:6443: connect: connection refused"
Dec 12 15:23:55 crc kubenswrapper[5123]: I1212 15:23:55.361955 5123 status_manager.go:895] "Failed to get status for pod" podUID="688fe19e-6c1b-42c8-8245-da6b56af433f" pod="openshift-kube-apiserver/installer-12-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-12-crc\": dial tcp 38.102.83.234:6443: connect: connection refused"
Dec 12 15:23:55 crc kubenswrapper[5123]: I1212 15:23:55.362324 5123 status_manager.go:895] "Failed to get status for pod" podUID="09107a60-87da-4e17-9cc0-6dce06396ab6" pod="openshift-console/downloads-747b44746d-xhd9t" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console/pods/downloads-747b44746d-xhd9t\": dial tcp 38.102.83.234:6443: connect: connection refused"
Dec 12 15:23:55 crc kubenswrapper[5123]: I1212 15:23:55.495783 5123 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-audit-dir\") pod \"3a14caf222afb62aaabdc47808b6f944\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") "
Dec 12 15:23:55 crc kubenswrapper[5123]: I1212 15:23:55.495898 5123 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ca-bundle-dir\" (UniqueName: \"kubernetes.io/empty-dir/3a14caf222afb62aaabdc47808b6f944-ca-bundle-dir\") pod \"3a14caf222afb62aaabdc47808b6f944\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") "
Dec 12 15:23:55 crc kubenswrapper[5123]: I1212 15:23:55.495928 5123 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/3a14caf222afb62aaabdc47808b6f944-tmp-dir\") pod \"3a14caf222afb62aaabdc47808b6f944\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") "
Dec 12 15:23:55 crc kubenswrapper[5123]: I1212 15:23:55.496001 5123 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-resource-dir\") pod \"3a14caf222afb62aaabdc47808b6f944\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") "
Dec 12 15:23:55 crc kubenswrapper[5123]: I1212 15:23:55.496105 5123 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-cert-dir\") pod \"3a14caf222afb62aaabdc47808b6f944\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") "
Dec 12 15:23:55 crc kubenswrapper[5123]: I1212 15:23:55.496437 5123 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-cert-dir" (OuterVolumeSpecName: "cert-dir") pod "3a14caf222afb62aaabdc47808b6f944" (UID: "3a14caf222afb62aaabdc47808b6f944"). InnerVolumeSpecName "cert-dir". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Dec 12 15:23:55 crc kubenswrapper[5123]: I1212 15:23:55.496472 5123 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "3a14caf222afb62aaabdc47808b6f944" (UID: "3a14caf222afb62aaabdc47808b6f944"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Dec 12 15:23:55 crc kubenswrapper[5123]: I1212 15:23:55.497083 5123 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3a14caf222afb62aaabdc47808b6f944-ca-bundle-dir" (OuterVolumeSpecName: "ca-bundle-dir") pod "3a14caf222afb62aaabdc47808b6f944" (UID: "3a14caf222afb62aaabdc47808b6f944"). InnerVolumeSpecName "ca-bundle-dir". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Dec 12 15:23:55 crc kubenswrapper[5123]: I1212 15:23:55.497621 5123 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "3a14caf222afb62aaabdc47808b6f944" (UID: "3a14caf222afb62aaabdc47808b6f944"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Dec 12 15:23:55 crc kubenswrapper[5123]: I1212 15:23:55.498857 5123 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3a14caf222afb62aaabdc47808b6f944-tmp-dir" (OuterVolumeSpecName: "tmp-dir") pod "3a14caf222afb62aaabdc47808b6f944" (UID: "3a14caf222afb62aaabdc47808b6f944"). InnerVolumeSpecName "tmp-dir". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Dec 12 15:23:55 crc kubenswrapper[5123]: I1212 15:23:55.597425 5123 reconciler_common.go:299] "Volume detached for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-cert-dir\") on node \"crc\" DevicePath \"\""
Dec 12 15:23:55 crc kubenswrapper[5123]: I1212 15:23:55.597479 5123 reconciler_common.go:299] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-audit-dir\") on node \"crc\" DevicePath \"\""
Dec 12 15:23:55 crc kubenswrapper[5123]: I1212 15:23:55.597494 5123 reconciler_common.go:299] "Volume detached for volume \"ca-bundle-dir\" (UniqueName: \"kubernetes.io/empty-dir/3a14caf222afb62aaabdc47808b6f944-ca-bundle-dir\") on node \"crc\" DevicePath \"\""
Dec 12 15:23:55 crc kubenswrapper[5123]: I1212 15:23:55.597513 5123 reconciler_common.go:299] "Volume detached for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/3a14caf222afb62aaabdc47808b6f944-tmp-dir\") on node \"crc\" DevicePath \"\""
Dec 12 15:23:55 crc kubenswrapper[5123]: I1212 15:23:55.597530 5123 reconciler_common.go:299] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-resource-dir\") on node \"crc\" DevicePath \"\""
Dec 12 15:23:55 crc kubenswrapper[5123]: I1212 15:23:55.649904 5123 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3a14caf222afb62aaabdc47808b6f944" path="/var/lib/kubelet/pods/3a14caf222afb62aaabdc47808b6f944/volumes"
Dec 12 15:23:55 crc kubenswrapper[5123]: I1212 15:23:55.897524 5123 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f7dbc7e1ee9c187a863ef9b473fad27b","Type":"ContainerStarted","Data":"ec33bbd478f147a0ec76b35aa7b3e74bbf17d650016c0bdc01211bfc29ea828d"}
Dec 12 15:23:55 crc kubenswrapper[5123]: I1212 15:23:55.897855 5123 kubelet.go:3340]
"Creating a mirror pod for static pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 12 15:23:55 crc kubenswrapper[5123]: E1212 15:23:55.898701 5123 kubelet.go:3342] "Failed creating a mirror pod" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods\": dial tcp 38.102.83.234:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 12 15:23:55 crc kubenswrapper[5123]: I1212 15:23:55.899568 5123 status_manager.go:895] "Failed to get status for pod" podUID="688fe19e-6c1b-42c8-8245-da6b56af433f" pod="openshift-kube-apiserver/installer-12-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-12-crc\": dial tcp 38.102.83.234:6443: connect: connection refused" Dec 12 15:23:55 crc kubenswrapper[5123]: I1212 15:23:55.900687 5123 status_manager.go:895] "Failed to get status for pod" podUID="09107a60-87da-4e17-9cc0-6dce06396ab6" pod="openshift-console/downloads-747b44746d-xhd9t" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console/pods/downloads-747b44746d-xhd9t\": dial tcp 38.102.83.234:6443: connect: connection refused" Dec 12 15:23:55 crc kubenswrapper[5123]: I1212 15:23:55.902016 5123 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-cert-syncer/0.log" Dec 12 15:23:55 crc kubenswrapper[5123]: I1212 15:23:55.903237 5123 generic.go:358] "Generic (PLEG): container finished" podID="3a14caf222afb62aaabdc47808b6f944" containerID="34bae6bd30c1db17488802318dfdb214ad97b12fba2bd2724522387be66bed83" exitCode=0 Dec 12 15:23:55 crc kubenswrapper[5123]: I1212 15:23:55.903394 5123 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 12 15:23:55 crc kubenswrapper[5123]: I1212 15:23:55.903414 5123 scope.go:117] "RemoveContainer" containerID="5895dc0f3ce18a4637c2277717d6ad97d812bee9fefe694b5572bdcc78ae7e13" Dec 12 15:23:55 crc kubenswrapper[5123]: I1212 15:23:55.904737 5123 status_manager.go:895] "Failed to get status for pod" podUID="3a14caf222afb62aaabdc47808b6f944" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.234:6443: connect: connection refused" Dec 12 15:23:55 crc kubenswrapper[5123]: I1212 15:23:55.905541 5123 status_manager.go:895] "Failed to get status for pod" podUID="688fe19e-6c1b-42c8-8245-da6b56af433f" pod="openshift-kube-apiserver/installer-12-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-12-crc\": dial tcp 38.102.83.234:6443: connect: connection refused" Dec 12 15:23:55 crc kubenswrapper[5123]: I1212 15:23:55.906372 5123 status_manager.go:895] "Failed to get status for pod" podUID="09107a60-87da-4e17-9cc0-6dce06396ab6" pod="openshift-console/downloads-747b44746d-xhd9t" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console/pods/downloads-747b44746d-xhd9t\": dial tcp 38.102.83.234:6443: connect: connection refused" Dec 12 15:23:55 crc kubenswrapper[5123]: I1212 15:23:55.908382 5123 status_manager.go:895] "Failed to get status for pod" podUID="688fe19e-6c1b-42c8-8245-da6b56af433f" pod="openshift-kube-apiserver/installer-12-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-12-crc\": dial tcp 38.102.83.234:6443: connect: connection refused" Dec 12 15:23:55 crc kubenswrapper[5123]: I1212 15:23:55.908932 5123 status_manager.go:895] "Failed to get status for pod" podUID="09107a60-87da-4e17-9cc0-6dce06396ab6" 
pod="openshift-console/downloads-747b44746d-xhd9t" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console/pods/downloads-747b44746d-xhd9t\": dial tcp 38.102.83.234:6443: connect: connection refused" Dec 12 15:23:55 crc kubenswrapper[5123]: I1212 15:23:55.909496 5123 status_manager.go:895] "Failed to get status for pod" podUID="3a14caf222afb62aaabdc47808b6f944" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.234:6443: connect: connection refused" Dec 12 15:23:55 crc kubenswrapper[5123]: I1212 15:23:55.920412 5123 scope.go:117] "RemoveContainer" containerID="d331b50c6c609096973278d778919c9c6ac4e46695aa2e4779ca6f4805332301" Dec 12 15:23:55 crc kubenswrapper[5123]: I1212 15:23:55.938585 5123 scope.go:117] "RemoveContainer" containerID="34bae6bd30c1db17488802318dfdb214ad97b12fba2bd2724522387be66bed83" Dec 12 15:23:55 crc kubenswrapper[5123]: I1212 15:23:55.957367 5123 scope.go:117] "RemoveContainer" containerID="b9095a46d0255140f02bb4949f61fc5120a0d62ccb27ed3e9cb8ce5f430498d9" Dec 12 15:23:55 crc kubenswrapper[5123]: I1212 15:23:55.977356 5123 scope.go:117] "RemoveContainer" containerID="6e07f3e2617540c10ad02b1eb35775776e35852dc555f67a34e81beeab3e64fc" Dec 12 15:23:55 crc kubenswrapper[5123]: I1212 15:23:55.996244 5123 scope.go:117] "RemoveContainer" containerID="40beb854e478d0f51e4640f477258c6ef53c632e71391168c8c052a47bc2a0c6" Dec 12 15:23:56 crc kubenswrapper[5123]: I1212 15:23:56.082657 5123 scope.go:117] "RemoveContainer" containerID="5895dc0f3ce18a4637c2277717d6ad97d812bee9fefe694b5572bdcc78ae7e13" Dec 12 15:23:56 crc kubenswrapper[5123]: E1212 15:23:56.083557 5123 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5895dc0f3ce18a4637c2277717d6ad97d812bee9fefe694b5572bdcc78ae7e13\": container with ID starting with 
5895dc0f3ce18a4637c2277717d6ad97d812bee9fefe694b5572bdcc78ae7e13 not found: ID does not exist" containerID="5895dc0f3ce18a4637c2277717d6ad97d812bee9fefe694b5572bdcc78ae7e13" Dec 12 15:23:56 crc kubenswrapper[5123]: I1212 15:23:56.083631 5123 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5895dc0f3ce18a4637c2277717d6ad97d812bee9fefe694b5572bdcc78ae7e13"} err="failed to get container status \"5895dc0f3ce18a4637c2277717d6ad97d812bee9fefe694b5572bdcc78ae7e13\": rpc error: code = NotFound desc = could not find container \"5895dc0f3ce18a4637c2277717d6ad97d812bee9fefe694b5572bdcc78ae7e13\": container with ID starting with 5895dc0f3ce18a4637c2277717d6ad97d812bee9fefe694b5572bdcc78ae7e13 not found: ID does not exist" Dec 12 15:23:56 crc kubenswrapper[5123]: I1212 15:23:56.083667 5123 scope.go:117] "RemoveContainer" containerID="d331b50c6c609096973278d778919c9c6ac4e46695aa2e4779ca6f4805332301" Dec 12 15:23:56 crc kubenswrapper[5123]: E1212 15:23:56.084191 5123 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d331b50c6c609096973278d778919c9c6ac4e46695aa2e4779ca6f4805332301\": container with ID starting with d331b50c6c609096973278d778919c9c6ac4e46695aa2e4779ca6f4805332301 not found: ID does not exist" containerID="d331b50c6c609096973278d778919c9c6ac4e46695aa2e4779ca6f4805332301" Dec 12 15:23:56 crc kubenswrapper[5123]: I1212 15:23:56.084239 5123 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d331b50c6c609096973278d778919c9c6ac4e46695aa2e4779ca6f4805332301"} err="failed to get container status \"d331b50c6c609096973278d778919c9c6ac4e46695aa2e4779ca6f4805332301\": rpc error: code = NotFound desc = could not find container \"d331b50c6c609096973278d778919c9c6ac4e46695aa2e4779ca6f4805332301\": container with ID starting with d331b50c6c609096973278d778919c9c6ac4e46695aa2e4779ca6f4805332301 not found: ID does not 
exist" Dec 12 15:23:56 crc kubenswrapper[5123]: I1212 15:23:56.084263 5123 scope.go:117] "RemoveContainer" containerID="34bae6bd30c1db17488802318dfdb214ad97b12fba2bd2724522387be66bed83" Dec 12 15:23:56 crc kubenswrapper[5123]: E1212 15:23:56.084686 5123 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"34bae6bd30c1db17488802318dfdb214ad97b12fba2bd2724522387be66bed83\": container with ID starting with 34bae6bd30c1db17488802318dfdb214ad97b12fba2bd2724522387be66bed83 not found: ID does not exist" containerID="34bae6bd30c1db17488802318dfdb214ad97b12fba2bd2724522387be66bed83" Dec 12 15:23:56 crc kubenswrapper[5123]: I1212 15:23:56.084710 5123 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"34bae6bd30c1db17488802318dfdb214ad97b12fba2bd2724522387be66bed83"} err="failed to get container status \"34bae6bd30c1db17488802318dfdb214ad97b12fba2bd2724522387be66bed83\": rpc error: code = NotFound desc = could not find container \"34bae6bd30c1db17488802318dfdb214ad97b12fba2bd2724522387be66bed83\": container with ID starting with 34bae6bd30c1db17488802318dfdb214ad97b12fba2bd2724522387be66bed83 not found: ID does not exist" Dec 12 15:23:56 crc kubenswrapper[5123]: I1212 15:23:56.084726 5123 scope.go:117] "RemoveContainer" containerID="b9095a46d0255140f02bb4949f61fc5120a0d62ccb27ed3e9cb8ce5f430498d9" Dec 12 15:23:56 crc kubenswrapper[5123]: E1212 15:23:56.084999 5123 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b9095a46d0255140f02bb4949f61fc5120a0d62ccb27ed3e9cb8ce5f430498d9\": container with ID starting with b9095a46d0255140f02bb4949f61fc5120a0d62ccb27ed3e9cb8ce5f430498d9 not found: ID does not exist" containerID="b9095a46d0255140f02bb4949f61fc5120a0d62ccb27ed3e9cb8ce5f430498d9" Dec 12 15:23:56 crc kubenswrapper[5123]: I1212 15:23:56.085028 5123 pod_container_deletor.go:53] 
"DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b9095a46d0255140f02bb4949f61fc5120a0d62ccb27ed3e9cb8ce5f430498d9"} err="failed to get container status \"b9095a46d0255140f02bb4949f61fc5120a0d62ccb27ed3e9cb8ce5f430498d9\": rpc error: code = NotFound desc = could not find container \"b9095a46d0255140f02bb4949f61fc5120a0d62ccb27ed3e9cb8ce5f430498d9\": container with ID starting with b9095a46d0255140f02bb4949f61fc5120a0d62ccb27ed3e9cb8ce5f430498d9 not found: ID does not exist" Dec 12 15:23:56 crc kubenswrapper[5123]: I1212 15:23:56.085044 5123 scope.go:117] "RemoveContainer" containerID="6e07f3e2617540c10ad02b1eb35775776e35852dc555f67a34e81beeab3e64fc" Dec 12 15:23:56 crc kubenswrapper[5123]: E1212 15:23:56.085474 5123 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6e07f3e2617540c10ad02b1eb35775776e35852dc555f67a34e81beeab3e64fc\": container with ID starting with 6e07f3e2617540c10ad02b1eb35775776e35852dc555f67a34e81beeab3e64fc not found: ID does not exist" containerID="6e07f3e2617540c10ad02b1eb35775776e35852dc555f67a34e81beeab3e64fc" Dec 12 15:23:56 crc kubenswrapper[5123]: I1212 15:23:56.085503 5123 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6e07f3e2617540c10ad02b1eb35775776e35852dc555f67a34e81beeab3e64fc"} err="failed to get container status \"6e07f3e2617540c10ad02b1eb35775776e35852dc555f67a34e81beeab3e64fc\": rpc error: code = NotFound desc = could not find container \"6e07f3e2617540c10ad02b1eb35775776e35852dc555f67a34e81beeab3e64fc\": container with ID starting with 6e07f3e2617540c10ad02b1eb35775776e35852dc555f67a34e81beeab3e64fc not found: ID does not exist" Dec 12 15:23:56 crc kubenswrapper[5123]: I1212 15:23:56.085517 5123 scope.go:117] "RemoveContainer" containerID="40beb854e478d0f51e4640f477258c6ef53c632e71391168c8c052a47bc2a0c6" Dec 12 15:23:56 crc kubenswrapper[5123]: E1212 15:23:56.085731 5123 log.go:32] 
"ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"40beb854e478d0f51e4640f477258c6ef53c632e71391168c8c052a47bc2a0c6\": container with ID starting with 40beb854e478d0f51e4640f477258c6ef53c632e71391168c8c052a47bc2a0c6 not found: ID does not exist" containerID="40beb854e478d0f51e4640f477258c6ef53c632e71391168c8c052a47bc2a0c6" Dec 12 15:23:56 crc kubenswrapper[5123]: I1212 15:23:56.085759 5123 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"40beb854e478d0f51e4640f477258c6ef53c632e71391168c8c052a47bc2a0c6"} err="failed to get container status \"40beb854e478d0f51e4640f477258c6ef53c632e71391168c8c052a47bc2a0c6\": rpc error: code = NotFound desc = could not find container \"40beb854e478d0f51e4640f477258c6ef53c632e71391168c8c052a47bc2a0c6\": container with ID starting with 40beb854e478d0f51e4640f477258c6ef53c632e71391168c8c052a47bc2a0c6 not found: ID does not exist" Dec 12 15:23:56 crc kubenswrapper[5123]: I1212 15:23:56.910736 5123 kubelet.go:3340] "Creating a mirror pod for static pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 12 15:23:56 crc kubenswrapper[5123]: E1212 15:23:56.911310 5123 kubelet.go:3342] "Failed creating a mirror pod" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods\": dial tcp 38.102.83.234:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 12 15:23:58 crc kubenswrapper[5123]: E1212 15:23:58.933601 5123 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.234:6443: connect: connection refused" Dec 12 15:23:58 crc kubenswrapper[5123]: E1212 15:23:58.934169 5123 controller.go:195] "Failed to update lease" err="Put 
\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.234:6443: connect: connection refused" Dec 12 15:23:58 crc kubenswrapper[5123]: E1212 15:23:58.934454 5123 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.234:6443: connect: connection refused" Dec 12 15:23:58 crc kubenswrapper[5123]: E1212 15:23:58.934830 5123 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.234:6443: connect: connection refused" Dec 12 15:23:58 crc kubenswrapper[5123]: E1212 15:23:58.935480 5123 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.234:6443: connect: connection refused" Dec 12 15:23:58 crc kubenswrapper[5123]: I1212 15:23:58.935551 5123 controller.go:115] "failed to update lease using latest lease, fallback to ensure lease" err="failed 5 attempts to update lease" Dec 12 15:23:58 crc kubenswrapper[5123]: E1212 15:23:58.936089 5123 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.234:6443: connect: connection refused" interval="200ms" Dec 12 15:23:59 crc kubenswrapper[5123]: E1212 15:23:59.137107 5123 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.234:6443: connect: connection refused" interval="400ms" Dec 12 15:23:59 crc kubenswrapper[5123]: E1212 15:23:59.605820 5123 
controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.234:6443: connect: connection refused" interval="800ms" Dec 12 15:24:00 crc kubenswrapper[5123]: E1212 15:24:00.407168 5123 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.234:6443: connect: connection refused" interval="1.6s" Dec 12 15:24:00 crc kubenswrapper[5123]: I1212 15:24:00.902486 5123 patch_prober.go:28] interesting pod/machine-config-daemon-cs4j6 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 12 15:24:00 crc kubenswrapper[5123]: I1212 15:24:00.902641 5123 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-cs4j6" podUID="cc1dd55e-ab31-4be3-acf1-ef01a53c1bb4" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 12 15:24:01 crc kubenswrapper[5123]: E1212 15:24:01.071260 5123 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.102.83.234:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.1880812153e8bb26 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f7dbc7e1ee9c187a863ef9b473fad27b,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-12 15:23:54.070899494 +0000 UTC m=+262.880852005,LastTimestamp:2025-12-12 15:23:54.070899494 +0000 UTC m=+262.880852005,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 12 15:24:01 crc kubenswrapper[5123]: I1212 15:24:01.646672 5123 status_manager.go:895] "Failed to get status for pod" podUID="688fe19e-6c1b-42c8-8245-da6b56af433f" pod="openshift-kube-apiserver/installer-12-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-12-crc\": dial tcp 38.102.83.234:6443: connect: connection refused" Dec 12 15:24:01 crc kubenswrapper[5123]: I1212 15:24:01.648113 5123 status_manager.go:895] "Failed to get status for pod" podUID="09107a60-87da-4e17-9cc0-6dce06396ab6" pod="openshift-console/downloads-747b44746d-xhd9t" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console/pods/downloads-747b44746d-xhd9t\": dial tcp 38.102.83.234:6443: connect: connection refused" Dec 12 15:24:02 crc kubenswrapper[5123]: E1212 15:24:02.009066 5123 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.234:6443: connect: connection refused" interval="3.2s" Dec 12 15:24:02 crc kubenswrapper[5123]: I1212 15:24:02.639435 5123 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 12 15:24:02 crc kubenswrapper[5123]: I1212 15:24:02.640738 5123 status_manager.go:895] "Failed to get status for pod" podUID="688fe19e-6c1b-42c8-8245-da6b56af433f" pod="openshift-kube-apiserver/installer-12-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-12-crc\": dial tcp 38.102.83.234:6443: connect: connection refused" Dec 12 15:24:02 crc kubenswrapper[5123]: I1212 15:24:02.641256 5123 status_manager.go:895] "Failed to get status for pod" podUID="09107a60-87da-4e17-9cc0-6dce06396ab6" pod="openshift-console/downloads-747b44746d-xhd9t" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console/pods/downloads-747b44746d-xhd9t\": dial tcp 38.102.83.234:6443: connect: connection refused" Dec 12 15:24:02 crc kubenswrapper[5123]: I1212 15:24:02.663738 5123 kubelet.go:3323] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="96b4a286-31bb-42a1-934a-56ea0da8024a" Dec 12 15:24:02 crc kubenswrapper[5123]: I1212 15:24:02.664678 5123 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="96b4a286-31bb-42a1-934a-56ea0da8024a" Dec 12 15:24:02 crc kubenswrapper[5123]: E1212 15:24:02.665605 5123 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.234:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 12 15:24:02 crc kubenswrapper[5123]: I1212 15:24:02.666100 5123 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 12 15:24:02 crc kubenswrapper[5123]: I1212 15:24:02.957189 5123 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"57755cc5f99000cc11e193051474d4e2","Type":"ContainerStarted","Data":"150b46e9bca5a4b16c2654ecb82f2a7711730596ad8a3699cf4e4aa577e8cf30"} Dec 12 15:24:02 crc kubenswrapper[5123]: I1212 15:24:02.961963 5123 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_9f0bc7fcb0822a2c13eb2d22cd8c0641/kube-controller-manager/0.log" Dec 12 15:24:02 crc kubenswrapper[5123]: I1212 15:24:02.962036 5123 generic.go:358] "Generic (PLEG): container finished" podID="9f0bc7fcb0822a2c13eb2d22cd8c0641" containerID="90a36cde8f0155fd7e784fe62e8b6855d9e6067713b30d29b277dd7bc9506b03" exitCode=1 Dec 12 15:24:02 crc kubenswrapper[5123]: I1212 15:24:02.962159 5123 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"9f0bc7fcb0822a2c13eb2d22cd8c0641","Type":"ContainerDied","Data":"90a36cde8f0155fd7e784fe62e8b6855d9e6067713b30d29b277dd7bc9506b03"} Dec 12 15:24:02 crc kubenswrapper[5123]: I1212 15:24:02.963377 5123 scope.go:117] "RemoveContainer" containerID="90a36cde8f0155fd7e784fe62e8b6855d9e6067713b30d29b277dd7bc9506b03" Dec 12 15:24:02 crc kubenswrapper[5123]: I1212 15:24:02.963806 5123 status_manager.go:895] "Failed to get status for pod" podUID="688fe19e-6c1b-42c8-8245-da6b56af433f" pod="openshift-kube-apiserver/installer-12-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-12-crc\": dial tcp 38.102.83.234:6443: connect: connection refused" Dec 12 15:24:02 crc kubenswrapper[5123]: I1212 15:24:02.964293 5123 status_manager.go:895] "Failed to get status for pod" podUID="09107a60-87da-4e17-9cc0-6dce06396ab6" 
pod="openshift-console/downloads-747b44746d-xhd9t" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console/pods/downloads-747b44746d-xhd9t\": dial tcp 38.102.83.234:6443: connect: connection refused" Dec 12 15:24:02 crc kubenswrapper[5123]: I1212 15:24:02.964553 5123 status_manager.go:895] "Failed to get status for pod" podUID="9f0bc7fcb0822a2c13eb2d22cd8c0641" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.234:6443: connect: connection refused" Dec 12 15:24:03 crc kubenswrapper[5123]: I1212 15:24:03.848036 5123 kubelet.go:2658] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 12 15:24:03 crc kubenswrapper[5123]: I1212 15:24:03.975744 5123 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_9f0bc7fcb0822a2c13eb2d22cd8c0641/kube-controller-manager/0.log" Dec 12 15:24:03 crc kubenswrapper[5123]: I1212 15:24:03.976207 5123 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"9f0bc7fcb0822a2c13eb2d22cd8c0641","Type":"ContainerStarted","Data":"22aa6072aba1750d66bb0a5d4051c955dedeced26d406e899a590be1fa71a0ea"} Dec 12 15:24:03 crc kubenswrapper[5123]: I1212 15:24:03.977968 5123 status_manager.go:895] "Failed to get status for pod" podUID="688fe19e-6c1b-42c8-8245-da6b56af433f" pod="openshift-kube-apiserver/installer-12-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-12-crc\": dial tcp 38.102.83.234:6443: connect: connection refused" Dec 12 15:24:03 crc kubenswrapper[5123]: I1212 15:24:03.978391 5123 status_manager.go:895] "Failed to get status for pod" podUID="09107a60-87da-4e17-9cc0-6dce06396ab6" 
pod="openshift-console/downloads-747b44746d-xhd9t" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console/pods/downloads-747b44746d-xhd9t\": dial tcp 38.102.83.234:6443: connect: connection refused" Dec 12 15:24:03 crc kubenswrapper[5123]: I1212 15:24:03.978651 5123 generic.go:358] "Generic (PLEG): container finished" podID="57755cc5f99000cc11e193051474d4e2" containerID="7b11e0b3081bd29d4a4b55668baa694189b1f2c760dd711f167b14b97ca33827" exitCode=0 Dec 12 15:24:03 crc kubenswrapper[5123]: I1212 15:24:03.978740 5123 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"57755cc5f99000cc11e193051474d4e2","Type":"ContainerDied","Data":"7b11e0b3081bd29d4a4b55668baa694189b1f2c760dd711f167b14b97ca33827"} Dec 12 15:24:03 crc kubenswrapper[5123]: I1212 15:24:03.978999 5123 status_manager.go:895] "Failed to get status for pod" podUID="9f0bc7fcb0822a2c13eb2d22cd8c0641" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.234:6443: connect: connection refused" Dec 12 15:24:03 crc kubenswrapper[5123]: I1212 15:24:03.979291 5123 kubelet.go:3323] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="96b4a286-31bb-42a1-934a-56ea0da8024a" Dec 12 15:24:03 crc kubenswrapper[5123]: I1212 15:24:03.979337 5123 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="96b4a286-31bb-42a1-934a-56ea0da8024a" Dec 12 15:24:03 crc kubenswrapper[5123]: E1212 15:24:03.979747 5123 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.234:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 12 15:24:03 crc 
kubenswrapper[5123]: I1212 15:24:03.979755 5123 status_manager.go:895] "Failed to get status for pod" podUID="688fe19e-6c1b-42c8-8245-da6b56af433f" pod="openshift-kube-apiserver/installer-12-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-12-crc\": dial tcp 38.102.83.234:6443: connect: connection refused" Dec 12 15:24:03 crc kubenswrapper[5123]: I1212 15:24:03.980154 5123 status_manager.go:895] "Failed to get status for pod" podUID="09107a60-87da-4e17-9cc0-6dce06396ab6" pod="openshift-console/downloads-747b44746d-xhd9t" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console/pods/downloads-747b44746d-xhd9t\": dial tcp 38.102.83.234:6443: connect: connection refused" Dec 12 15:24:03 crc kubenswrapper[5123]: I1212 15:24:03.980447 5123 status_manager.go:895] "Failed to get status for pod" podUID="9f0bc7fcb0822a2c13eb2d22cd8c0641" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.234:6443: connect: connection refused" Dec 12 15:24:04 crc kubenswrapper[5123]: I1212 15:24:04.990726 5123 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"57755cc5f99000cc11e193051474d4e2","Type":"ContainerStarted","Data":"c55376d95063fc290b01bb3068dd343e4cf1b0648d984a6a1621b07408427405"} Dec 12 15:24:04 crc kubenswrapper[5123]: I1212 15:24:04.991365 5123 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"57755cc5f99000cc11e193051474d4e2","Type":"ContainerStarted","Data":"15595dd25f770fecd8395a0776e57c1627fa655b2811b1d5d1b34d3d93e668d9"} Dec 12 15:24:06 crc kubenswrapper[5123]: I1212 15:24:06.016403 5123 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" 
event={"ID":"57755cc5f99000cc11e193051474d4e2","Type":"ContainerStarted","Data":"23bd2abcc09f0af5ce6aa7140f7c30c51877533b445c4de2434a8bf2db48c24f"} Dec 12 15:24:06 crc kubenswrapper[5123]: I1212 15:24:06.016859 5123 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"57755cc5f99000cc11e193051474d4e2","Type":"ContainerStarted","Data":"6e5dcecbf3a2736afeca3db26ca893a6b0ad3b557832196402070876f0f72159"} Dec 12 15:24:06 crc kubenswrapper[5123]: I1212 15:24:06.016878 5123 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"57755cc5f99000cc11e193051474d4e2","Type":"ContainerStarted","Data":"6ab7dd9915607d62011bc5f3a4a11c2232829ee764ea12a3115072c44b390967"} Dec 12 15:24:06 crc kubenswrapper[5123]: I1212 15:24:06.017422 5123 kubelet.go:3323] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="96b4a286-31bb-42a1-934a-56ea0da8024a" Dec 12 15:24:06 crc kubenswrapper[5123]: I1212 15:24:06.017447 5123 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="96b4a286-31bb-42a1-934a-56ea0da8024a" Dec 12 15:24:06 crc kubenswrapper[5123]: I1212 15:24:06.017929 5123 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 12 15:24:07 crc kubenswrapper[5123]: I1212 15:24:07.603610 5123 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-authentication/oauth-openshift-66458b6674-cqp44" podUID="c4465de2-5e85-451d-a998-dcff71c6d37c" containerName="oauth-openshift" containerID="cri-o://3a8e1ad4787b4dbc70707975a2240d26e7c4aa17123bfc16f3743df7363f2c36" gracePeriod=15 Dec 12 15:24:07 crc kubenswrapper[5123]: I1212 15:24:07.666917 5123 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 12 15:24:07 crc kubenswrapper[5123]: 
I1212 15:24:07.667373 5123 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 12 15:24:07 crc kubenswrapper[5123]: I1212 15:24:07.678277 5123 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Dec 12 15:24:07 crc kubenswrapper[5123]: [+]log ok Dec 12 15:24:07 crc kubenswrapper[5123]: [+]etcd ok Dec 12 15:24:07 crc kubenswrapper[5123]: [+]poststarthook/openshift.io-oauth-apiserver-reachable ok Dec 12 15:24:07 crc kubenswrapper[5123]: [+]poststarthook/start-apiserver-admission-initializer ok Dec 12 15:24:07 crc kubenswrapper[5123]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok Dec 12 15:24:07 crc kubenswrapper[5123]: [+]poststarthook/openshift.io-api-request-count-filter ok Dec 12 15:24:07 crc kubenswrapper[5123]: [+]poststarthook/openshift.io-startkubeinformers ok Dec 12 15:24:07 crc kubenswrapper[5123]: [+]poststarthook/openshift.io-openshift-apiserver-reachable ok Dec 12 15:24:07 crc kubenswrapper[5123]: [+]poststarthook/generic-apiserver-start-informers ok Dec 12 15:24:07 crc kubenswrapper[5123]: [+]poststarthook/priority-and-fairness-config-consumer ok Dec 12 15:24:07 crc kubenswrapper[5123]: [+]poststarthook/priority-and-fairness-filter ok Dec 12 15:24:07 crc kubenswrapper[5123]: [+]poststarthook/storage-object-count-tracker-hook ok Dec 12 15:24:07 crc kubenswrapper[5123]: [+]poststarthook/start-apiextensions-informers ok Dec 12 15:24:07 crc kubenswrapper[5123]: [+]poststarthook/start-apiextensions-controllers ok Dec 12 15:24:07 crc kubenswrapper[5123]: [+]poststarthook/crd-informer-synced ok Dec 12 15:24:07 crc kubenswrapper[5123]: [+]poststarthook/start-system-namespaces-controller ok Dec 12 15:24:07 crc kubenswrapper[5123]: [+]poststarthook/start-cluster-authentication-info-controller ok Dec 12 15:24:07 crc 
kubenswrapper[5123]: [+]poststarthook/start-kube-apiserver-identity-lease-controller ok Dec 12 15:24:07 crc kubenswrapper[5123]: [+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok Dec 12 15:24:07 crc kubenswrapper[5123]: [+]poststarthook/start-legacy-token-tracking-controller ok Dec 12 15:24:07 crc kubenswrapper[5123]: [+]poststarthook/start-service-ip-repair-controllers ok Dec 12 15:24:07 crc kubenswrapper[5123]: [-]poststarthook/rbac/bootstrap-roles failed: reason withheld Dec 12 15:24:07 crc kubenswrapper[5123]: [+]poststarthook/scheduling/bootstrap-system-priority-classes ok Dec 12 15:24:07 crc kubenswrapper[5123]: [+]poststarthook/priority-and-fairness-config-producer ok Dec 12 15:24:07 crc kubenswrapper[5123]: [+]poststarthook/bootstrap-controller ok Dec 12 15:24:07 crc kubenswrapper[5123]: [+]poststarthook/start-kubernetes-service-cidr-controller ok Dec 12 15:24:07 crc kubenswrapper[5123]: [+]poststarthook/aggregator-reload-proxy-client-cert ok Dec 12 15:24:07 crc kubenswrapper[5123]: [+]poststarthook/start-kube-aggregator-informers ok Dec 12 15:24:07 crc kubenswrapper[5123]: [+]poststarthook/apiservice-status-local-available-controller ok Dec 12 15:24:07 crc kubenswrapper[5123]: [+]poststarthook/apiservice-status-remote-available-controller ok Dec 12 15:24:07 crc kubenswrapper[5123]: [+]poststarthook/apiservice-registration-controller ok Dec 12 15:24:07 crc kubenswrapper[5123]: [+]poststarthook/apiservice-wait-for-first-sync ok Dec 12 15:24:07 crc kubenswrapper[5123]: [+]poststarthook/apiservice-discovery-controller ok Dec 12 15:24:07 crc kubenswrapper[5123]: [+]poststarthook/kube-apiserver-autoregistration ok Dec 12 15:24:07 crc kubenswrapper[5123]: [+]autoregister-completion ok Dec 12 15:24:07 crc kubenswrapper[5123]: [+]poststarthook/apiservice-openapi-controller ok Dec 12 15:24:07 crc kubenswrapper[5123]: [+]poststarthook/apiservice-openapiv3-controller ok Dec 12 15:24:07 crc kubenswrapper[5123]: livez check failed Dec 12 15:24:07 
crc kubenswrapper[5123]: I1212 15:24:07.678397 5123 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="57755cc5f99000cc11e193051474d4e2" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 12 15:24:08 crc kubenswrapper[5123]: I1212 15:24:08.038944 5123 generic.go:358] "Generic (PLEG): container finished" podID="c4465de2-5e85-451d-a998-dcff71c6d37c" containerID="3a8e1ad4787b4dbc70707975a2240d26e7c4aa17123bfc16f3743df7363f2c36" exitCode=0 Dec 12 15:24:08 crc kubenswrapper[5123]: I1212 15:24:08.039331 5123 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-66458b6674-cqp44" event={"ID":"c4465de2-5e85-451d-a998-dcff71c6d37c","Type":"ContainerDied","Data":"3a8e1ad4787b4dbc70707975a2240d26e7c4aa17123bfc16f3743df7363f2c36"} Dec 12 15:24:08 crc kubenswrapper[5123]: I1212 15:24:08.072757 5123 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-66458b6674-cqp44" Dec 12 15:24:08 crc kubenswrapper[5123]: I1212 15:24:08.109704 5123 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/c4465de2-5e85-451d-a998-dcff71c6d37c-v4-0-config-system-router-certs\") pod \"c4465de2-5e85-451d-a998-dcff71c6d37c\" (UID: \"c4465de2-5e85-451d-a998-dcff71c6d37c\") " Dec 12 15:24:08 crc kubenswrapper[5123]: I1212 15:24:08.109797 5123 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/c4465de2-5e85-451d-a998-dcff71c6d37c-v4-0-config-system-serving-cert\") pod \"c4465de2-5e85-451d-a998-dcff71c6d37c\" (UID: \"c4465de2-5e85-451d-a998-dcff71c6d37c\") " Dec 12 15:24:08 crc kubenswrapper[5123]: I1212 15:24:08.109842 5123 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/c4465de2-5e85-451d-a998-dcff71c6d37c-v4-0-config-system-service-ca\") pod \"c4465de2-5e85-451d-a998-dcff71c6d37c\" (UID: \"c4465de2-5e85-451d-a998-dcff71c6d37c\") " Dec 12 15:24:08 crc kubenswrapper[5123]: I1212 15:24:08.109867 5123 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/c4465de2-5e85-451d-a998-dcff71c6d37c-v4-0-config-user-template-error\") pod \"c4465de2-5e85-451d-a998-dcff71c6d37c\" (UID: \"c4465de2-5e85-451d-a998-dcff71c6d37c\") " Dec 12 15:24:08 crc kubenswrapper[5123]: I1212 15:24:08.109911 5123 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/c4465de2-5e85-451d-a998-dcff71c6d37c-audit-policies\") pod \"c4465de2-5e85-451d-a998-dcff71c6d37c\" (UID: \"c4465de2-5e85-451d-a998-dcff71c6d37c\") " Dec 12 15:24:08 
crc kubenswrapper[5123]: I1212 15:24:08.109938 5123 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/c4465de2-5e85-451d-a998-dcff71c6d37c-audit-dir\") pod \"c4465de2-5e85-451d-a998-dcff71c6d37c\" (UID: \"c4465de2-5e85-451d-a998-dcff71c6d37c\") " Dec 12 15:24:08 crc kubenswrapper[5123]: I1212 15:24:08.109985 5123 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c4465de2-5e85-451d-a998-dcff71c6d37c-v4-0-config-system-trusted-ca-bundle\") pod \"c4465de2-5e85-451d-a998-dcff71c6d37c\" (UID: \"c4465de2-5e85-451d-a998-dcff71c6d37c\") " Dec 12 15:24:08 crc kubenswrapper[5123]: I1212 15:24:08.110305 5123 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c4465de2-5e85-451d-a998-dcff71c6d37c-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "c4465de2-5e85-451d-a998-dcff71c6d37c" (UID: "c4465de2-5e85-451d-a998-dcff71c6d37c"). InnerVolumeSpecName "audit-dir". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Dec 12 15:24:08 crc kubenswrapper[5123]: I1212 15:24:08.110360 5123 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/c4465de2-5e85-451d-a998-dcff71c6d37c-v4-0-config-system-cliconfig\") pod \"c4465de2-5e85-451d-a998-dcff71c6d37c\" (UID: \"c4465de2-5e85-451d-a998-dcff71c6d37c\") " Dec 12 15:24:08 crc kubenswrapper[5123]: I1212 15:24:08.110392 5123 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/c4465de2-5e85-451d-a998-dcff71c6d37c-v4-0-config-user-idp-0-file-data\") pod \"c4465de2-5e85-451d-a998-dcff71c6d37c\" (UID: \"c4465de2-5e85-451d-a998-dcff71c6d37c\") " Dec 12 15:24:08 crc kubenswrapper[5123]: I1212 15:24:08.110414 5123 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/c4465de2-5e85-451d-a998-dcff71c6d37c-v4-0-config-user-template-login\") pod \"c4465de2-5e85-451d-a998-dcff71c6d37c\" (UID: \"c4465de2-5e85-451d-a998-dcff71c6d37c\") " Dec 12 15:24:08 crc kubenswrapper[5123]: I1212 15:24:08.110434 5123 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6htnd\" (UniqueName: \"kubernetes.io/projected/c4465de2-5e85-451d-a998-dcff71c6d37c-kube-api-access-6htnd\") pod \"c4465de2-5e85-451d-a998-dcff71c6d37c\" (UID: \"c4465de2-5e85-451d-a998-dcff71c6d37c\") " Dec 12 15:24:08 crc kubenswrapper[5123]: I1212 15:24:08.110475 5123 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/c4465de2-5e85-451d-a998-dcff71c6d37c-v4-0-config-system-session\") pod \"c4465de2-5e85-451d-a998-dcff71c6d37c\" (UID: \"c4465de2-5e85-451d-a998-dcff71c6d37c\") " Dec 12 15:24:08 crc kubenswrapper[5123]: I1212 
15:24:08.110493 5123 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/c4465de2-5e85-451d-a998-dcff71c6d37c-v4-0-config-user-template-provider-selection\") pod \"c4465de2-5e85-451d-a998-dcff71c6d37c\" (UID: \"c4465de2-5e85-451d-a998-dcff71c6d37c\") " Dec 12 15:24:08 crc kubenswrapper[5123]: I1212 15:24:08.110521 5123 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/c4465de2-5e85-451d-a998-dcff71c6d37c-v4-0-config-system-ocp-branding-template\") pod \"c4465de2-5e85-451d-a998-dcff71c6d37c\" (UID: \"c4465de2-5e85-451d-a998-dcff71c6d37c\") " Dec 12 15:24:08 crc kubenswrapper[5123]: I1212 15:24:08.110777 5123 reconciler_common.go:299] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/c4465de2-5e85-451d-a998-dcff71c6d37c-audit-dir\") on node \"crc\" DevicePath \"\"" Dec 12 15:24:08 crc kubenswrapper[5123]: I1212 15:24:08.111327 5123 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c4465de2-5e85-451d-a998-dcff71c6d37c-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "c4465de2-5e85-451d-a998-dcff71c6d37c" (UID: "c4465de2-5e85-451d-a998-dcff71c6d37c"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 12 15:24:08 crc kubenswrapper[5123]: I1212 15:24:08.112039 5123 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c4465de2-5e85-451d-a998-dcff71c6d37c-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "c4465de2-5e85-451d-a998-dcff71c6d37c" (UID: "c4465de2-5e85-451d-a998-dcff71c6d37c"). InnerVolumeSpecName "audit-policies". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 12 15:24:08 crc kubenswrapper[5123]: I1212 15:24:08.112263 5123 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c4465de2-5e85-451d-a998-dcff71c6d37c-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "c4465de2-5e85-451d-a998-dcff71c6d37c" (UID: "c4465de2-5e85-451d-a998-dcff71c6d37c"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 12 15:24:08 crc kubenswrapper[5123]: I1212 15:24:08.112752 5123 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c4465de2-5e85-451d-a998-dcff71c6d37c-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "c4465de2-5e85-451d-a998-dcff71c6d37c" (UID: "c4465de2-5e85-451d-a998-dcff71c6d37c"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 12 15:24:08 crc kubenswrapper[5123]: I1212 15:24:08.118589 5123 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c4465de2-5e85-451d-a998-dcff71c6d37c-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "c4465de2-5e85-451d-a998-dcff71c6d37c" (UID: "c4465de2-5e85-451d-a998-dcff71c6d37c"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 12 15:24:08 crc kubenswrapper[5123]: I1212 15:24:08.119180 5123 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c4465de2-5e85-451d-a998-dcff71c6d37c-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "c4465de2-5e85-451d-a998-dcff71c6d37c" (UID: "c4465de2-5e85-451d-a998-dcff71c6d37c"). InnerVolumeSpecName "v4-0-config-system-session". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 12 15:24:08 crc kubenswrapper[5123]: I1212 15:24:08.119623 5123 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c4465de2-5e85-451d-a998-dcff71c6d37c-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "c4465de2-5e85-451d-a998-dcff71c6d37c" (UID: "c4465de2-5e85-451d-a998-dcff71c6d37c"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 12 15:24:08 crc kubenswrapper[5123]: I1212 15:24:08.121107 5123 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c4465de2-5e85-451d-a998-dcff71c6d37c-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "c4465de2-5e85-451d-a998-dcff71c6d37c" (UID: "c4465de2-5e85-451d-a998-dcff71c6d37c"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 12 15:24:08 crc kubenswrapper[5123]: I1212 15:24:08.121264 5123 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c4465de2-5e85-451d-a998-dcff71c6d37c-kube-api-access-6htnd" (OuterVolumeSpecName: "kube-api-access-6htnd") pod "c4465de2-5e85-451d-a998-dcff71c6d37c" (UID: "c4465de2-5e85-451d-a998-dcff71c6d37c"). InnerVolumeSpecName "kube-api-access-6htnd". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 12 15:24:08 crc kubenswrapper[5123]: I1212 15:24:08.121725 5123 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c4465de2-5e85-451d-a998-dcff71c6d37c-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "c4465de2-5e85-451d-a998-dcff71c6d37c" (UID: "c4465de2-5e85-451d-a998-dcff71c6d37c"). InnerVolumeSpecName "v4-0-config-system-router-certs". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 12 15:24:08 crc kubenswrapper[5123]: I1212 15:24:08.121947 5123 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c4465de2-5e85-451d-a998-dcff71c6d37c-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "c4465de2-5e85-451d-a998-dcff71c6d37c" (UID: "c4465de2-5e85-451d-a998-dcff71c6d37c"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 12 15:24:08 crc kubenswrapper[5123]: I1212 15:24:08.122984 5123 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c4465de2-5e85-451d-a998-dcff71c6d37c-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "c4465de2-5e85-451d-a998-dcff71c6d37c" (UID: "c4465de2-5e85-451d-a998-dcff71c6d37c"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 12 15:24:08 crc kubenswrapper[5123]: I1212 15:24:08.125712 5123 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c4465de2-5e85-451d-a998-dcff71c6d37c-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "c4465de2-5e85-451d-a998-dcff71c6d37c" (UID: "c4465de2-5e85-451d-a998-dcff71c6d37c"). InnerVolumeSpecName "v4-0-config-user-template-login". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 12 15:24:08 crc kubenswrapper[5123]: I1212 15:24:08.211729 5123 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/c4465de2-5e85-451d-a998-dcff71c6d37c-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 12 15:24:08 crc kubenswrapper[5123]: I1212 15:24:08.212481 5123 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/c4465de2-5e85-451d-a998-dcff71c6d37c-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Dec 12 15:24:08 crc kubenswrapper[5123]: I1212 15:24:08.212569 5123 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/c4465de2-5e85-451d-a998-dcff71c6d37c-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Dec 12 15:24:08 crc kubenswrapper[5123]: I1212 15:24:08.212669 5123 reconciler_common.go:299] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/c4465de2-5e85-451d-a998-dcff71c6d37c-audit-policies\") on node \"crc\" DevicePath \"\"" Dec 12 15:24:08 crc kubenswrapper[5123]: I1212 15:24:08.212768 5123 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c4465de2-5e85-451d-a998-dcff71c6d37c-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 12 15:24:08 crc kubenswrapper[5123]: I1212 15:24:08.212864 5123 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/c4465de2-5e85-451d-a998-dcff71c6d37c-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Dec 12 15:24:08 crc kubenswrapper[5123]: I1212 15:24:08.212955 5123 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: 
\"kubernetes.io/secret/c4465de2-5e85-451d-a998-dcff71c6d37c-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Dec 12 15:24:08 crc kubenswrapper[5123]: I1212 15:24:08.213113 5123 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/c4465de2-5e85-451d-a998-dcff71c6d37c-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Dec 12 15:24:08 crc kubenswrapper[5123]: I1212 15:24:08.213262 5123 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-6htnd\" (UniqueName: \"kubernetes.io/projected/c4465de2-5e85-451d-a998-dcff71c6d37c-kube-api-access-6htnd\") on node \"crc\" DevicePath \"\"" Dec 12 15:24:08 crc kubenswrapper[5123]: I1212 15:24:08.213369 5123 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/c4465de2-5e85-451d-a998-dcff71c6d37c-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Dec 12 15:24:08 crc kubenswrapper[5123]: I1212 15:24:08.213475 5123 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/c4465de2-5e85-451d-a998-dcff71c6d37c-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Dec 12 15:24:08 crc kubenswrapper[5123]: I1212 15:24:08.213592 5123 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/c4465de2-5e85-451d-a998-dcff71c6d37c-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Dec 12 15:24:08 crc kubenswrapper[5123]: I1212 15:24:08.213709 5123 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/c4465de2-5e85-451d-a998-dcff71c6d37c-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Dec 12 15:24:08 crc kubenswrapper[5123]: I1212 
15:24:08.334909 5123 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 12 15:24:08 crc kubenswrapper[5123]: I1212 15:24:08.344255 5123 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 12 15:24:09 crc kubenswrapper[5123]: I1212 15:24:09.001816 5123 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 12 15:24:09 crc kubenswrapper[5123]: I1212 15:24:09.051646 5123 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-66458b6674-cqp44" Dec 12 15:24:09 crc kubenswrapper[5123]: I1212 15:24:09.052042 5123 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-66458b6674-cqp44" event={"ID":"c4465de2-5e85-451d-a998-dcff71c6d37c","Type":"ContainerDied","Data":"41191cb8b32cf2147eba77a5a97493110dfdafc3d28f1fa4b134483b033f8101"} Dec 12 15:24:09 crc kubenswrapper[5123]: I1212 15:24:09.052134 5123 scope.go:117] "RemoveContainer" containerID="3a8e1ad4787b4dbc70707975a2240d26e7c4aa17123bfc16f3743df7363f2c36" Dec 12 15:24:11 crc kubenswrapper[5123]: I1212 15:24:11.628831 5123 kubelet.go:3329] "Deleted mirror pod as it didn't match the static Pod" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 12 15:24:11 crc kubenswrapper[5123]: I1212 15:24:11.629286 5123 kubelet.go:3340] "Creating a mirror pod for static pod" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 12 15:24:11 crc kubenswrapper[5123]: I1212 15:24:11.872145 5123 status_manager.go:905] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="57755cc5f99000cc11e193051474d4e2" podUID="4542b1d8-24e0-48df-b6d9-b94c75f343b5" Dec 12 15:24:12 crc kubenswrapper[5123]: I1212 
15:24:12.201015 5123 kubelet.go:3323] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="96b4a286-31bb-42a1-934a-56ea0da8024a" Dec 12 15:24:12 crc kubenswrapper[5123]: I1212 15:24:12.201072 5123 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="96b4a286-31bb-42a1-934a-56ea0da8024a" Dec 12 15:24:12 crc kubenswrapper[5123]: I1212 15:24:12.217512 5123 status_manager.go:905] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="57755cc5f99000cc11e193051474d4e2" podUID="4542b1d8-24e0-48df-b6d9-b94c75f343b5" Dec 12 15:24:12 crc kubenswrapper[5123]: E1212 15:24:12.527027 5123 reflector.go:200] "Failed to watch" err="configmaps \"v4-0-config-system-trusted-ca-bundle\" is forbidden: User \"system:node:crc\" cannot watch resource \"configmaps\" in API group \"\" in the namespace \"openshift-authentication\": no relationship found between node 'crc' and this object" logger="UnhandledError" reflector="object-\"openshift-authentication\"/\"v4-0-config-system-trusted-ca-bundle\"" type="*v1.ConfigMap" Dec 12 15:24:12 crc kubenswrapper[5123]: E1212 15:24:12.995409 5123 reflector.go:200] "Failed to watch" err="secrets \"v4-0-config-system-ocp-branding-template\" is forbidden: User \"system:node:crc\" cannot watch resource \"secrets\" in API group \"\" in the namespace \"openshift-authentication\": no relationship found between node 'crc' and this object" logger="UnhandledError" reflector="object-\"openshift-authentication\"/\"v4-0-config-system-ocp-branding-template\"" type="*v1.Secret" Dec 12 15:24:20 crc kubenswrapper[5123]: I1212 15:24:20.067887 5123 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 12 15:24:21 crc kubenswrapper[5123]: I1212 15:24:21.438472 5123 reflector.go:430] "Caches populated" type="*v1.Secret" 
reflector="object-\"openshift-image-registry\"/\"image-registry-operator-tls\"" Dec 12 15:24:21 crc kubenswrapper[5123]: I1212 15:24:21.939865 5123 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ovn-kubernetes\"/\"kube-root-ca.crt\"" Dec 12 15:24:22 crc kubenswrapper[5123]: I1212 15:24:22.190861 5123 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication-operator\"/\"openshift-service-ca.crt\"" Dec 12 15:24:22 crc kubenswrapper[5123]: I1212 15:24:22.549622 5123 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-console\"/\"console-serving-cert\"" Dec 12 15:24:22 crc kubenswrapper[5123]: I1212 15:24:22.681111 5123 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-oauth-apiserver\"/\"kube-root-ca.crt\"" Dec 12 15:24:22 crc kubenswrapper[5123]: I1212 15:24:22.782356 5123 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-route-controller-manager\"/\"route-controller-manager-sa-dockercfg-mmcpt\"" Dec 12 15:24:22 crc kubenswrapper[5123]: I1212 15:24:22.892118 5123 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-service-ca-operator\"/\"serving-cert\"" Dec 12 15:24:23 crc kubenswrapper[5123]: I1212 15:24:23.041599 5123 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-storage-version-migrator\"/\"openshift-service-ca.crt\"" Dec 12 15:24:23 crc kubenswrapper[5123]: I1212 15:24:23.347084 5123 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver\"/\"etcd-serving-ca\"" Dec 12 15:24:23 crc kubenswrapper[5123]: I1212 15:24:23.435196 5123 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-apiserver\"/\"serving-cert\"" Dec 12 15:24:23 crc kubenswrapper[5123]: I1212 15:24:23.710752 5123 reflector.go:430] "Caches populated" 
type="*v1.ConfigMap" reflector="object-\"openshift-ovn-kubernetes\"/\"env-overrides\"" Dec 12 15:24:23 crc kubenswrapper[5123]: I1212 15:24:23.718516 5123 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-apiserver-operator\"/\"kube-root-ca.crt\"" Dec 12 15:24:23 crc kubenswrapper[5123]: I1212 15:24:23.815489 5123 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-route-controller-manager\"/\"serving-cert\"" Dec 12 15:24:23 crc kubenswrapper[5123]: I1212 15:24:23.854884 5123 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-apiserver-operator\"/\"openshift-apiserver-operator-dockercfg-6c46w\"" Dec 12 15:24:23 crc kubenswrapper[5123]: I1212 15:24:23.991785 5123 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-controller-manager-operator\"/\"openshift-controller-manager-operator-serving-cert\"" Dec 12 15:24:24 crc kubenswrapper[5123]: I1212 15:24:24.038151 5123 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver\"/\"config\"" Dec 12 15:24:24 crc kubenswrapper[5123]: I1212 15:24:24.185781 5123 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-operator-lifecycle-manager\"/\"openshift-service-ca.crt\"" Dec 12 15:24:24 crc kubenswrapper[5123]: I1212 15:24:24.370937 5123 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-machine-approver\"/\"kube-rbac-proxy\"" Dec 12 15:24:24 crc kubenswrapper[5123]: I1212 15:24:24.397808 5123 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ovn-kubernetes\"/\"ovn-kubernetes-control-plane-dockercfg-nl8tp\"" Dec 12 15:24:24 crc kubenswrapper[5123]: I1212 15:24:24.452087 5123 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-apiserver-operator\"/\"kube-apiserver-operator-serving-cert\"" Dec 12 
15:24:24 crc kubenswrapper[5123]: I1212 15:24:24.491828 5123 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-dns-operator\"/\"dns-operator-dockercfg-wbbsn\"" Dec 12 15:24:24 crc kubenswrapper[5123]: I1212 15:24:24.502346 5123 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"mco-proxy-tls\"" Dec 12 15:24:24 crc kubenswrapper[5123]: I1212 15:24:24.521164 5123 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"machine-config-server-dockercfg-dzw6b\"" Dec 12 15:24:24 crc kubenswrapper[5123]: I1212 15:24:24.544044 5123 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-cluster-machine-approver\"/\"machine-approver-tls\"" Dec 12 15:24:24 crc kubenswrapper[5123]: I1212 15:24:24.598853 5123 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-samples-operator\"/\"kube-root-ca.crt\"" Dec 12 15:24:24 crc kubenswrapper[5123]: I1212 15:24:24.736820 5123 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager\"/\"kube-root-ca.crt\"" Dec 12 15:24:24 crc kubenswrapper[5123]: I1212 15:24:24.764502 5123 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"installation-pull-secrets\"" Dec 12 15:24:24 crc kubenswrapper[5123]: I1212 15:24:24.813379 5123 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"default-dockercfg-g6kgg\"" Dec 12 15:24:24 crc kubenswrapper[5123]: I1212 15:24:24.852081 5123 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress-operator\"/\"openshift-service-ca.crt\"" Dec 12 15:24:24 crc kubenswrapper[5123]: I1212 15:24:24.854907 5123 reflector.go:430] "Caches populated" type="*v1.Secret" 
reflector="object-\"openshift-network-operator\"/\"metrics-tls\"" Dec 12 15:24:24 crc kubenswrapper[5123]: I1212 15:24:24.983489 5123 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"community-operators-dockercfg-vrd5f\"" Dec 12 15:24:25 crc kubenswrapper[5123]: I1212 15:24:25.067143 5123 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"pprof-cert\"" Dec 12 15:24:25 crc kubenswrapper[5123]: I1212 15:24:25.080368 5123 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-console\"/\"console-oauth-config\"" Dec 12 15:24:25 crc kubenswrapper[5123]: I1212 15:24:25.120007 5123 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-console\"/\"networking-console-plugin\"" Dec 12 15:24:25 crc kubenswrapper[5123]: I1212 15:24:25.264753 5123 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication-operator\"/\"kube-root-ca.crt\"" Dec 12 15:24:25 crc kubenswrapper[5123]: I1212 15:24:25.548778 5123 reflector.go:430] "Caches populated" type="*v1.RuntimeClass" reflector="k8s.io/client-go/informers/factory.go:160" Dec 12 15:24:25 crc kubenswrapper[5123]: I1212 15:24:25.549048 5123 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ovn-kubernetes\"/\"ovn-node-metrics-cert\"" Dec 12 15:24:25 crc kubenswrapper[5123]: I1212 15:24:25.550481 5123 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver\"/\"kube-root-ca.crt\"" Dec 12 15:24:25 crc kubenswrapper[5123]: I1212 15:24:25.550698 5123 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console\"/\"kube-root-ca.crt\"" Dec 12 15:24:25 crc kubenswrapper[5123]: I1212 15:24:25.551417 5123 reflector.go:430] "Caches populated" type="*v1.Node" 
reflector="k8s.io/client-go/informers/factory.go:160" Dec 12 15:24:25 crc kubenswrapper[5123]: I1212 15:24:25.551774 5123 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-console\"/\"console-dockercfg-8dkm8\"" Dec 12 15:24:25 crc kubenswrapper[5123]: I1212 15:24:25.577045 5123 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager-operator\"/\"openshift-service-ca.crt\"" Dec 12 15:24:25 crc kubenswrapper[5123]: I1212 15:24:25.611626 5123 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-etcd-operator\"/\"kube-root-ca.crt\"" Dec 12 15:24:25 crc kubenswrapper[5123]: I1212 15:24:25.628849 5123 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-dns\"/\"openshift-service-ca.crt\"" Dec 12 15:24:25 crc kubenswrapper[5123]: I1212 15:24:25.663157 5123 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-image-registry\"/\"openshift-service-ca.crt\"" Dec 12 15:24:25 crc kubenswrapper[5123]: I1212 15:24:25.761577 5123 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-image-registry\"/\"trusted-ca\"" Dec 12 15:24:25 crc kubenswrapper[5123]: I1212 15:24:25.818708 5123 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-api\"/\"machine-api-operator-dockercfg-6n5ln\"" Dec 12 15:24:25 crc kubenswrapper[5123]: I1212 15:24:25.992203 5123 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-config-operator\"/\"kube-rbac-proxy\"" Dec 12 15:24:26 crc kubenswrapper[5123]: I1212 15:24:26.095885 5123 ???:1] "http: TLS handshake error from 192.168.126.11:37396: no serving certificate available for the kubelet" Dec 12 15:24:26 crc kubenswrapper[5123]: I1212 15:24:26.190101 5123 reflector.go:430] "Caches populated" type="*v1.Secret" 
reflector="object-\"openshift-service-ca-operator\"/\"service-ca-operator-dockercfg-bjqfd\"" Dec 12 15:24:26 crc kubenswrapper[5123]: I1212 15:24:26.208098 5123 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-image-registry\"/\"kube-root-ca.crt\"" Dec 12 15:24:26 crc kubenswrapper[5123]: I1212 15:24:26.356430 5123 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-oauth-apiserver\"/\"etcd-client\"" Dec 12 15:24:26 crc kubenswrapper[5123]: I1212 15:24:26.356736 5123 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"registry-dockercfg-6w67b\"" Dec 12 15:24:26 crc kubenswrapper[5123]: I1212 15:24:26.364745 5123 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ovn-kubernetes\"/\"openshift-service-ca.crt\"" Dec 12 15:24:26 crc kubenswrapper[5123]: I1212 15:24:26.387586 5123 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-route-controller-manager\"/\"config\"" Dec 12 15:24:26 crc kubenswrapper[5123]: I1212 15:24:26.425837 5123 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-dns\"/\"dns-default-metrics-tls\"" Dec 12 15:24:26 crc kubenswrapper[5123]: I1212 15:24:26.462066 5123 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver-operator\"/\"openshift-service-ca.crt\"" Dec 12 15:24:26 crc kubenswrapper[5123]: I1212 15:24:26.467588 5123 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-dns\"/\"dns-dockercfg-kpvmz\"" Dec 12 15:24:26 crc kubenswrapper[5123]: I1212 15:24:26.559727 5123 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console-operator\"/\"trusted-ca\"" Dec 12 15:24:26 crc kubenswrapper[5123]: I1212 15:24:26.646735 5123 reflector.go:430] "Caches populated" type="*v1.ConfigMap" 
reflector="object-\"openshift-dns-operator\"/\"openshift-service-ca.crt\"" Dec 12 15:24:26 crc kubenswrapper[5123]: I1212 15:24:26.899034 5123 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"multus-ac-dockercfg-gj7jx\"" Dec 12 15:24:26 crc kubenswrapper[5123]: I1212 15:24:26.937410 5123 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console\"/\"openshift-service-ca.crt\"" Dec 12 15:24:26 crc kubenswrapper[5123]: I1212 15:24:26.939574 5123 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ovn-kubernetes\"/\"ovnkube-config\"" Dec 12 15:24:27 crc kubenswrapper[5123]: I1212 15:24:27.014589 5123 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"cluster-image-registry-operator-dockercfg-ntnd7\"" Dec 12 15:24:27 crc kubenswrapper[5123]: I1212 15:24:27.081171 5123 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-service-ca\"/\"signing-cabundle\"" Dec 12 15:24:27 crc kubenswrapper[5123]: I1212 15:24:27.081728 5123 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-cluster-machine-approver\"/\"machine-approver-sa-dockercfg-wzhvk\"" Dec 12 15:24:27 crc kubenswrapper[5123]: I1212 15:24:27.153474 5123 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console-operator\"/\"openshift-service-ca.crt\"" Dec 12 15:24:27 crc kubenswrapper[5123]: I1212 15:24:27.166681 5123 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ovn-kubernetes\"/\"ovn-kubernetes-node-dockercfg-l2v2m\"" Dec 12 15:24:27 crc kubenswrapper[5123]: I1212 15:24:27.389595 5123 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication-operator\"/\"authentication-operator-config\"" Dec 12 15:24:27 crc kubenswrapper[5123]: I1212 15:24:27.389713 5123 
reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-dns\"/\"node-resolver-dockercfg-tk7bt\"" Dec 12 15:24:27 crc kubenswrapper[5123]: I1212 15:24:27.389872 5123 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-operator\"/\"kube-root-ca.crt\"" Dec 12 15:24:27 crc kubenswrapper[5123]: I1212 15:24:27.389979 5123 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"whereabouts-flatfile-config\"" Dec 12 15:24:27 crc kubenswrapper[5123]: I1212 15:24:27.501753 5123 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress-operator\"/\"kube-root-ca.crt\"" Dec 12 15:24:27 crc kubenswrapper[5123]: I1212 15:24:27.581270 5123 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-marketplace\"/\"kube-root-ca.crt\"" Dec 12 15:24:27 crc kubenswrapper[5123]: I1212 15:24:27.586448 5123 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress-operator\"/\"trusted-ca\"" Dec 12 15:24:27 crc kubenswrapper[5123]: I1212 15:24:27.606787 5123 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-node-identity\"/\"env-overrides\"" Dec 12 15:24:27 crc kubenswrapper[5123]: I1212 15:24:27.704432 5123 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-dns-operator\"/\"kube-root-ca.crt\"" Dec 12 15:24:27 crc kubenswrapper[5123]: I1212 15:24:27.841728 5123 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"node-ca-dockercfg-tjs74\"" Dec 12 15:24:27 crc kubenswrapper[5123]: I1212 15:24:27.918372 5123 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver-operator\"/\"openshift-apiserver-operator-config\"" Dec 12 15:24:27 crc kubenswrapper[5123]: I1212 15:24:27.947236 5123 reflector.go:430] 
"Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-api\"/\"kube-root-ca.crt\"" Dec 12 15:24:28 crc kubenswrapper[5123]: I1212 15:24:28.020330 5123 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"package-server-manager-serving-cert\"" Dec 12 15:24:28 crc kubenswrapper[5123]: I1212 15:24:28.081607 5123 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-config-operator\"/\"config-operator-serving-cert\"" Dec 12 15:24:28 crc kubenswrapper[5123]: I1212 15:24:28.209940 5123 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-cluster-samples-operator\"/\"samples-operator-tls\"" Dec 12 15:24:28 crc kubenswrapper[5123]: I1212 15:24:28.334406 5123 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"marketplace-operator-dockercfg-2cfkp\"" Dec 12 15:24:28 crc kubenswrapper[5123]: I1212 15:24:28.334551 5123 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-apiserver\"/\"encryption-config-1\"" Dec 12 15:24:28 crc kubenswrapper[5123]: I1212 15:24:28.456567 5123 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress\"/\"router-metrics-certs-default\"" Dec 12 15:24:28 crc kubenswrapper[5123]: I1212 15:24:28.459549 5123 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-controller-manager-operator\"/\"kube-controller-manager-operator-config\"" Dec 12 15:24:28 crc kubenswrapper[5123]: I1212 15:24:28.568203 5123 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"certified-operators-dockercfg-7cl8d\"" Dec 12 15:24:28 crc kubenswrapper[5123]: I1212 15:24:28.589004 5123 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress\"/\"router-dockercfg-kw8fx\"" Dec 12 15:24:28 crc 
kubenswrapper[5123]: I1212 15:24:28.620395 5123 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager\"/\"config\"" Dec 12 15:24:28 crc kubenswrapper[5123]: I1212 15:24:28.673871 5123 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-storage-version-migrator\"/\"kube-storage-version-migrator-sa-dockercfg-kknhg\"" Dec 12 15:24:28 crc kubenswrapper[5123]: I1212 15:24:28.713722 5123 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-storage-version-migrator-operator\"/\"kube-storage-version-migrator-operator-dockercfg-2h6bs\"" Dec 12 15:24:28 crc kubenswrapper[5123]: I1212 15:24:28.823303 5123 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"machine-config-daemon-dockercfg-w9nzh\"" Dec 12 15:24:28 crc kubenswrapper[5123]: I1212 15:24:28.884652 5123 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-dns\"/\"kube-root-ca.crt\"" Dec 12 15:24:28 crc kubenswrapper[5123]: I1212 15:24:28.930394 5123 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver-operator\"/\"kube-root-ca.crt\"" Dec 12 15:24:29 crc kubenswrapper[5123]: I1212 15:24:29.001126 5123 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-diagnostics\"/\"kube-root-ca.crt\"" Dec 12 15:24:29 crc kubenswrapper[5123]: I1212 15:24:29.008998 5123 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"openshift-service-ca.crt\"" Dec 12 15:24:29 crc kubenswrapper[5123]: I1212 15:24:29.009182 5123 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-dns-operator\"/\"metrics-tls\"" Dec 12 15:24:29 crc kubenswrapper[5123]: I1212 15:24:29.011384 5123 reflector.go:430] "Caches populated" type="*v1.ConfigMap" 
reflector="object-\"openshift-network-node-identity\"/\"openshift-service-ca.crt\"" Dec 12 15:24:29 crc kubenswrapper[5123]: I1212 15:24:29.026840 5123 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-scheduler-operator\"/\"openshift-kube-scheduler-operator-config\"" Dec 12 15:24:29 crc kubenswrapper[5123]: I1212 15:24:29.072858 5123 reflector.go:430] "Caches populated" type="*v1.CSIDriver" reflector="k8s.io/client-go/informers/factory.go:160" Dec 12 15:24:29 crc kubenswrapper[5123]: I1212 15:24:29.088341 5123 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-apiserver-operator\"/\"kube-apiserver-operator-config\"" Dec 12 15:24:29 crc kubenswrapper[5123]: I1212 15:24:29.105496 5123 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console\"/\"console-config\"" Dec 12 15:24:29 crc kubenswrapper[5123]: I1212 15:24:29.167894 5123 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-oauth-apiserver\"/\"oauth-apiserver-sa-dockercfg-qqw4z\"" Dec 12 15:24:29 crc kubenswrapper[5123]: I1212 15:24:29.253095 5123 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver\"/\"audit-1\"" Dec 12 15:24:29 crc kubenswrapper[5123]: I1212 15:24:29.374231 5123 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"hostpath-provisioner\"/\"csi-hostpath-provisioner-sa-dockercfg-7dcws\"" Dec 12 15:24:29 crc kubenswrapper[5123]: I1212 15:24:29.398124 5123 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver\"/\"openshift-service-ca.crt\"" Dec 12 15:24:29 crc kubenswrapper[5123]: I1212 15:24:29.427208 5123 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"proxy-tls\"" Dec 12 15:24:29 crc kubenswrapper[5123]: I1212 15:24:29.432731 5123 reflector.go:430] "Caches populated" 
type="*v1.ConfigMap" reflector="object-\"openshift-etcd-operator\"/\"openshift-service-ca.crt\"" Dec 12 15:24:29 crc kubenswrapper[5123]: I1212 15:24:29.453867 5123 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"olm-operator-serviceaccount-dockercfg-4gqzj\"" Dec 12 15:24:29 crc kubenswrapper[5123]: I1212 15:24:29.546991 5123 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"metrics-daemon-sa-dockercfg-t8n29\"" Dec 12 15:24:29 crc kubenswrapper[5123]: I1212 15:24:29.557281 5123 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-api\"/\"kube-rbac-proxy\"" Dec 12 15:24:29 crc kubenswrapper[5123]: I1212 15:24:29.593187 5123 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager\"/\"openshift-global-ca\"" Dec 12 15:24:29 crc kubenswrapper[5123]: I1212 15:24:29.618932 5123 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console-operator\"/\"kube-root-ca.crt\"" Dec 12 15:24:29 crc kubenswrapper[5123]: I1212 15:24:29.653615 5123 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-etcd-operator\"/\"etcd-client\"" Dec 12 15:24:29 crc kubenswrapper[5123]: I1212 15:24:29.675455 5123 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"multus-admission-controller-secret\"" Dec 12 15:24:29 crc kubenswrapper[5123]: I1212 15:24:29.681439 5123 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-console-operator\"/\"console-operator-dockercfg-kl6m8\"" Dec 12 15:24:29 crc kubenswrapper[5123]: I1212 15:24:29.769993 5123 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-service-ca-operator\"/\"kube-root-ca.crt\"" Dec 12 15:24:29 crc kubenswrapper[5123]: I1212 15:24:29.841692 5123 
reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-config-operator\"/\"kube-root-ca.crt\"" Dec 12 15:24:29 crc kubenswrapper[5123]: I1212 15:24:29.889901 5123 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console\"/\"trusted-ca-bundle\"" Dec 12 15:24:30 crc kubenswrapper[5123]: I1212 15:24:30.068730 5123 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver\"/\"image-import-ca\"" Dec 12 15:24:30 crc kubenswrapper[5123]: I1212 15:24:30.136156 5123 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-api\"/\"openshift-service-ca.crt\"" Dec 12 15:24:30 crc kubenswrapper[5123]: I1212 15:24:30.283946 5123 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-controller-manager-operator\"/\"kube-root-ca.crt\"" Dec 12 15:24:30 crc kubenswrapper[5123]: I1212 15:24:30.319662 5123 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-route-controller-manager\"/\"kube-root-ca.crt\"" Dec 12 15:24:30 crc kubenswrapper[5123]: I1212 15:24:30.393748 5123 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ovn-kubernetes\"/\"ovnkube-script-lib\"" Dec 12 15:24:30 crc kubenswrapper[5123]: I1212 15:24:30.393883 5123 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-api\"/\"machine-api-operator-tls\"" Dec 12 15:24:30 crc kubenswrapper[5123]: I1212 15:24:30.421283 5123 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication-operator\"/\"serving-cert\"" Dec 12 15:24:30 crc kubenswrapper[5123]: I1212 15:24:30.425653 5123 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-apiserver-operator\"/\"openshift-apiserver-operator-serving-cert\"" Dec 12 15:24:30 crc kubenswrapper[5123]: I1212 
15:24:30.508917 5123 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-operator\"/\"iptables-alerter-script\"" Dec 12 15:24:30 crc kubenswrapper[5123]: I1212 15:24:30.555448 5123 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"olm-operator-serving-cert\"" Dec 12 15:24:30 crc kubenswrapper[5123]: I1212 15:24:30.590535 5123 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-apiserver-operator\"/\"kube-apiserver-operator-dockercfg-bf7fj\"" Dec 12 15:24:30 crc kubenswrapper[5123]: I1212 15:24:30.606917 5123 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-image-registry\"/\"image-registry-certificates\"" Dec 12 15:24:30 crc kubenswrapper[5123]: I1212 15:24:30.623184 5123 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress\"/\"kube-root-ca.crt\"" Dec 12 15:24:30 crc kubenswrapper[5123]: I1212 15:24:30.642713 5123 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-apiserver\"/\"etcd-client\"" Dec 12 15:24:30 crc kubenswrapper[5123]: I1212 15:24:30.670534 5123 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-storage-version-migrator-operator\"/\"config\"" Dec 12 15:24:30 crc kubenswrapper[5123]: I1212 15:24:30.694370 5123 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication-operator\"/\"authentication-operator-dockercfg-6tbpn\"" Dec 12 15:24:30 crc kubenswrapper[5123]: I1212 15:24:30.867587 5123 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-oauth-apiserver\"/\"etcd-serving-ca\"" Dec 12 15:24:30 crc kubenswrapper[5123]: I1212 15:24:30.902155 5123 patch_prober.go:28] interesting pod/machine-config-daemon-cs4j6 container/machine-config-daemon 
namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 12 15:24:30 crc kubenswrapper[5123]: I1212 15:24:30.902286 5123 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-cs4j6" podUID="cc1dd55e-ab31-4be3-acf1-ef01a53c1bb4" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 12 15:24:30 crc kubenswrapper[5123]: I1212 15:24:30.902380 5123 kubelet.go:2658] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-cs4j6" Dec 12 15:24:30 crc kubenswrapper[5123]: I1212 15:24:30.903204 5123 kuberuntime_manager.go:1107] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"65dc049b4db90d3b590a91a0ba963ce193c4d376d4171d75ddda499d4ad620ff"} pod="openshift-machine-config-operator/machine-config-daemon-cs4j6" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 12 15:24:30 crc kubenswrapper[5123]: I1212 15:24:30.903295 5123 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-cs4j6" podUID="cc1dd55e-ab31-4be3-acf1-ef01a53c1bb4" containerName="machine-config-daemon" containerID="cri-o://65dc049b4db90d3b590a91a0ba963ce193c4d376d4171d75ddda499d4ad620ff" gracePeriod=600 Dec 12 15:24:31 crc kubenswrapper[5123]: I1212 15:24:31.014432 5123 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-service-ca-operator\"/\"openshift-service-ca.crt\"" Dec 12 15:24:31 crc kubenswrapper[5123]: I1212 15:24:31.114646 5123 reflector.go:430] "Caches populated" type="*v1.ConfigMap" 
reflector="object-\"openshift-oauth-apiserver\"/\"trusted-ca-bundle\"" Dec 12 15:24:31 crc kubenswrapper[5123]: I1212 15:24:31.133103 5123 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"redhat-operators-dockercfg-9gxlh\"" Dec 12 15:24:31 crc kubenswrapper[5123]: I1212 15:24:31.180961 5123 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-machine-approver\"/\"openshift-service-ca.crt\"" Dec 12 15:24:31 crc kubenswrapper[5123]: I1212 15:24:31.292571 5123 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"cni-copy-resources\"" Dec 12 15:24:31 crc kubenswrapper[5123]: I1212 15:24:31.516021 5123 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-scheduler-operator\"/\"kube-scheduler-operator-serving-cert\"" Dec 12 15:24:31 crc kubenswrapper[5123]: I1212 15:24:31.527285 5123 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-apiserver\"/\"openshift-apiserver-sa-dockercfg-4zqgh\"" Dec 12 15:24:31 crc kubenswrapper[5123]: I1212 15:24:31.740682 5123 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"hostpath-provisioner\"/\"kube-root-ca.crt\"" Dec 12 15:24:31 crc kubenswrapper[5123]: I1212 15:24:31.789972 5123 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-config-operator_openshift-config-operator-5777786469-9j9pt_2c1e4fb9-bde9-46df-8ac0-c0b457ca767f/openshift-config-operator/0.log" Dec 12 15:24:31 crc kubenswrapper[5123]: I1212 15:24:31.790017 5123 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-config-operator_openshift-config-operator-5777786469-9j9pt_2c1e4fb9-bde9-46df-8ac0-c0b457ca767f/openshift-config-operator/0.log" Dec 12 15:24:31 crc kubenswrapper[5123]: I1212 15:24:31.824325 5123 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_9f0bc7fcb0822a2c13eb2d22cd8c0641/kube-controller-manager/0.log" Dec 12 15:24:31 crc kubenswrapper[5123]: I1212 15:24:31.824882 5123 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_9f0bc7fcb0822a2c13eb2d22cd8c0641/kube-controller-manager/0.log" Dec 12 15:24:31 crc kubenswrapper[5123]: I1212 15:24:31.876920 5123 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager-operator\"/\"openshift-controller-manager-operator-config\"" Dec 12 15:24:31 crc kubenswrapper[5123]: I1212 15:24:31.888838 5123 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress\"/\"service-ca-bundle\"" Dec 12 15:24:31 crc kubenswrapper[5123]: I1212 15:24:31.892558 5123 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-etcd-operator\"/\"etcd-service-ca-bundle\"" Dec 12 15:24:31 crc kubenswrapper[5123]: I1212 15:24:31.958903 5123 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-marketplace\"/\"marketplace-trusted-ca\"" Dec 12 15:24:32 crc kubenswrapper[5123]: I1212 15:24:32.025745 5123 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-machine-approver\"/\"kube-root-ca.crt\"" Dec 12 15:24:32 crc kubenswrapper[5123]: I1212 15:24:32.053624 5123 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"hostpath-provisioner\"/\"openshift-service-ca.crt\"" Dec 12 15:24:32 crc kubenswrapper[5123]: I1212 15:24:32.109151 5123 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress\"/\"openshift-service-ca.crt\"" Dec 12 15:24:32 crc kubenswrapper[5123]: I1212 15:24:32.130089 5123 reflector.go:430] "Caches populated" type="*v1.Secret" 
reflector="object-\"openshift-ingress-canary\"/\"default-dockercfg-9pgs7\"" Dec 12 15:24:32 crc kubenswrapper[5123]: I1212 15:24:32.145801 5123 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-machine-approver\"/\"machine-approver-config\"" Dec 12 15:24:32 crc kubenswrapper[5123]: I1212 15:24:32.233967 5123 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager\"/\"openshift-service-ca.crt\"" Dec 12 15:24:32 crc kubenswrapper[5123]: I1212 15:24:32.299576 5123 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-api\"/\"machine-api-operator-images\"" Dec 12 15:24:32 crc kubenswrapper[5123]: I1212 15:24:32.302614 5123 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console\"/\"oauth-serving-cert\"" Dec 12 15:24:32 crc kubenswrapper[5123]: I1212 15:24:32.370635 5123 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress\"/\"router-certs-default\"" Dec 12 15:24:32 crc kubenswrapper[5123]: I1212 15:24:32.381030 5123 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-route-controller-manager\"/\"client-ca\"" Dec 12 15:24:32 crc kubenswrapper[5123]: I1212 15:24:32.428585 5123 generic.go:358] "Generic (PLEG): container finished" podID="cc1dd55e-ab31-4be3-acf1-ef01a53c1bb4" containerID="65dc049b4db90d3b590a91a0ba963ce193c4d376d4171d75ddda499d4ad620ff" exitCode=0 Dec 12 15:24:32 crc kubenswrapper[5123]: I1212 15:24:32.428701 5123 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-cs4j6" event={"ID":"cc1dd55e-ab31-4be3-acf1-ef01a53c1bb4","Type":"ContainerDied","Data":"65dc049b4db90d3b590a91a0ba963ce193c4d376d4171d75ddda499d4ad620ff"} Dec 12 15:24:32 crc kubenswrapper[5123]: I1212 15:24:32.453779 5123 reflector.go:430] "Caches populated" type="*v1.ConfigMap" 
reflector="object-\"openshift-network-diagnostics\"/\"openshift-service-ca.crt\"" Dec 12 15:24:32 crc kubenswrapper[5123]: I1212 15:24:32.481263 5123 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-oauth-apiserver\"/\"openshift-service-ca.crt\"" Dec 12 15:24:32 crc kubenswrapper[5123]: I1212 15:24:32.493543 5123 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"multus-ancillary-tools-dockercfg-nwglk\"" Dec 12 15:24:32 crc kubenswrapper[5123]: I1212 15:24:32.506657 5123 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-service-ca\"/\"signing-key\"" Dec 12 15:24:32 crc kubenswrapper[5123]: I1212 15:24:32.549532 5123 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-config-operator\"/\"openshift-service-ca.crt\"" Dec 12 15:24:32 crc kubenswrapper[5123]: I1212 15:24:32.634980 5123 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ovn-kubernetes\"/\"ovn-control-plane-metrics-cert\"" Dec 12 15:24:32 crc kubenswrapper[5123]: I1212 15:24:32.646195 5123 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console-operator\"/\"console-operator-config\"" Dec 12 15:24:32 crc kubenswrapper[5123]: I1212 15:24:32.701001 5123 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager\"/\"client-ca\"" Dec 12 15:24:32 crc kubenswrapper[5123]: I1212 15:24:32.788607 5123 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-node-identity\"/\"ovnkube-identity-cm\"" Dec 12 15:24:32 crc kubenswrapper[5123]: I1212 15:24:32.788844 5123 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-operator-lifecycle-manager\"/\"kube-root-ca.crt\"" Dec 12 15:24:32 crc kubenswrapper[5123]: I1212 15:24:32.838740 5123 reflector.go:430] "Caches populated" 
type="*v1.Secret" reflector="object-\"openshift-ingress-operator\"/\"metrics-tls\"" Dec 12 15:24:32 crc kubenswrapper[5123]: I1212 15:24:32.855998 5123 provider.go:93] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Dec 12 15:24:32 crc kubenswrapper[5123]: I1212 15:24:32.908233 5123 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-version\"/\"kube-root-ca.crt\"" Dec 12 15:24:33 crc kubenswrapper[5123]: I1212 15:24:33.016091 5123 reflector.go:430] "Caches populated" logger="kubernetes.io/kubelet-serving" type="*v1.CertificateSigningRequest" reflector="k8s.io/client-go/tools/watch/informerwatcher.go:162" Dec 12 15:24:33 crc kubenswrapper[5123]: I1212 15:24:33.117499 5123 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-console\"/\"default-dockercfg-mdwwj\"" Dec 12 15:24:33 crc kubenswrapper[5123]: I1212 15:24:33.131746 5123 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver\"/\"trusted-ca-bundle\"" Dec 12 15:24:33 crc kubenswrapper[5123]: I1212 15:24:33.135276 5123 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress-canary\"/\"openshift-service-ca.crt\"" Dec 12 15:24:33 crc kubenswrapper[5123]: I1212 15:24:33.143431 5123 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-controller-manager-operator\"/\"openshift-controller-manager-operator-dockercfg-jcmfj\"" Dec 12 15:24:33 crc kubenswrapper[5123]: I1212 15:24:33.226171 5123 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"catalog-operator-serving-cert\"" Dec 12 15:24:33 crc kubenswrapper[5123]: I1212 15:24:33.322260 5123 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-version\"/\"openshift-service-ca.crt\"" Dec 12 15:24:33 crc kubenswrapper[5123]: I1212 15:24:33.424203 
5123 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-storage-version-migrator\"/\"kube-root-ca.crt\"" Dec 12 15:24:33 crc kubenswrapper[5123]: I1212 15:24:33.442560 5123 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-cs4j6" event={"ID":"cc1dd55e-ab31-4be3-acf1-ef01a53c1bb4","Type":"ContainerStarted","Data":"9a4b170656df051882c89f0434d221bcac3b53456e6fd91756cfb74e868ebd7d"} Dec 12 15:24:33 crc kubenswrapper[5123]: I1212 15:24:33.692635 5123 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication-operator\"/\"service-ca-bundle\"" Dec 12 15:24:33 crc kubenswrapper[5123]: I1212 15:24:33.749295 5123 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-operator\"/\"openshift-service-ca.crt\"" Dec 12 15:24:33 crc kubenswrapper[5123]: I1212 15:24:33.839244 5123 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-cluster-version\"/\"cluster-version-operator-serving-cert\"" Dec 12 15:24:33 crc kubenswrapper[5123]: I1212 15:24:33.858632 5123 reflector.go:430] "Caches populated" type="*v1.Pod" reflector="pkg/kubelet/config/apiserver.go:66" Dec 12 15:24:33 crc kubenswrapper[5123]: I1212 15:24:33.864081 5123 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-authentication/oauth-openshift-66458b6674-cqp44","openshift-kube-apiserver/kube-apiserver-crc"] Dec 12 15:24:33 crc kubenswrapper[5123]: I1212 15:24:33.864675 5123 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc","openshift-authentication/oauth-openshift-68557fff5c-6jtzs"] Dec 12 15:24:33 crc kubenswrapper[5123]: I1212 15:24:33.865300 5123 kubelet.go:3323] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="96b4a286-31bb-42a1-934a-56ea0da8024a" Dec 12 15:24:33 crc kubenswrapper[5123]: I1212 15:24:33.865327 
5123 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="96b4a286-31bb-42a1-934a-56ea0da8024a" Dec 12 15:24:33 crc kubenswrapper[5123]: I1212 15:24:33.865335 5123 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="688fe19e-6c1b-42c8-8245-da6b56af433f" containerName="installer" Dec 12 15:24:33 crc kubenswrapper[5123]: I1212 15:24:33.865356 5123 state_mem.go:107] "Deleted CPUSet assignment" podUID="688fe19e-6c1b-42c8-8245-da6b56af433f" containerName="installer" Dec 12 15:24:33 crc kubenswrapper[5123]: I1212 15:24:33.865371 5123 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="c4465de2-5e85-451d-a998-dcff71c6d37c" containerName="oauth-openshift" Dec 12 15:24:33 crc kubenswrapper[5123]: I1212 15:24:33.865377 5123 state_mem.go:107] "Deleted CPUSet assignment" podUID="c4465de2-5e85-451d-a998-dcff71c6d37c" containerName="oauth-openshift" Dec 12 15:24:33 crc kubenswrapper[5123]: I1212 15:24:33.865480 5123 memory_manager.go:356] "RemoveStaleState removing state" podUID="c4465de2-5e85-451d-a998-dcff71c6d37c" containerName="oauth-openshift" Dec 12 15:24:33 crc kubenswrapper[5123]: I1212 15:24:33.865491 5123 memory_manager.go:356] "RemoveStaleState removing state" podUID="688fe19e-6c1b-42c8-8245-da6b56af433f" containerName="installer" Dec 12 15:24:33 crc kubenswrapper[5123]: I1212 15:24:33.878170 5123 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-68557fff5c-6jtzs" Dec 12 15:24:33 crc kubenswrapper[5123]: I1212 15:24:33.881057 5123 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-user-template-error\"" Dec 12 15:24:33 crc kubenswrapper[5123]: I1212 15:24:33.881063 5123 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-user-idp-0-file-data\"" Dec 12 15:24:33 crc kubenswrapper[5123]: I1212 15:24:33.881770 5123 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication\"/\"kube-root-ca.crt\"" Dec 12 15:24:33 crc kubenswrapper[5123]: I1212 15:24:33.885774 5123 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 12 15:24:33 crc kubenswrapper[5123]: I1212 15:24:33.885986 5123 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-system-session\"" Dec 12 15:24:33 crc kubenswrapper[5123]: I1212 15:24:33.886038 5123 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication\"/\"openshift-service-ca.crt\"" Dec 12 15:24:33 crc kubenswrapper[5123]: I1212 15:24:33.885997 5123 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"oauth-openshift-dockercfg-d2bf2\"" Dec 12 15:24:33 crc kubenswrapper[5123]: I1212 15:24:33.886308 5123 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication\"/\"v4-0-config-system-service-ca\"" Dec 12 15:24:33 crc kubenswrapper[5123]: I1212 15:24:33.886417 5123 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-system-serving-cert\"" Dec 12 15:24:33 crc kubenswrapper[5123]: I1212 15:24:33.888467 5123 reflector.go:430] 
"Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-user-template-provider-selection\"" Dec 12 15:24:33 crc kubenswrapper[5123]: I1212 15:24:33.888529 5123 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-system-router-certs\"" Dec 12 15:24:33 crc kubenswrapper[5123]: I1212 15:24:33.888667 5123 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication\"/\"audit\"" Dec 12 15:24:33 crc kubenswrapper[5123]: I1212 15:24:33.888684 5123 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-user-template-login\"" Dec 12 15:24:33 crc kubenswrapper[5123]: I1212 15:24:33.888732 5123 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication\"/\"v4-0-config-system-cliconfig\"" Dec 12 15:24:33 crc kubenswrapper[5123]: I1212 15:24:33.898662 5123 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication\"/\"v4-0-config-system-trusted-ca-bundle\"" Dec 12 15:24:33 crc kubenswrapper[5123]: I1212 15:24:33.905157 5123 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-system-ocp-branding-template\"" Dec 12 15:24:33 crc kubenswrapper[5123]: I1212 15:24:33.927736 5123 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication-operator\"/\"trusted-ca-bundle\"" Dec 12 15:24:33 crc kubenswrapper[5123]: I1212 15:24:33.963163 5123 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=22.963136062 podStartE2EDuration="22.963136062s" podCreationTimestamp="2025-12-12 15:24:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" 
observedRunningTime="2025-12-12 15:24:33.939161717 +0000 UTC m=+302.749114238" watchObservedRunningTime="2025-12-12 15:24:33.963136062 +0000 UTC m=+302.773088573" Dec 12 15:24:33 crc kubenswrapper[5123]: I1212 15:24:33.974172 5123 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-console-operator\"/\"serving-cert\"" Dec 12 15:24:33 crc kubenswrapper[5123]: I1212 15:24:33.974619 5123 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"machine-config-controller-dockercfg-xnj77\"" Dec 12 15:24:33 crc kubenswrapper[5123]: I1212 15:24:33.992473 5123 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/d189b513-79ff-4e77-9d42-76c11d5c5d84-v4-0-config-system-router-certs\") pod \"oauth-openshift-68557fff5c-6jtzs\" (UID: \"d189b513-79ff-4e77-9d42-76c11d5c5d84\") " pod="openshift-authentication/oauth-openshift-68557fff5c-6jtzs" Dec 12 15:24:33 crc kubenswrapper[5123]: I1212 15:24:33.992525 5123 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/d189b513-79ff-4e77-9d42-76c11d5c5d84-v4-0-config-system-service-ca\") pod \"oauth-openshift-68557fff5c-6jtzs\" (UID: \"d189b513-79ff-4e77-9d42-76c11d5c5d84\") " pod="openshift-authentication/oauth-openshift-68557fff5c-6jtzs" Dec 12 15:24:33 crc kubenswrapper[5123]: I1212 15:24:33.992563 5123 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/d189b513-79ff-4e77-9d42-76c11d5c5d84-audit-policies\") pod \"oauth-openshift-68557fff5c-6jtzs\" (UID: \"d189b513-79ff-4e77-9d42-76c11d5c5d84\") " pod="openshift-authentication/oauth-openshift-68557fff5c-6jtzs" Dec 12 15:24:33 crc kubenswrapper[5123]: I1212 15:24:33.992607 
5123 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/d189b513-79ff-4e77-9d42-76c11d5c5d84-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-68557fff5c-6jtzs\" (UID: \"d189b513-79ff-4e77-9d42-76c11d5c5d84\") " pod="openshift-authentication/oauth-openshift-68557fff5c-6jtzs" Dec 12 15:24:33 crc kubenswrapper[5123]: I1212 15:24:33.992657 5123 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/d189b513-79ff-4e77-9d42-76c11d5c5d84-v4-0-config-system-session\") pod \"oauth-openshift-68557fff5c-6jtzs\" (UID: \"d189b513-79ff-4e77-9d42-76c11d5c5d84\") " pod="openshift-authentication/oauth-openshift-68557fff5c-6jtzs" Dec 12 15:24:33 crc kubenswrapper[5123]: I1212 15:24:33.992689 5123 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/d189b513-79ff-4e77-9d42-76c11d5c5d84-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-68557fff5c-6jtzs\" (UID: \"d189b513-79ff-4e77-9d42-76c11d5c5d84\") " pod="openshift-authentication/oauth-openshift-68557fff5c-6jtzs" Dec 12 15:24:33 crc kubenswrapper[5123]: I1212 15:24:33.992953 5123 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/d189b513-79ff-4e77-9d42-76c11d5c5d84-v4-0-config-user-template-login\") pod \"oauth-openshift-68557fff5c-6jtzs\" (UID: \"d189b513-79ff-4e77-9d42-76c11d5c5d84\") " pod="openshift-authentication/oauth-openshift-68557fff5c-6jtzs" Dec 12 15:24:33 crc kubenswrapper[5123]: I1212 15:24:33.993156 5123 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/d189b513-79ff-4e77-9d42-76c11d5c5d84-v4-0-config-system-serving-cert\") pod \"oauth-openshift-68557fff5c-6jtzs\" (UID: \"d189b513-79ff-4e77-9d42-76c11d5c5d84\") " pod="openshift-authentication/oauth-openshift-68557fff5c-6jtzs" Dec 12 15:24:33 crc kubenswrapper[5123]: I1212 15:24:33.993265 5123 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/d189b513-79ff-4e77-9d42-76c11d5c5d84-v4-0-config-user-template-error\") pod \"oauth-openshift-68557fff5c-6jtzs\" (UID: \"d189b513-79ff-4e77-9d42-76c11d5c5d84\") " pod="openshift-authentication/oauth-openshift-68557fff5c-6jtzs" Dec 12 15:24:33 crc kubenswrapper[5123]: I1212 15:24:33.993358 5123 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/d189b513-79ff-4e77-9d42-76c11d5c5d84-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-68557fff5c-6jtzs\" (UID: \"d189b513-79ff-4e77-9d42-76c11d5c5d84\") " pod="openshift-authentication/oauth-openshift-68557fff5c-6jtzs" Dec 12 15:24:33 crc kubenswrapper[5123]: I1212 15:24:33.993444 5123 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/d189b513-79ff-4e77-9d42-76c11d5c5d84-v4-0-config-system-cliconfig\") pod \"oauth-openshift-68557fff5c-6jtzs\" (UID: \"d189b513-79ff-4e77-9d42-76c11d5c5d84\") " pod="openshift-authentication/oauth-openshift-68557fff5c-6jtzs" Dec 12 15:24:33 crc kubenswrapper[5123]: I1212 15:24:33.993616 5123 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k4cj2\" (UniqueName: \"kubernetes.io/projected/d189b513-79ff-4e77-9d42-76c11d5c5d84-kube-api-access-k4cj2\") pod 
\"oauth-openshift-68557fff5c-6jtzs\" (UID: \"d189b513-79ff-4e77-9d42-76c11d5c5d84\") " pod="openshift-authentication/oauth-openshift-68557fff5c-6jtzs" Dec 12 15:24:33 crc kubenswrapper[5123]: I1212 15:24:33.993750 5123 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/d189b513-79ff-4e77-9d42-76c11d5c5d84-audit-dir\") pod \"oauth-openshift-68557fff5c-6jtzs\" (UID: \"d189b513-79ff-4e77-9d42-76c11d5c5d84\") " pod="openshift-authentication/oauth-openshift-68557fff5c-6jtzs" Dec 12 15:24:33 crc kubenswrapper[5123]: I1212 15:24:33.993919 5123 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d189b513-79ff-4e77-9d42-76c11d5c5d84-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-68557fff5c-6jtzs\" (UID: \"d189b513-79ff-4e77-9d42-76c11d5c5d84\") " pod="openshift-authentication/oauth-openshift-68557fff5c-6jtzs" Dec 12 15:24:34 crc kubenswrapper[5123]: I1212 15:24:34.095917 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/d189b513-79ff-4e77-9d42-76c11d5c5d84-v4-0-config-system-router-certs\") pod \"oauth-openshift-68557fff5c-6jtzs\" (UID: \"d189b513-79ff-4e77-9d42-76c11d5c5d84\") " pod="openshift-authentication/oauth-openshift-68557fff5c-6jtzs" Dec 12 15:24:34 crc kubenswrapper[5123]: I1212 15:24:34.095997 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/d189b513-79ff-4e77-9d42-76c11d5c5d84-v4-0-config-system-service-ca\") pod \"oauth-openshift-68557fff5c-6jtzs\" (UID: \"d189b513-79ff-4e77-9d42-76c11d5c5d84\") " pod="openshift-authentication/oauth-openshift-68557fff5c-6jtzs" Dec 12 15:24:34 crc kubenswrapper[5123]: I1212 15:24:34.096032 5123 
reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/d189b513-79ff-4e77-9d42-76c11d5c5d84-audit-policies\") pod \"oauth-openshift-68557fff5c-6jtzs\" (UID: \"d189b513-79ff-4e77-9d42-76c11d5c5d84\") " pod="openshift-authentication/oauth-openshift-68557fff5c-6jtzs" Dec 12 15:24:34 crc kubenswrapper[5123]: I1212 15:24:34.096065 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/d189b513-79ff-4e77-9d42-76c11d5c5d84-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-68557fff5c-6jtzs\" (UID: \"d189b513-79ff-4e77-9d42-76c11d5c5d84\") " pod="openshift-authentication/oauth-openshift-68557fff5c-6jtzs" Dec 12 15:24:34 crc kubenswrapper[5123]: I1212 15:24:34.096114 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/d189b513-79ff-4e77-9d42-76c11d5c5d84-v4-0-config-system-session\") pod \"oauth-openshift-68557fff5c-6jtzs\" (UID: \"d189b513-79ff-4e77-9d42-76c11d5c5d84\") " pod="openshift-authentication/oauth-openshift-68557fff5c-6jtzs" Dec 12 15:24:34 crc kubenswrapper[5123]: I1212 15:24:34.096142 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/d189b513-79ff-4e77-9d42-76c11d5c5d84-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-68557fff5c-6jtzs\" (UID: \"d189b513-79ff-4e77-9d42-76c11d5c5d84\") " pod="openshift-authentication/oauth-openshift-68557fff5c-6jtzs" Dec 12 15:24:34 crc kubenswrapper[5123]: I1212 15:24:34.096170 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/d189b513-79ff-4e77-9d42-76c11d5c5d84-v4-0-config-user-template-login\") pod 
\"oauth-openshift-68557fff5c-6jtzs\" (UID: \"d189b513-79ff-4e77-9d42-76c11d5c5d84\") " pod="openshift-authentication/oauth-openshift-68557fff5c-6jtzs" Dec 12 15:24:34 crc kubenswrapper[5123]: I1212 15:24:34.096206 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/d189b513-79ff-4e77-9d42-76c11d5c5d84-v4-0-config-system-serving-cert\") pod \"oauth-openshift-68557fff5c-6jtzs\" (UID: \"d189b513-79ff-4e77-9d42-76c11d5c5d84\") " pod="openshift-authentication/oauth-openshift-68557fff5c-6jtzs" Dec 12 15:24:34 crc kubenswrapper[5123]: I1212 15:24:34.096260 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/d189b513-79ff-4e77-9d42-76c11d5c5d84-v4-0-config-user-template-error\") pod \"oauth-openshift-68557fff5c-6jtzs\" (UID: \"d189b513-79ff-4e77-9d42-76c11d5c5d84\") " pod="openshift-authentication/oauth-openshift-68557fff5c-6jtzs" Dec 12 15:24:34 crc kubenswrapper[5123]: I1212 15:24:34.096280 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/d189b513-79ff-4e77-9d42-76c11d5c5d84-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-68557fff5c-6jtzs\" (UID: \"d189b513-79ff-4e77-9d42-76c11d5c5d84\") " pod="openshift-authentication/oauth-openshift-68557fff5c-6jtzs" Dec 12 15:24:34 crc kubenswrapper[5123]: I1212 15:24:34.096303 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/d189b513-79ff-4e77-9d42-76c11d5c5d84-v4-0-config-system-cliconfig\") pod \"oauth-openshift-68557fff5c-6jtzs\" (UID: \"d189b513-79ff-4e77-9d42-76c11d5c5d84\") " pod="openshift-authentication/oauth-openshift-68557fff5c-6jtzs" Dec 12 15:24:34 crc kubenswrapper[5123]: I1212 15:24:34.096347 5123 
reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-k4cj2\" (UniqueName: \"kubernetes.io/projected/d189b513-79ff-4e77-9d42-76c11d5c5d84-kube-api-access-k4cj2\") pod \"oauth-openshift-68557fff5c-6jtzs\" (UID: \"d189b513-79ff-4e77-9d42-76c11d5c5d84\") " pod="openshift-authentication/oauth-openshift-68557fff5c-6jtzs" Dec 12 15:24:34 crc kubenswrapper[5123]: I1212 15:24:34.096371 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/d189b513-79ff-4e77-9d42-76c11d5c5d84-audit-dir\") pod \"oauth-openshift-68557fff5c-6jtzs\" (UID: \"d189b513-79ff-4e77-9d42-76c11d5c5d84\") " pod="openshift-authentication/oauth-openshift-68557fff5c-6jtzs" Dec 12 15:24:34 crc kubenswrapper[5123]: I1212 15:24:34.096415 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d189b513-79ff-4e77-9d42-76c11d5c5d84-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-68557fff5c-6jtzs\" (UID: \"d189b513-79ff-4e77-9d42-76c11d5c5d84\") " pod="openshift-authentication/oauth-openshift-68557fff5c-6jtzs" Dec 12 15:24:34 crc kubenswrapper[5123]: I1212 15:24:34.097516 5123 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d189b513-79ff-4e77-9d42-76c11d5c5d84-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-68557fff5c-6jtzs\" (UID: \"d189b513-79ff-4e77-9d42-76c11d5c5d84\") " pod="openshift-authentication/oauth-openshift-68557fff5c-6jtzs" Dec 12 15:24:34 crc kubenswrapper[5123]: I1212 15:24:34.099727 5123 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/d189b513-79ff-4e77-9d42-76c11d5c5d84-v4-0-config-system-service-ca\") pod \"oauth-openshift-68557fff5c-6jtzs\" (UID: 
\"d189b513-79ff-4e77-9d42-76c11d5c5d84\") " pod="openshift-authentication/oauth-openshift-68557fff5c-6jtzs" Dec 12 15:24:34 crc kubenswrapper[5123]: I1212 15:24:34.099935 5123 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/d189b513-79ff-4e77-9d42-76c11d5c5d84-v4-0-config-system-cliconfig\") pod \"oauth-openshift-68557fff5c-6jtzs\" (UID: \"d189b513-79ff-4e77-9d42-76c11d5c5d84\") " pod="openshift-authentication/oauth-openshift-68557fff5c-6jtzs" Dec 12 15:24:34 crc kubenswrapper[5123]: I1212 15:24:34.100033 5123 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/d189b513-79ff-4e77-9d42-76c11d5c5d84-audit-dir\") pod \"oauth-openshift-68557fff5c-6jtzs\" (UID: \"d189b513-79ff-4e77-9d42-76c11d5c5d84\") " pod="openshift-authentication/oauth-openshift-68557fff5c-6jtzs" Dec 12 15:24:34 crc kubenswrapper[5123]: I1212 15:24:34.100524 5123 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/d189b513-79ff-4e77-9d42-76c11d5c5d84-audit-policies\") pod \"oauth-openshift-68557fff5c-6jtzs\" (UID: \"d189b513-79ff-4e77-9d42-76c11d5c5d84\") " pod="openshift-authentication/oauth-openshift-68557fff5c-6jtzs" Dec 12 15:24:34 crc kubenswrapper[5123]: I1212 15:24:34.103544 5123 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/d189b513-79ff-4e77-9d42-76c11d5c5d84-v4-0-config-user-template-error\") pod \"oauth-openshift-68557fff5c-6jtzs\" (UID: \"d189b513-79ff-4e77-9d42-76c11d5c5d84\") " pod="openshift-authentication/oauth-openshift-68557fff5c-6jtzs" Dec 12 15:24:34 crc kubenswrapper[5123]: I1212 15:24:34.103870 5123 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: 
\"kubernetes.io/secret/d189b513-79ff-4e77-9d42-76c11d5c5d84-v4-0-config-user-template-login\") pod \"oauth-openshift-68557fff5c-6jtzs\" (UID: \"d189b513-79ff-4e77-9d42-76c11d5c5d84\") " pod="openshift-authentication/oauth-openshift-68557fff5c-6jtzs" Dec 12 15:24:34 crc kubenswrapper[5123]: I1212 15:24:34.107439 5123 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/d189b513-79ff-4e77-9d42-76c11d5c5d84-v4-0-config-system-serving-cert\") pod \"oauth-openshift-68557fff5c-6jtzs\" (UID: \"d189b513-79ff-4e77-9d42-76c11d5c5d84\") " pod="openshift-authentication/oauth-openshift-68557fff5c-6jtzs" Dec 12 15:24:34 crc kubenswrapper[5123]: I1212 15:24:34.107762 5123 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/d189b513-79ff-4e77-9d42-76c11d5c5d84-v4-0-config-system-session\") pod \"oauth-openshift-68557fff5c-6jtzs\" (UID: \"d189b513-79ff-4e77-9d42-76c11d5c5d84\") " pod="openshift-authentication/oauth-openshift-68557fff5c-6jtzs" Dec 12 15:24:34 crc kubenswrapper[5123]: I1212 15:24:34.108615 5123 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/d189b513-79ff-4e77-9d42-76c11d5c5d84-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-68557fff5c-6jtzs\" (UID: \"d189b513-79ff-4e77-9d42-76c11d5c5d84\") " pod="openshift-authentication/oauth-openshift-68557fff5c-6jtzs" Dec 12 15:24:34 crc kubenswrapper[5123]: I1212 15:24:34.109024 5123 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/d189b513-79ff-4e77-9d42-76c11d5c5d84-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-68557fff5c-6jtzs\" (UID: \"d189b513-79ff-4e77-9d42-76c11d5c5d84\") " 
pod="openshift-authentication/oauth-openshift-68557fff5c-6jtzs" Dec 12 15:24:34 crc kubenswrapper[5123]: I1212 15:24:34.109496 5123 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/d189b513-79ff-4e77-9d42-76c11d5c5d84-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-68557fff5c-6jtzs\" (UID: \"d189b513-79ff-4e77-9d42-76c11d5c5d84\") " pod="openshift-authentication/oauth-openshift-68557fff5c-6jtzs" Dec 12 15:24:34 crc kubenswrapper[5123]: I1212 15:24:34.116828 5123 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/d189b513-79ff-4e77-9d42-76c11d5c5d84-v4-0-config-system-router-certs\") pod \"oauth-openshift-68557fff5c-6jtzs\" (UID: \"d189b513-79ff-4e77-9d42-76c11d5c5d84\") " pod="openshift-authentication/oauth-openshift-68557fff5c-6jtzs" Dec 12 15:24:34 crc kubenswrapper[5123]: I1212 15:24:34.124706 5123 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-k4cj2\" (UniqueName: \"kubernetes.io/projected/d189b513-79ff-4e77-9d42-76c11d5c5d84-kube-api-access-k4cj2\") pod \"oauth-openshift-68557fff5c-6jtzs\" (UID: \"d189b513-79ff-4e77-9d42-76c11d5c5d84\") " pod="openshift-authentication/oauth-openshift-68557fff5c-6jtzs" Dec 12 15:24:34 crc kubenswrapper[5123]: I1212 15:24:34.125088 5123 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"node-bootstrapper-token\"" Dec 12 15:24:34 crc kubenswrapper[5123]: I1212 15:24:34.128170 5123 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"redhat-marketplace-dockercfg-gg4w7\"" Dec 12 15:24:34 crc kubenswrapper[5123]: I1212 15:24:34.149814 5123 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-dns\"/\"dns-default\"" Dec 12 15:24:34 crc 
kubenswrapper[5123]: I1212 15:24:34.156183 5123 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"mcc-proxy-tls\"" Dec 12 15:24:34 crc kubenswrapper[5123]: I1212 15:24:34.200963 5123 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-68557fff5c-6jtzs" Dec 12 15:24:34 crc kubenswrapper[5123]: I1212 15:24:34.484119 5123 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console\"/\"service-ca\"" Dec 12 15:24:34 crc kubenswrapper[5123]: I1212 15:24:34.484622 5123 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-scheduler-operator\"/\"kube-root-ca.crt\"" Dec 12 15:24:34 crc kubenswrapper[5123]: I1212 15:24:34.484761 5123 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-etcd-operator\"/\"etcd-operator-dockercfg-4vdnc\"" Dec 12 15:24:34 crc kubenswrapper[5123]: I1212 15:24:34.485097 5123 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-network-node-identity\"/\"network-node-identity-cert\"" Dec 12 15:24:34 crc kubenswrapper[5123]: I1212 15:24:34.485614 5123 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-marketplace\"/\"openshift-service-ca.crt\"" Dec 12 15:24:34 crc kubenswrapper[5123]: I1212 15:24:34.522819 5123 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-etcd-operator\"/\"etcd-operator-config\"" Dec 12 15:24:34 crc kubenswrapper[5123]: I1212 15:24:34.525379 5123 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-scheduler-operator\"/\"openshift-kube-scheduler-operator-dockercfg-2wbn2\"" Dec 12 15:24:34 crc kubenswrapper[5123]: I1212 15:24:34.631833 5123 reflector.go:430] "Caches populated" type="*v1.Secret" 
reflector="object-\"openshift-marketplace\"/\"marketplace-operator-metrics\"" Dec 12 15:24:34 crc kubenswrapper[5123]: I1212 15:24:34.678620 5123 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-68557fff5c-6jtzs"] Dec 12 15:24:34 crc kubenswrapper[5123]: W1212 15:24:34.690977 5123 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd189b513_79ff_4e77_9d42_76c11d5c5d84.slice/crio-8486df9ae79d8a8a9e57a935132e8be576e184cd127df3ed6264d01ec884c59f WatchSource:0}: Error finding container 8486df9ae79d8a8a9e57a935132e8be576e184cd127df3ed6264d01ec884c59f: Status 404 returned error can't find the container with id 8486df9ae79d8a8a9e57a935132e8be576e184cd127df3ed6264d01ec884c59f Dec 12 15:24:34 crc kubenswrapper[5123]: I1212 15:24:34.743239 5123 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-oauth-apiserver\"/\"audit-1\"" Dec 12 15:24:34 crc kubenswrapper[5123]: I1212 15:24:34.930675 5123 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-node-identity\"/\"kube-root-ca.crt\"" Dec 12 15:24:34 crc kubenswrapper[5123]: I1212 15:24:34.961085 5123 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-service-ca\"/\"openshift-service-ca.crt\"" Dec 12 15:24:34 crc kubenswrapper[5123]: I1212 15:24:34.985601 5123 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-controller-manager\"/\"openshift-controller-manager-sa-dockercfg-djmfg\"" Dec 12 15:24:35 crc kubenswrapper[5123]: I1212 15:24:35.021469 5123 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"packageserver-service-cert\"" Dec 12 15:24:35 crc kubenswrapper[5123]: I1212 15:24:35.156123 5123 reflector.go:430] "Caches populated" type="*v1.Secret" 
reflector="object-\"openshift-machine-api\"/\"control-plane-machine-set-operator-dockercfg-gnx66\"" Dec 12 15:24:35 crc kubenswrapper[5123]: I1212 15:24:35.206499 5123 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-cluster-version\"/\"default-dockercfg-hqpm5\"" Dec 12 15:24:35 crc kubenswrapper[5123]: I1212 15:24:35.219483 5123 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-controller-manager\"/\"serving-cert\"" Dec 12 15:24:35 crc kubenswrapper[5123]: I1212 15:24:35.247547 5123 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-samples-operator\"/\"openshift-service-ca.crt\"" Dec 12 15:24:35 crc kubenswrapper[5123]: I1212 15:24:35.308431 5123 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"machine-config-operator-dockercfg-sw6nc\"" Dec 12 15:24:35 crc kubenswrapper[5123]: I1212 15:24:35.374742 5123 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"default-cni-sysctl-allowlist\"" Dec 12 15:24:35 crc kubenswrapper[5123]: I1212 15:24:35.425089 5123 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress-canary\"/\"canary-serving-cert\"" Dec 12 15:24:35 crc kubenswrapper[5123]: I1212 15:24:35.512106 5123 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-68557fff5c-6jtzs" event={"ID":"d189b513-79ff-4e77-9d42-76c11d5c5d84","Type":"ContainerStarted","Data":"e997c9f4274be0105ee33df145028120ca188a283943f15ae87ba776cbdf010f"} Dec 12 15:24:35 crc kubenswrapper[5123]: I1212 15:24:35.512195 5123 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-68557fff5c-6jtzs" event={"ID":"d189b513-79ff-4e77-9d42-76c11d5c5d84","Type":"ContainerStarted","Data":"8486df9ae79d8a8a9e57a935132e8be576e184cd127df3ed6264d01ec884c59f"} Dec 12 
15:24:35 crc kubenswrapper[5123]: I1212 15:24:35.512836 5123 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-authentication/oauth-openshift-68557fff5c-6jtzs" Dec 12 15:24:35 crc kubenswrapper[5123]: I1212 15:24:35.517202 5123 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-route-controller-manager\"/\"openshift-service-ca.crt\"" Dec 12 15:24:35 crc kubenswrapper[5123]: I1212 15:24:35.648924 5123 reflector.go:430] "Caches populated" type="*v1.Service" reflector="k8s.io/client-go/informers/factory.go:160" Dec 12 15:24:35 crc kubenswrapper[5123]: I1212 15:24:35.649132 5123 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c4465de2-5e85-451d-a998-dcff71c6d37c" path="/var/lib/kubelet/pods/c4465de2-5e85-451d-a998-dcff71c6d37c/volumes" Dec 12 15:24:35 crc kubenswrapper[5123]: I1212 15:24:35.696272 5123 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-config-operator\"/\"openshift-config-operator-dockercfg-sjn6s\"" Dec 12 15:24:35 crc kubenswrapper[5123]: I1212 15:24:35.748105 5123 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress\"/\"router-stats-default\"" Dec 12 15:24:36 crc kubenswrapper[5123]: I1212 15:24:36.017492 5123 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-etcd-operator\"/\"etcd-ca-bundle\"" Dec 12 15:24:36 crc kubenswrapper[5123]: I1212 15:24:36.045570 5123 ???:1] "http: TLS handshake error from 192.168.126.11:39154: no serving certificate available for the kubelet" Dec 12 15:24:36 crc kubenswrapper[5123]: I1212 15:24:36.134441 5123 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-etcd-operator\"/\"etcd-operator-serving-cert\"" Dec 12 15:24:36 crc kubenswrapper[5123]: I1212 15:24:36.153098 5123 reflector.go:430] "Caches populated" type="*v1.ConfigMap" 
reflector="object-\"openshift-machine-config-operator\"/\"machine-config-operator-images\"" Dec 12 15:24:36 crc kubenswrapper[5123]: I1212 15:24:36.442752 5123 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-config-operator\"/\"openshift-service-ca.crt\"" Dec 12 15:24:36 crc kubenswrapper[5123]: I1212 15:24:36.459096 5123 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-storage-version-migrator-operator\"/\"kube-root-ca.crt\"" Dec 12 15:24:36 crc kubenswrapper[5123]: I1212 15:24:36.484141 5123 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-68557fff5c-6jtzs" Dec 12 15:24:36 crc kubenswrapper[5123]: I1212 15:24:36.486163 5123 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-controller-manager-operator\"/\"kube-controller-manager-operator-dockercfg-tnfx9\"" Dec 12 15:24:36 crc kubenswrapper[5123]: I1212 15:24:36.513049 5123 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-68557fff5c-6jtzs" podStartSLOduration=54.513021208 podStartE2EDuration="54.513021208s" podCreationTimestamp="2025-12-12 15:23:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 15:24:35.536513796 +0000 UTC m=+304.346466317" watchObservedRunningTime="2025-12-12 15:24:36.513021208 +0000 UTC m=+305.322973729" Dec 12 15:24:36 crc kubenswrapper[5123]: I1212 15:24:36.623367 5123 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-cluster-samples-operator\"/\"cluster-samples-operator-dockercfg-jmhxf\"" Dec 12 15:24:36 crc kubenswrapper[5123]: I1212 15:24:36.692721 5123 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-api\"/\"control-plane-machine-set-operator-tls\"" Dec 12 
15:24:36 crc kubenswrapper[5123]: I1212 15:24:36.695373 5123 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager-operator\"/\"kube-root-ca.crt\"" Dec 12 15:24:36 crc kubenswrapper[5123]: I1212 15:24:36.701385 5123 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-storage-version-migrator-operator\"/\"serving-cert\"" Dec 12 15:24:36 crc kubenswrapper[5123]: I1212 15:24:36.770038 5123 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress-canary\"/\"kube-root-ca.crt\"" Dec 12 15:24:36 crc kubenswrapper[5123]: I1212 15:24:36.850181 5123 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-service-ca-operator\"/\"service-ca-operator-config\"" Dec 12 15:24:36 crc kubenswrapper[5123]: I1212 15:24:36.862791 5123 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-oauth-apiserver\"/\"encryption-config-1\"" Dec 12 15:24:36 crc kubenswrapper[5123]: I1212 15:24:36.872828 5123 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-storage-version-migrator-operator\"/\"openshift-service-ca.crt\"" Dec 12 15:24:37 crc kubenswrapper[5123]: I1212 15:24:37.070037 5123 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-service-ca\"/\"kube-root-ca.crt\"" Dec 12 15:24:37 crc kubenswrapper[5123]: I1212 15:24:37.182860 5123 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-oauth-apiserver\"/\"serving-cert\"" Dec 12 15:24:37 crc kubenswrapper[5123]: I1212 15:24:37.279709 5123 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"metrics-daemon-secret\"" Dec 12 15:24:37 crc kubenswrapper[5123]: I1212 15:24:37.405951 5123 reflector.go:430] "Caches populated" type="*v1.Secret" 
reflector="object-\"openshift-service-ca\"/\"service-ca-dockercfg-bgxvm\"" Dec 12 15:24:37 crc kubenswrapper[5123]: I1212 15:24:37.512338 5123 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"machine-config-server-tls\"" Dec 12 15:24:37 crc kubenswrapper[5123]: I1212 15:24:37.672430 5123 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 12 15:24:37 crc kubenswrapper[5123]: I1212 15:24:37.677927 5123 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 12 15:24:37 crc kubenswrapper[5123]: I1212 15:24:37.774501 5123 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-config-operator\"/\"kube-root-ca.crt\"" Dec 12 15:24:37 crc kubenswrapper[5123]: I1212 15:24:37.844941 5123 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress-operator\"/\"ingress-operator-dockercfg-74nwh\"" Dec 12 15:24:38 crc kubenswrapper[5123]: I1212 15:24:38.988804 5123 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"kube-root-ca.crt\"" Dec 12 15:24:43 crc kubenswrapper[5123]: I1212 15:24:43.473985 5123 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-5d49859f95-pcm7k"] Dec 12 15:24:43 crc kubenswrapper[5123]: I1212 15:24:43.475046 5123 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-5d49859f95-pcm7k" podUID="01a8c257-f895-4044-aec0-ea9cb012126e" containerName="controller-manager" containerID="cri-o://3ce61d3628184b4371570c7bfee551c47ad928ce8167aefb50b6088777c6202a" gracePeriod=30 Dec 12 15:24:43 crc kubenswrapper[5123]: I1212 15:24:43.485820 5123 kubelet.go:2553] "SyncLoop DELETE" source="api" 
pods=["openshift-route-controller-manager/route-controller-manager-7bc9d579c5-4pc84"] Dec 12 15:24:43 crc kubenswrapper[5123]: I1212 15:24:43.486538 5123 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-7bc9d579c5-4pc84" podUID="3ef6d6fb-460e-4015-8298-ec1d5a47e5f5" containerName="route-controller-manager" containerID="cri-o://58cbbb1b00f8f0dd1a7148bd3e3781f3886425883069c8053780639e3ac39e14" gracePeriod=30 Dec 12 15:24:44 crc kubenswrapper[5123]: I1212 15:24:44.342560 5123 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7bc9d579c5-4pc84" Dec 12 15:24:44 crc kubenswrapper[5123]: I1212 15:24:44.407911 5123 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-648f5757c8-bzrzr"] Dec 12 15:24:44 crc kubenswrapper[5123]: I1212 15:24:44.408884 5123 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="3ef6d6fb-460e-4015-8298-ec1d5a47e5f5" containerName="route-controller-manager" Dec 12 15:24:44 crc kubenswrapper[5123]: I1212 15:24:44.408907 5123 state_mem.go:107] "Deleted CPUSet assignment" podUID="3ef6d6fb-460e-4015-8298-ec1d5a47e5f5" containerName="route-controller-manager" Dec 12 15:24:44 crc kubenswrapper[5123]: I1212 15:24:44.409009 5123 memory_manager.go:356] "RemoveStaleState removing state" podUID="3ef6d6fb-460e-4015-8298-ec1d5a47e5f5" containerName="route-controller-manager" Dec 12 15:24:44 crc kubenswrapper[5123]: I1212 15:24:44.424427 5123 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-648f5757c8-bzrzr" Dec 12 15:24:44 crc kubenswrapper[5123]: I1212 15:24:44.424673 5123 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-648f5757c8-bzrzr"] Dec 12 15:24:44 crc kubenswrapper[5123]: I1212 15:24:44.450538 5123 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-5d49859f95-pcm7k" Dec 12 15:24:44 crc kubenswrapper[5123]: I1212 15:24:44.525454 5123 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-5f9ccc8bd6-jfwtd"] Dec 12 15:24:44 crc kubenswrapper[5123]: I1212 15:24:44.526789 5123 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="01a8c257-f895-4044-aec0-ea9cb012126e" containerName="controller-manager" Dec 12 15:24:44 crc kubenswrapper[5123]: I1212 15:24:44.526824 5123 state_mem.go:107] "Deleted CPUSet assignment" podUID="01a8c257-f895-4044-aec0-ea9cb012126e" containerName="controller-manager" Dec 12 15:24:44 crc kubenswrapper[5123]: I1212 15:24:44.527055 5123 memory_manager.go:356] "RemoveStaleState removing state" podUID="01a8c257-f895-4044-aec0-ea9cb012126e" containerName="controller-manager" Dec 12 15:24:44 crc kubenswrapper[5123]: I1212 15:24:44.533926 5123 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-5f9ccc8bd6-jfwtd" Dec 12 15:24:44 crc kubenswrapper[5123]: I1212 15:24:44.536150 5123 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/3ef6d6fb-460e-4015-8298-ec1d5a47e5f5-tmp\") pod \"3ef6d6fb-460e-4015-8298-ec1d5a47e5f5\" (UID: \"3ef6d6fb-460e-4015-8298-ec1d5a47e5f5\") " Dec 12 15:24:44 crc kubenswrapper[5123]: I1212 15:24:44.536353 5123 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/3ef6d6fb-460e-4015-8298-ec1d5a47e5f5-client-ca\") pod \"3ef6d6fb-460e-4015-8298-ec1d5a47e5f5\" (UID: \"3ef6d6fb-460e-4015-8298-ec1d5a47e5f5\") " Dec 12 15:24:44 crc kubenswrapper[5123]: I1212 15:24:44.536418 5123 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3ef6d6fb-460e-4015-8298-ec1d5a47e5f5-config\") pod \"3ef6d6fb-460e-4015-8298-ec1d5a47e5f5\" (UID: \"3ef6d6fb-460e-4015-8298-ec1d5a47e5f5\") " Dec 12 15:24:44 crc kubenswrapper[5123]: I1212 15:24:44.536443 5123 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/01a8c257-f895-4044-aec0-ea9cb012126e-proxy-ca-bundles\") pod \"01a8c257-f895-4044-aec0-ea9cb012126e\" (UID: \"01a8c257-f895-4044-aec0-ea9cb012126e\") " Dec 12 15:24:44 crc kubenswrapper[5123]: I1212 15:24:44.536468 5123 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/01a8c257-f895-4044-aec0-ea9cb012126e-tmp\") pod \"01a8c257-f895-4044-aec0-ea9cb012126e\" (UID: \"01a8c257-f895-4044-aec0-ea9cb012126e\") " Dec 12 15:24:44 crc kubenswrapper[5123]: I1212 15:24:44.536486 5123 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g8qp6\" (UniqueName: 
\"kubernetes.io/projected/3ef6d6fb-460e-4015-8298-ec1d5a47e5f5-kube-api-access-g8qp6\") pod \"3ef6d6fb-460e-4015-8298-ec1d5a47e5f5\" (UID: \"3ef6d6fb-460e-4015-8298-ec1d5a47e5f5\") " Dec 12 15:24:44 crc kubenswrapper[5123]: I1212 15:24:44.536530 5123 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3ef6d6fb-460e-4015-8298-ec1d5a47e5f5-serving-cert\") pod \"3ef6d6fb-460e-4015-8298-ec1d5a47e5f5\" (UID: \"3ef6d6fb-460e-4015-8298-ec1d5a47e5f5\") " Dec 12 15:24:44 crc kubenswrapper[5123]: I1212 15:24:44.536556 5123 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/01a8c257-f895-4044-aec0-ea9cb012126e-client-ca\") pod \"01a8c257-f895-4044-aec0-ea9cb012126e\" (UID: \"01a8c257-f895-4044-aec0-ea9cb012126e\") " Dec 12 15:24:44 crc kubenswrapper[5123]: I1212 15:24:44.536616 5123 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tm7br\" (UniqueName: \"kubernetes.io/projected/01a8c257-f895-4044-aec0-ea9cb012126e-kube-api-access-tm7br\") pod \"01a8c257-f895-4044-aec0-ea9cb012126e\" (UID: \"01a8c257-f895-4044-aec0-ea9cb012126e\") " Dec 12 15:24:44 crc kubenswrapper[5123]: I1212 15:24:44.536739 5123 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/99476b9c-cc9a-4c2b-b789-ec5d59580a87-config\") pod \"route-controller-manager-648f5757c8-bzrzr\" (UID: \"99476b9c-cc9a-4c2b-b789-ec5d59580a87\") " pod="openshift-route-controller-manager/route-controller-manager-648f5757c8-bzrzr" Dec 12 15:24:44 crc kubenswrapper[5123]: I1212 15:24:44.536858 5123 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3ef6d6fb-460e-4015-8298-ec1d5a47e5f5-tmp" (OuterVolumeSpecName: "tmp") pod "3ef6d6fb-460e-4015-8298-ec1d5a47e5f5" (UID: 
"3ef6d6fb-460e-4015-8298-ec1d5a47e5f5"). InnerVolumeSpecName "tmp". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 12 15:24:44 crc kubenswrapper[5123]: I1212 15:24:44.536880 5123 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/99476b9c-cc9a-4c2b-b789-ec5d59580a87-client-ca\") pod \"route-controller-manager-648f5757c8-bzrzr\" (UID: \"99476b9c-cc9a-4c2b-b789-ec5d59580a87\") " pod="openshift-route-controller-manager/route-controller-manager-648f5757c8-bzrzr" Dec 12 15:24:44 crc kubenswrapper[5123]: I1212 15:24:44.536981 5123 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/99476b9c-cc9a-4c2b-b789-ec5d59580a87-serving-cert\") pod \"route-controller-manager-648f5757c8-bzrzr\" (UID: \"99476b9c-cc9a-4c2b-b789-ec5d59580a87\") " pod="openshift-route-controller-manager/route-controller-manager-648f5757c8-bzrzr" Dec 12 15:24:44 crc kubenswrapper[5123]: I1212 15:24:44.537130 5123 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/99476b9c-cc9a-4c2b-b789-ec5d59580a87-tmp\") pod \"route-controller-manager-648f5757c8-bzrzr\" (UID: \"99476b9c-cc9a-4c2b-b789-ec5d59580a87\") " pod="openshift-route-controller-manager/route-controller-manager-648f5757c8-bzrzr" Dec 12 15:24:44 crc kubenswrapper[5123]: I1212 15:24:44.537201 5123 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/df73cc11-a7c1-4d5a-ac3a-b8a35374238e-config\") pod \"controller-manager-5f9ccc8bd6-jfwtd\" (UID: \"df73cc11-a7c1-4d5a-ac3a-b8a35374238e\") " pod="openshift-controller-manager/controller-manager-5f9ccc8bd6-jfwtd" Dec 12 15:24:44 crc kubenswrapper[5123]: I1212 15:24:44.537415 5123 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q2g6l\" (UniqueName: \"kubernetes.io/projected/df73cc11-a7c1-4d5a-ac3a-b8a35374238e-kube-api-access-q2g6l\") pod \"controller-manager-5f9ccc8bd6-jfwtd\" (UID: \"df73cc11-a7c1-4d5a-ac3a-b8a35374238e\") " pod="openshift-controller-manager/controller-manager-5f9ccc8bd6-jfwtd" Dec 12 15:24:44 crc kubenswrapper[5123]: I1212 15:24:44.537486 5123 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/df73cc11-a7c1-4d5a-ac3a-b8a35374238e-tmp\") pod \"controller-manager-5f9ccc8bd6-jfwtd\" (UID: \"df73cc11-a7c1-4d5a-ac3a-b8a35374238e\") " pod="openshift-controller-manager/controller-manager-5f9ccc8bd6-jfwtd" Dec 12 15:24:44 crc kubenswrapper[5123]: I1212 15:24:44.537528 5123 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/df73cc11-a7c1-4d5a-ac3a-b8a35374238e-proxy-ca-bundles\") pod \"controller-manager-5f9ccc8bd6-jfwtd\" (UID: \"df73cc11-a7c1-4d5a-ac3a-b8a35374238e\") " pod="openshift-controller-manager/controller-manager-5f9ccc8bd6-jfwtd" Dec 12 15:24:44 crc kubenswrapper[5123]: I1212 15:24:44.537607 5123 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/df73cc11-a7c1-4d5a-ac3a-b8a35374238e-serving-cert\") pod \"controller-manager-5f9ccc8bd6-jfwtd\" (UID: \"df73cc11-a7c1-4d5a-ac3a-b8a35374238e\") " pod="openshift-controller-manager/controller-manager-5f9ccc8bd6-jfwtd" Dec 12 15:24:44 crc kubenswrapper[5123]: I1212 15:24:44.537648 5123 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/df73cc11-a7c1-4d5a-ac3a-b8a35374238e-client-ca\") pod \"controller-manager-5f9ccc8bd6-jfwtd\" (UID: 
\"df73cc11-a7c1-4d5a-ac3a-b8a35374238e\") " pod="openshift-controller-manager/controller-manager-5f9ccc8bd6-jfwtd" Dec 12 15:24:44 crc kubenswrapper[5123]: I1212 15:24:44.537674 5123 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nvmhl\" (UniqueName: \"kubernetes.io/projected/99476b9c-cc9a-4c2b-b789-ec5d59580a87-kube-api-access-nvmhl\") pod \"route-controller-manager-648f5757c8-bzrzr\" (UID: \"99476b9c-cc9a-4c2b-b789-ec5d59580a87\") " pod="openshift-route-controller-manager/route-controller-manager-648f5757c8-bzrzr" Dec 12 15:24:44 crc kubenswrapper[5123]: I1212 15:24:44.537772 5123 reconciler_common.go:299] "Volume detached for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/3ef6d6fb-460e-4015-8298-ec1d5a47e5f5-tmp\") on node \"crc\" DevicePath \"\"" Dec 12 15:24:44 crc kubenswrapper[5123]: I1212 15:24:44.538374 5123 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/01a8c257-f895-4044-aec0-ea9cb012126e-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "01a8c257-f895-4044-aec0-ea9cb012126e" (UID: "01a8c257-f895-4044-aec0-ea9cb012126e"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 12 15:24:44 crc kubenswrapper[5123]: I1212 15:24:44.538454 5123 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3ef6d6fb-460e-4015-8298-ec1d5a47e5f5-client-ca" (OuterVolumeSpecName: "client-ca") pod "3ef6d6fb-460e-4015-8298-ec1d5a47e5f5" (UID: "3ef6d6fb-460e-4015-8298-ec1d5a47e5f5"). InnerVolumeSpecName "client-ca". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 12 15:24:44 crc kubenswrapper[5123]: I1212 15:24:44.545599 5123 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3ef6d6fb-460e-4015-8298-ec1d5a47e5f5-config" (OuterVolumeSpecName: "config") pod "3ef6d6fb-460e-4015-8298-ec1d5a47e5f5" (UID: "3ef6d6fb-460e-4015-8298-ec1d5a47e5f5"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 12 15:24:44 crc kubenswrapper[5123]: I1212 15:24:44.546144 5123 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/01a8c257-f895-4044-aec0-ea9cb012126e-client-ca" (OuterVolumeSpecName: "client-ca") pod "01a8c257-f895-4044-aec0-ea9cb012126e" (UID: "01a8c257-f895-4044-aec0-ea9cb012126e"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 12 15:24:44 crc kubenswrapper[5123]: I1212 15:24:44.548417 5123 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/01a8c257-f895-4044-aec0-ea9cb012126e-tmp" (OuterVolumeSpecName: "tmp") pod "01a8c257-f895-4044-aec0-ea9cb012126e" (UID: "01a8c257-f895-4044-aec0-ea9cb012126e"). InnerVolumeSpecName "tmp". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 12 15:24:44 crc kubenswrapper[5123]: I1212 15:24:44.552637 5123 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3ef6d6fb-460e-4015-8298-ec1d5a47e5f5-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "3ef6d6fb-460e-4015-8298-ec1d5a47e5f5" (UID: "3ef6d6fb-460e-4015-8298-ec1d5a47e5f5"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 12 15:24:44 crc kubenswrapper[5123]: I1212 15:24:44.552969 5123 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3ef6d6fb-460e-4015-8298-ec1d5a47e5f5-kube-api-access-g8qp6" (OuterVolumeSpecName: "kube-api-access-g8qp6") pod "3ef6d6fb-460e-4015-8298-ec1d5a47e5f5" (UID: "3ef6d6fb-460e-4015-8298-ec1d5a47e5f5"). InnerVolumeSpecName "kube-api-access-g8qp6". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 12 15:24:44 crc kubenswrapper[5123]: I1212 15:24:44.553378 5123 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/01a8c257-f895-4044-aec0-ea9cb012126e-kube-api-access-tm7br" (OuterVolumeSpecName: "kube-api-access-tm7br") pod "01a8c257-f895-4044-aec0-ea9cb012126e" (UID: "01a8c257-f895-4044-aec0-ea9cb012126e"). InnerVolumeSpecName "kube-api-access-tm7br". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 12 15:24:44 crc kubenswrapper[5123]: I1212 15:24:44.559234 5123 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-5f9ccc8bd6-jfwtd"] Dec 12 15:24:44 crc kubenswrapper[5123]: I1212 15:24:44.583159 5123 generic.go:358] "Generic (PLEG): container finished" podID="01a8c257-f895-4044-aec0-ea9cb012126e" containerID="3ce61d3628184b4371570c7bfee551c47ad928ce8167aefb50b6088777c6202a" exitCode=0 Dec 12 15:24:44 crc kubenswrapper[5123]: I1212 15:24:44.583370 5123 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5d49859f95-pcm7k" event={"ID":"01a8c257-f895-4044-aec0-ea9cb012126e","Type":"ContainerDied","Data":"3ce61d3628184b4371570c7bfee551c47ad928ce8167aefb50b6088777c6202a"} Dec 12 15:24:44 crc kubenswrapper[5123]: I1212 15:24:44.583463 5123 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5d49859f95-pcm7k" 
event={"ID":"01a8c257-f895-4044-aec0-ea9cb012126e","Type":"ContainerDied","Data":"387d633388ff76cbb1e462b982d1faacba77ac90a9766935db55a4dbd9c54c86"} Dec 12 15:24:44 crc kubenswrapper[5123]: I1212 15:24:44.583490 5123 scope.go:117] "RemoveContainer" containerID="3ce61d3628184b4371570c7bfee551c47ad928ce8167aefb50b6088777c6202a" Dec 12 15:24:44 crc kubenswrapper[5123]: I1212 15:24:44.583707 5123 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-5d49859f95-pcm7k" Dec 12 15:24:44 crc kubenswrapper[5123]: I1212 15:24:44.597153 5123 kubelet.go:2547] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Dec 12 15:24:44 crc kubenswrapper[5123]: I1212 15:24:44.598208 5123 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podUID="f7dbc7e1ee9c187a863ef9b473fad27b" containerName="startup-monitor" containerID="cri-o://ec33bbd478f147a0ec76b35aa7b3e74bbf17d650016c0bdc01211bfc29ea828d" gracePeriod=5 Dec 12 15:24:44 crc kubenswrapper[5123]: I1212 15:24:44.598736 5123 generic.go:358] "Generic (PLEG): container finished" podID="3ef6d6fb-460e-4015-8298-ec1d5a47e5f5" containerID="58cbbb1b00f8f0dd1a7148bd3e3781f3886425883069c8053780639e3ac39e14" exitCode=0 Dec 12 15:24:44 crc kubenswrapper[5123]: I1212 15:24:44.598797 5123 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7bc9d579c5-4pc84" Dec 12 15:24:44 crc kubenswrapper[5123]: I1212 15:24:44.598863 5123 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7bc9d579c5-4pc84" event={"ID":"3ef6d6fb-460e-4015-8298-ec1d5a47e5f5","Type":"ContainerDied","Data":"58cbbb1b00f8f0dd1a7148bd3e3781f3886425883069c8053780639e3ac39e14"} Dec 12 15:24:44 crc kubenswrapper[5123]: I1212 15:24:44.598938 5123 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7bc9d579c5-4pc84" event={"ID":"3ef6d6fb-460e-4015-8298-ec1d5a47e5f5","Type":"ContainerDied","Data":"37058c615e68f8124c64f7a1e5ffa26d077baed2c8ea9c6246e011c1e2a66551"} Dec 12 15:24:44 crc kubenswrapper[5123]: I1212 15:24:44.605937 5123 status_manager.go:895] "Failed to get status for pod" podUID="f7dbc7e1ee9c187a863ef9b473fad27b" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="pods \"kube-apiserver-startup-monitor-crc\" is forbidden: User \"system:node:crc\" cannot get resource \"pods\" in API group \"\" in the namespace \"openshift-kube-apiserver\": no relationship found between node 'crc' and this object" Dec 12 15:24:44 crc kubenswrapper[5123]: I1212 15:24:44.628485 5123 scope.go:117] "RemoveContainer" containerID="3ce61d3628184b4371570c7bfee551c47ad928ce8167aefb50b6088777c6202a" Dec 12 15:24:44 crc kubenswrapper[5123]: E1212 15:24:44.629960 5123 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3ce61d3628184b4371570c7bfee551c47ad928ce8167aefb50b6088777c6202a\": container with ID starting with 3ce61d3628184b4371570c7bfee551c47ad928ce8167aefb50b6088777c6202a not found: ID does not exist" containerID="3ce61d3628184b4371570c7bfee551c47ad928ce8167aefb50b6088777c6202a" Dec 12 15:24:44 crc kubenswrapper[5123]: I1212 15:24:44.630024 5123 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3ce61d3628184b4371570c7bfee551c47ad928ce8167aefb50b6088777c6202a"} err="failed to get container status \"3ce61d3628184b4371570c7bfee551c47ad928ce8167aefb50b6088777c6202a\": rpc error: code = NotFound desc = could not find container \"3ce61d3628184b4371570c7bfee551c47ad928ce8167aefb50b6088777c6202a\": container with ID starting with 3ce61d3628184b4371570c7bfee551c47ad928ce8167aefb50b6088777c6202a not found: ID does not exist" Dec 12 15:24:44 crc kubenswrapper[5123]: I1212 15:24:44.630051 5123 scope.go:117] "RemoveContainer" containerID="58cbbb1b00f8f0dd1a7148bd3e3781f3886425883069c8053780639e3ac39e14" Dec 12 15:24:44 crc kubenswrapper[5123]: I1212 15:24:44.640558 5123 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01a8c257-f895-4044-aec0-ea9cb012126e-config\") pod \"01a8c257-f895-4044-aec0-ea9cb012126e\" (UID: \"01a8c257-f895-4044-aec0-ea9cb012126e\") " Dec 12 15:24:44 crc kubenswrapper[5123]: I1212 15:24:44.640658 5123 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01a8c257-f895-4044-aec0-ea9cb012126e-serving-cert\") pod \"01a8c257-f895-4044-aec0-ea9cb012126e\" (UID: \"01a8c257-f895-4044-aec0-ea9cb012126e\") " Dec 12 15:24:44 crc kubenswrapper[5123]: I1212 15:24:44.640904 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/99476b9c-cc9a-4c2b-b789-ec5d59580a87-tmp\") pod \"route-controller-manager-648f5757c8-bzrzr\" (UID: \"99476b9c-cc9a-4c2b-b789-ec5d59580a87\") " pod="openshift-route-controller-manager/route-controller-manager-648f5757c8-bzrzr" Dec 12 15:24:44 crc kubenswrapper[5123]: I1212 15:24:44.640955 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/df73cc11-a7c1-4d5a-ac3a-b8a35374238e-config\") pod \"controller-manager-5f9ccc8bd6-jfwtd\" (UID: \"df73cc11-a7c1-4d5a-ac3a-b8a35374238e\") " pod="openshift-controller-manager/controller-manager-5f9ccc8bd6-jfwtd"
Dec 12 15:24:44 crc kubenswrapper[5123]: I1212 15:24:44.641008 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-q2g6l\" (UniqueName: \"kubernetes.io/projected/df73cc11-a7c1-4d5a-ac3a-b8a35374238e-kube-api-access-q2g6l\") pod \"controller-manager-5f9ccc8bd6-jfwtd\" (UID: \"df73cc11-a7c1-4d5a-ac3a-b8a35374238e\") " pod="openshift-controller-manager/controller-manager-5f9ccc8bd6-jfwtd"
Dec 12 15:24:44 crc kubenswrapper[5123]: I1212 15:24:44.641069 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/df73cc11-a7c1-4d5a-ac3a-b8a35374238e-tmp\") pod \"controller-manager-5f9ccc8bd6-jfwtd\" (UID: \"df73cc11-a7c1-4d5a-ac3a-b8a35374238e\") " pod="openshift-controller-manager/controller-manager-5f9ccc8bd6-jfwtd"
Dec 12 15:24:44 crc kubenswrapper[5123]: I1212 15:24:44.641095 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/df73cc11-a7c1-4d5a-ac3a-b8a35374238e-proxy-ca-bundles\") pod \"controller-manager-5f9ccc8bd6-jfwtd\" (UID: \"df73cc11-a7c1-4d5a-ac3a-b8a35374238e\") " pod="openshift-controller-manager/controller-manager-5f9ccc8bd6-jfwtd"
Dec 12 15:24:44 crc kubenswrapper[5123]: I1212 15:24:44.641146 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/df73cc11-a7c1-4d5a-ac3a-b8a35374238e-serving-cert\") pod \"controller-manager-5f9ccc8bd6-jfwtd\" (UID: \"df73cc11-a7c1-4d5a-ac3a-b8a35374238e\") " pod="openshift-controller-manager/controller-manager-5f9ccc8bd6-jfwtd"
Dec 12 15:24:44 crc kubenswrapper[5123]: I1212 15:24:44.641173 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/df73cc11-a7c1-4d5a-ac3a-b8a35374238e-client-ca\") pod \"controller-manager-5f9ccc8bd6-jfwtd\" (UID: \"df73cc11-a7c1-4d5a-ac3a-b8a35374238e\") " pod="openshift-controller-manager/controller-manager-5f9ccc8bd6-jfwtd"
Dec 12 15:24:44 crc kubenswrapper[5123]: I1212 15:24:44.641197 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-nvmhl\" (UniqueName: \"kubernetes.io/projected/99476b9c-cc9a-4c2b-b789-ec5d59580a87-kube-api-access-nvmhl\") pod \"route-controller-manager-648f5757c8-bzrzr\" (UID: \"99476b9c-cc9a-4c2b-b789-ec5d59580a87\") " pod="openshift-route-controller-manager/route-controller-manager-648f5757c8-bzrzr"
Dec 12 15:24:44 crc kubenswrapper[5123]: I1212 15:24:44.641273 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/99476b9c-cc9a-4c2b-b789-ec5d59580a87-config\") pod \"route-controller-manager-648f5757c8-bzrzr\" (UID: \"99476b9c-cc9a-4c2b-b789-ec5d59580a87\") " pod="openshift-route-controller-manager/route-controller-manager-648f5757c8-bzrzr"
Dec 12 15:24:44 crc kubenswrapper[5123]: I1212 15:24:44.641310 5123 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/01a8c257-f895-4044-aec0-ea9cb012126e-config" (OuterVolumeSpecName: "config") pod "01a8c257-f895-4044-aec0-ea9cb012126e" (UID: "01a8c257-f895-4044-aec0-ea9cb012126e"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Dec 12 15:24:44 crc kubenswrapper[5123]: I1212 15:24:44.642518 5123 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/99476b9c-cc9a-4c2b-b789-ec5d59580a87-client-ca\") pod \"route-controller-manager-648f5757c8-bzrzr\" (UID: \"99476b9c-cc9a-4c2b-b789-ec5d59580a87\") " pod="openshift-route-controller-manager/route-controller-manager-648f5757c8-bzrzr"
Dec 12 15:24:44 crc kubenswrapper[5123]: I1212 15:24:44.641327 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/99476b9c-cc9a-4c2b-b789-ec5d59580a87-client-ca\") pod \"route-controller-manager-648f5757c8-bzrzr\" (UID: \"99476b9c-cc9a-4c2b-b789-ec5d59580a87\") " pod="openshift-route-controller-manager/route-controller-manager-648f5757c8-bzrzr"
Dec 12 15:24:44 crc kubenswrapper[5123]: I1212 15:24:44.642685 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/99476b9c-cc9a-4c2b-b789-ec5d59580a87-serving-cert\") pod \"route-controller-manager-648f5757c8-bzrzr\" (UID: \"99476b9c-cc9a-4c2b-b789-ec5d59580a87\") " pod="openshift-route-controller-manager/route-controller-manager-648f5757c8-bzrzr"
Dec 12 15:24:44 crc kubenswrapper[5123]: I1212 15:24:44.642819 5123 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01a8c257-f895-4044-aec0-ea9cb012126e-config\") on node \"crc\" DevicePath \"\""
Dec 12 15:24:44 crc kubenswrapper[5123]: I1212 15:24:44.642832 5123 reconciler_common.go:299] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/3ef6d6fb-460e-4015-8298-ec1d5a47e5f5-client-ca\") on node \"crc\" DevicePath \"\""
Dec 12 15:24:44 crc kubenswrapper[5123]: I1212 15:24:44.642854 5123 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3ef6d6fb-460e-4015-8298-ec1d5a47e5f5-config\") on node \"crc\" DevicePath \"\""
Dec 12 15:24:44 crc kubenswrapper[5123]: I1212 15:24:44.643024 5123 reconciler_common.go:299] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/01a8c257-f895-4044-aec0-ea9cb012126e-proxy-ca-bundles\") on node \"crc\" DevicePath \"\""
Dec 12 15:24:44 crc kubenswrapper[5123]: I1212 15:24:44.643046 5123 reconciler_common.go:299] "Volume detached for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/01a8c257-f895-4044-aec0-ea9cb012126e-tmp\") on node \"crc\" DevicePath \"\""
Dec 12 15:24:44 crc kubenswrapper[5123]: I1212 15:24:44.643060 5123 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-g8qp6\" (UniqueName: \"kubernetes.io/projected/3ef6d6fb-460e-4015-8298-ec1d5a47e5f5-kube-api-access-g8qp6\") on node \"crc\" DevicePath \"\""
Dec 12 15:24:44 crc kubenswrapper[5123]: I1212 15:24:44.643072 5123 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3ef6d6fb-460e-4015-8298-ec1d5a47e5f5-serving-cert\") on node \"crc\" DevicePath \"\""
Dec 12 15:24:44 crc kubenswrapper[5123]: I1212 15:24:44.643083 5123 reconciler_common.go:299] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/01a8c257-f895-4044-aec0-ea9cb012126e-client-ca\") on node \"crc\" DevicePath \"\""
Dec 12 15:24:44 crc kubenswrapper[5123]: I1212 15:24:44.643095 5123 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-tm7br\" (UniqueName: \"kubernetes.io/projected/01a8c257-f895-4044-aec0-ea9cb012126e-kube-api-access-tm7br\") on node \"crc\" DevicePath \"\""
Dec 12 15:24:44 crc kubenswrapper[5123]: I1212 15:24:44.646782 5123 status_manager.go:895] "Failed to get status for pod" podUID="f7dbc7e1ee9c187a863ef9b473fad27b" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="pods \"kube-apiserver-startup-monitor-crc\" is forbidden: User \"system:node:crc\" cannot get resource \"pods\" in API group \"\" in the namespace \"openshift-kube-apiserver\": no relationship found between node 'crc' and this object"
Dec 12 15:24:44 crc kubenswrapper[5123]: I1212 15:24:44.647811 5123 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01a8c257-f895-4044-aec0-ea9cb012126e-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "01a8c257-f895-4044-aec0-ea9cb012126e" (UID: "01a8c257-f895-4044-aec0-ea9cb012126e"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Dec 12 15:24:44 crc kubenswrapper[5123]: I1212 15:24:44.648289 5123 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/df73cc11-a7c1-4d5a-ac3a-b8a35374238e-proxy-ca-bundles\") pod \"controller-manager-5f9ccc8bd6-jfwtd\" (UID: \"df73cc11-a7c1-4d5a-ac3a-b8a35374238e\") " pod="openshift-controller-manager/controller-manager-5f9ccc8bd6-jfwtd"
Dec 12 15:24:44 crc kubenswrapper[5123]: I1212 15:24:44.648464 5123 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/99476b9c-cc9a-4c2b-b789-ec5d59580a87-serving-cert\") pod \"route-controller-manager-648f5757c8-bzrzr\" (UID: \"99476b9c-cc9a-4c2b-b789-ec5d59580a87\") " pod="openshift-route-controller-manager/route-controller-manager-648f5757c8-bzrzr"
Dec 12 15:24:44 crc kubenswrapper[5123]: I1212 15:24:44.649167 5123 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/df73cc11-a7c1-4d5a-ac3a-b8a35374238e-tmp\") pod \"controller-manager-5f9ccc8bd6-jfwtd\" (UID: \"df73cc11-a7c1-4d5a-ac3a-b8a35374238e\") " pod="openshift-controller-manager/controller-manager-5f9ccc8bd6-jfwtd"
Dec 12 15:24:44 crc kubenswrapper[5123]: I1212 15:24:44.650250 5123 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/df73cc11-a7c1-4d5a-ac3a-b8a35374238e-client-ca\") pod \"controller-manager-5f9ccc8bd6-jfwtd\" (UID: \"df73cc11-a7c1-4d5a-ac3a-b8a35374238e\") " pod="openshift-controller-manager/controller-manager-5f9ccc8bd6-jfwtd"
Dec 12 15:24:44 crc kubenswrapper[5123]: I1212 15:24:44.650308 5123 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7bc9d579c5-4pc84"]
Dec 12 15:24:44 crc kubenswrapper[5123]: I1212 15:24:44.651112 5123 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/99476b9c-cc9a-4c2b-b789-ec5d59580a87-tmp\") pod \"route-controller-manager-648f5757c8-bzrzr\" (UID: \"99476b9c-cc9a-4c2b-b789-ec5d59580a87\") " pod="openshift-route-controller-manager/route-controller-manager-648f5757c8-bzrzr"
Dec 12 15:24:44 crc kubenswrapper[5123]: I1212 15:24:44.651302 5123 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/99476b9c-cc9a-4c2b-b789-ec5d59580a87-config\") pod \"route-controller-manager-648f5757c8-bzrzr\" (UID: \"99476b9c-cc9a-4c2b-b789-ec5d59580a87\") " pod="openshift-route-controller-manager/route-controller-manager-648f5757c8-bzrzr"
Dec 12 15:24:44 crc kubenswrapper[5123]: I1212 15:24:44.653806 5123 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/df73cc11-a7c1-4d5a-ac3a-b8a35374238e-config\") pod \"controller-manager-5f9ccc8bd6-jfwtd\" (UID: \"df73cc11-a7c1-4d5a-ac3a-b8a35374238e\") " pod="openshift-controller-manager/controller-manager-5f9ccc8bd6-jfwtd"
Dec 12 15:24:44 crc kubenswrapper[5123]: I1212 15:24:44.655552 5123 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7bc9d579c5-4pc84"]
Dec 12 15:24:44 crc kubenswrapper[5123]: I1212 15:24:44.660849 5123 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/df73cc11-a7c1-4d5a-ac3a-b8a35374238e-serving-cert\") pod \"controller-manager-5f9ccc8bd6-jfwtd\" (UID: \"df73cc11-a7c1-4d5a-ac3a-b8a35374238e\") " pod="openshift-controller-manager/controller-manager-5f9ccc8bd6-jfwtd"
Dec 12 15:24:44 crc kubenswrapper[5123]: I1212 15:24:44.673246 5123 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-q2g6l\" (UniqueName: \"kubernetes.io/projected/df73cc11-a7c1-4d5a-ac3a-b8a35374238e-kube-api-access-q2g6l\") pod \"controller-manager-5f9ccc8bd6-jfwtd\" (UID: \"df73cc11-a7c1-4d5a-ac3a-b8a35374238e\") " pod="openshift-controller-manager/controller-manager-5f9ccc8bd6-jfwtd"
Dec 12 15:24:44 crc kubenswrapper[5123]: I1212 15:24:44.674352 5123 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-nvmhl\" (UniqueName: \"kubernetes.io/projected/99476b9c-cc9a-4c2b-b789-ec5d59580a87-kube-api-access-nvmhl\") pod \"route-controller-manager-648f5757c8-bzrzr\" (UID: \"99476b9c-cc9a-4c2b-b789-ec5d59580a87\") " pod="openshift-route-controller-manager/route-controller-manager-648f5757c8-bzrzr"
Dec 12 15:24:44 crc kubenswrapper[5123]: I1212 15:24:44.723197 5123 scope.go:117] "RemoveContainer" containerID="58cbbb1b00f8f0dd1a7148bd3e3781f3886425883069c8053780639e3ac39e14"
Dec 12 15:24:44 crc kubenswrapper[5123]: E1212 15:24:44.724538 5123 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"58cbbb1b00f8f0dd1a7148bd3e3781f3886425883069c8053780639e3ac39e14\": container with ID starting with 58cbbb1b00f8f0dd1a7148bd3e3781f3886425883069c8053780639e3ac39e14 not found: ID does not exist" containerID="58cbbb1b00f8f0dd1a7148bd3e3781f3886425883069c8053780639e3ac39e14"
Dec 12 15:24:44 crc kubenswrapper[5123]: I1212 15:24:44.724608 5123 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"58cbbb1b00f8f0dd1a7148bd3e3781f3886425883069c8053780639e3ac39e14"} err="failed to get container status \"58cbbb1b00f8f0dd1a7148bd3e3781f3886425883069c8053780639e3ac39e14\": rpc error: code = NotFound desc = could not find container \"58cbbb1b00f8f0dd1a7148bd3e3781f3886425883069c8053780639e3ac39e14\": container with ID starting with 58cbbb1b00f8f0dd1a7148bd3e3781f3886425883069c8053780639e3ac39e14 not found: ID does not exist"
Dec 12 15:24:44 crc kubenswrapper[5123]: I1212 15:24:44.744991 5123 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01a8c257-f895-4044-aec0-ea9cb012126e-serving-cert\") on node \"crc\" DevicePath \"\""
Dec 12 15:24:44 crc kubenswrapper[5123]: I1212 15:24:44.772400 5123 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-648f5757c8-bzrzr"
Dec 12 15:24:44 crc kubenswrapper[5123]: I1212 15:24:44.870305 5123 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-5f9ccc8bd6-jfwtd"
Dec 12 15:24:44 crc kubenswrapper[5123]: I1212 15:24:44.913647 5123 status_manager.go:895] "Failed to get status for pod" podUID="f7dbc7e1ee9c187a863ef9b473fad27b" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="pods \"kube-apiserver-startup-monitor-crc\" is forbidden: User \"system:node:crc\" cannot get resource \"pods\" in API group \"\" in the namespace \"openshift-kube-apiserver\": no relationship found between node 'crc' and this object"
Dec 12 15:24:44 crc kubenswrapper[5123]: I1212 15:24:44.945918 5123 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-5d49859f95-pcm7k"]
Dec 12 15:24:44 crc kubenswrapper[5123]: I1212 15:24:44.951105 5123 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-5d49859f95-pcm7k"]
Dec 12 15:24:45 crc kubenswrapper[5123]: I1212 15:24:45.026833 5123 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-648f5757c8-bzrzr"]
Dec 12 15:24:45 crc kubenswrapper[5123]: I1212 15:24:45.120772 5123 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-5f9ccc8bd6-jfwtd"]
Dec 12 15:24:45 crc kubenswrapper[5123]: W1212 15:24:45.144337 5123 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poddf73cc11_a7c1_4d5a_ac3a_b8a35374238e.slice/crio-14d971e1a6ffc83f55c654016240908946e09336a183fa4ac90ecfbce98cf419 WatchSource:0}: Error finding container 14d971e1a6ffc83f55c654016240908946e09336a183fa4ac90ecfbce98cf419: Status 404 returned error can't find the container with id 14d971e1a6ffc83f55c654016240908946e09336a183fa4ac90ecfbce98cf419
Dec 12 15:24:45 crc kubenswrapper[5123]: I1212 15:24:45.607872 5123 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5f9ccc8bd6-jfwtd" event={"ID":"df73cc11-a7c1-4d5a-ac3a-b8a35374238e","Type":"ContainerStarted","Data":"2526f52b038f14a479693699eb89f83bd3f93aeb6be60c0ab0255965f3c2c673"}
Dec 12 15:24:45 crc kubenswrapper[5123]: I1212 15:24:45.607943 5123 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5f9ccc8bd6-jfwtd" event={"ID":"df73cc11-a7c1-4d5a-ac3a-b8a35374238e","Type":"ContainerStarted","Data":"14d971e1a6ffc83f55c654016240908946e09336a183fa4ac90ecfbce98cf419"}
Dec 12 15:24:45 crc kubenswrapper[5123]: I1212 15:24:45.610039 5123 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-controller-manager/controller-manager-5f9ccc8bd6-jfwtd"
Dec 12 15:24:45 crc kubenswrapper[5123]: I1212 15:24:45.617345 5123 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-648f5757c8-bzrzr" event={"ID":"99476b9c-cc9a-4c2b-b789-ec5d59580a87","Type":"ContainerStarted","Data":"fe8c17cb8c70c68158d251ea6e9e78e00020a626d6846d902bca9fec91f3b3fe"}
Dec 12 15:24:45 crc kubenswrapper[5123]: I1212 15:24:45.617398 5123 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-648f5757c8-bzrzr" event={"ID":"99476b9c-cc9a-4c2b-b789-ec5d59580a87","Type":"ContainerStarted","Data":"b4b37f69ab53f1a4ed1a4d5f01209405423a362fc38b93f333cf274b9cb150d3"}
Dec 12 15:24:45 crc kubenswrapper[5123]: I1212 15:24:45.618381 5123 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-route-controller-manager/route-controller-manager-648f5757c8-bzrzr"
Dec 12 15:24:45 crc kubenswrapper[5123]: I1212 15:24:45.631341 5123 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-5f9ccc8bd6-jfwtd" podStartSLOduration=2.631317401 podStartE2EDuration="2.631317401s" podCreationTimestamp="2025-12-12 15:24:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 15:24:45.630943939 +0000 UTC m=+314.440896450" watchObservedRunningTime="2025-12-12 15:24:45.631317401 +0000 UTC m=+314.441269922"
Dec 12 15:24:45 crc kubenswrapper[5123]: I1212 15:24:45.634067 5123 status_manager.go:895] "Failed to get status for pod" podUID="f7dbc7e1ee9c187a863ef9b473fad27b" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="pods \"kube-apiserver-startup-monitor-crc\" is forbidden: User \"system:node:crc\" cannot get resource \"pods\" in API group \"\" in the namespace \"openshift-kube-apiserver\": no relationship found between node 'crc' and this object"
Dec 12 15:24:45 crc kubenswrapper[5123]: I1212 15:24:45.648610 5123 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="01a8c257-f895-4044-aec0-ea9cb012126e" path="/var/lib/kubelet/pods/01a8c257-f895-4044-aec0-ea9cb012126e/volumes"
Dec 12 15:24:45 crc kubenswrapper[5123]: I1212 15:24:45.649900 5123 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3ef6d6fb-460e-4015-8298-ec1d5a47e5f5" path="/var/lib/kubelet/pods/3ef6d6fb-460e-4015-8298-ec1d5a47e5f5/volumes"
Dec 12 15:24:45 crc kubenswrapper[5123]: I1212 15:24:45.654131 5123 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-648f5757c8-bzrzr" podStartSLOduration=2.654100808 podStartE2EDuration="2.654100808s" podCreationTimestamp="2025-12-12 15:24:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 15:24:45.653676775 +0000 UTC m=+314.463629306" watchObservedRunningTime="2025-12-12 15:24:45.654100808 +0000 UTC m=+314.464053319"
Dec 12 15:24:45 crc kubenswrapper[5123]: I1212 15:24:45.655133 5123 status_manager.go:895] "Failed to get status for pod" podUID="f7dbc7e1ee9c187a863ef9b473fad27b" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="pods \"kube-apiserver-startup-monitor-crc\" is forbidden: User \"system:node:crc\" cannot get resource \"pods\" in API group \"\" in the namespace \"openshift-kube-apiserver\": no relationship found between node 'crc' and this object"
Dec 12 15:24:45 crc kubenswrapper[5123]: I1212 15:24:45.866887 5123 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-648f5757c8-bzrzr"
Dec 12 15:24:46 crc kubenswrapper[5123]: I1212 15:24:46.489335 5123 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-5f9ccc8bd6-jfwtd"
Dec 12 15:24:50 crc kubenswrapper[5123]: I1212 15:24:50.204554 5123 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f7dbc7e1ee9c187a863ef9b473fad27b/startup-monitor/0.log"
Dec 12 15:24:50 crc kubenswrapper[5123]: I1212 15:24:50.205201 5123 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Dec 12 15:24:50 crc kubenswrapper[5123]: I1212 15:24:50.207602 5123 status_manager.go:895] "Failed to get status for pod" podUID="f7dbc7e1ee9c187a863ef9b473fad27b" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="pods \"kube-apiserver-startup-monitor-crc\" is forbidden: User \"system:node:crc\" cannot get resource \"pods\" in API group \"\" in the namespace \"openshift-kube-apiserver\": no relationship found between node 'crc' and this object"
Dec 12 15:24:50 crc kubenswrapper[5123]: I1212 15:24:50.220332 5123 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-manifests\") pod \"f7dbc7e1ee9c187a863ef9b473fad27b\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") "
Dec 12 15:24:50 crc kubenswrapper[5123]: I1212 15:24:50.220485 5123 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-resource-dir\") pod \"f7dbc7e1ee9c187a863ef9b473fad27b\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") "
Dec 12 15:24:50 crc kubenswrapper[5123]: I1212 15:24:50.220509 5123 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-pod-resource-dir\") pod \"f7dbc7e1ee9c187a863ef9b473fad27b\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") "
Dec 12 15:24:50 crc kubenswrapper[5123]: I1212 15:24:50.220539 5123 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-var-log\") pod \"f7dbc7e1ee9c187a863ef9b473fad27b\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") "
Dec 12 15:24:50 crc kubenswrapper[5123]: I1212 15:24:50.220957 5123 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f7dbc7e1ee9c187a863ef9b473fad27b" (UID: "f7dbc7e1ee9c187a863ef9b473fad27b"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Dec 12 15:24:50 crc kubenswrapper[5123]: I1212 15:24:50.221071 5123 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-var-log" (OuterVolumeSpecName: "var-log") pod "f7dbc7e1ee9c187a863ef9b473fad27b" (UID: "f7dbc7e1ee9c187a863ef9b473fad27b"). InnerVolumeSpecName "var-log". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Dec 12 15:24:50 crc kubenswrapper[5123]: I1212 15:24:50.221663 5123 reconciler_common.go:299] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-resource-dir\") on node \"crc\" DevicePath \"\""
Dec 12 15:24:50 crc kubenswrapper[5123]: I1212 15:24:50.221685 5123 reconciler_common.go:299] "Volume detached for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-var-log\") on node \"crc\" DevicePath \"\""
Dec 12 15:24:50 crc kubenswrapper[5123]: I1212 15:24:50.221729 5123 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-manifests" (OuterVolumeSpecName: "manifests") pod "f7dbc7e1ee9c187a863ef9b473fad27b" (UID: "f7dbc7e1ee9c187a863ef9b473fad27b"). InnerVolumeSpecName "manifests". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Dec 12 15:24:50 crc kubenswrapper[5123]: I1212 15:24:50.233173 5123 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-pod-resource-dir" (OuterVolumeSpecName: "pod-resource-dir") pod "f7dbc7e1ee9c187a863ef9b473fad27b" (UID: "f7dbc7e1ee9c187a863ef9b473fad27b"). InnerVolumeSpecName "pod-resource-dir". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Dec 12 15:24:50 crc kubenswrapper[5123]: I1212 15:24:50.323092 5123 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-var-lock\") pod \"f7dbc7e1ee9c187a863ef9b473fad27b\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") "
Dec 12 15:24:50 crc kubenswrapper[5123]: I1212 15:24:50.323445 5123 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-var-lock" (OuterVolumeSpecName: "var-lock") pod "f7dbc7e1ee9c187a863ef9b473fad27b" (UID: "f7dbc7e1ee9c187a863ef9b473fad27b"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Dec 12 15:24:50 crc kubenswrapper[5123]: I1212 15:24:50.323585 5123 reconciler_common.go:299] "Volume detached for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-manifests\") on node \"crc\" DevicePath \"\""
Dec 12 15:24:50 crc kubenswrapper[5123]: I1212 15:24:50.323605 5123 reconciler_common.go:299] "Volume detached for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-pod-resource-dir\") on node \"crc\" DevicePath \"\""
Dec 12 15:24:50 crc kubenswrapper[5123]: I1212 15:24:50.323624 5123 reconciler_common.go:299] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-var-lock\") on node \"crc\" DevicePath \"\""
Dec 12 15:24:50 crc kubenswrapper[5123]: I1212 15:24:50.715025 5123 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f7dbc7e1ee9c187a863ef9b473fad27b/startup-monitor/0.log"
Dec 12 15:24:50 crc kubenswrapper[5123]: I1212 15:24:50.715095 5123 generic.go:358] "Generic (PLEG): container finished" podID="f7dbc7e1ee9c187a863ef9b473fad27b" containerID="ec33bbd478f147a0ec76b35aa7b3e74bbf17d650016c0bdc01211bfc29ea828d" exitCode=137
Dec 12 15:24:50 crc kubenswrapper[5123]: I1212 15:24:50.715277 5123 scope.go:117] "RemoveContainer" containerID="ec33bbd478f147a0ec76b35aa7b3e74bbf17d650016c0bdc01211bfc29ea828d"
Dec 12 15:24:50 crc kubenswrapper[5123]: I1212 15:24:50.715522 5123 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Dec 12 15:24:50 crc kubenswrapper[5123]: I1212 15:24:50.746777 5123 status_manager.go:895] "Failed to get status for pod" podUID="f7dbc7e1ee9c187a863ef9b473fad27b" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="pods \"kube-apiserver-startup-monitor-crc\" is forbidden: User \"system:node:crc\" cannot get resource \"pods\" in API group \"\" in the namespace \"openshift-kube-apiserver\": no relationship found between node 'crc' and this object"
Dec 12 15:24:50 crc kubenswrapper[5123]: I1212 15:24:50.756281 5123 scope.go:117] "RemoveContainer" containerID="ec33bbd478f147a0ec76b35aa7b3e74bbf17d650016c0bdc01211bfc29ea828d"
Dec 12 15:24:50 crc kubenswrapper[5123]: E1212 15:24:50.757052 5123 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ec33bbd478f147a0ec76b35aa7b3e74bbf17d650016c0bdc01211bfc29ea828d\": container with ID starting with ec33bbd478f147a0ec76b35aa7b3e74bbf17d650016c0bdc01211bfc29ea828d not found: ID does not exist" containerID="ec33bbd478f147a0ec76b35aa7b3e74bbf17d650016c0bdc01211bfc29ea828d"
Dec 12 15:24:50 crc kubenswrapper[5123]: I1212 15:24:50.757093 5123 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ec33bbd478f147a0ec76b35aa7b3e74bbf17d650016c0bdc01211bfc29ea828d"} err="failed to get container status \"ec33bbd478f147a0ec76b35aa7b3e74bbf17d650016c0bdc01211bfc29ea828d\": rpc error: code = NotFound desc = could not find container \"ec33bbd478f147a0ec76b35aa7b3e74bbf17d650016c0bdc01211bfc29ea828d\": container with ID starting with ec33bbd478f147a0ec76b35aa7b3e74bbf17d650016c0bdc01211bfc29ea828d not found: ID does not exist"
Dec 12 15:24:51 crc kubenswrapper[5123]: I1212 15:24:51.648311 5123 status_manager.go:895] "Failed to get status for pod" podUID="f7dbc7e1ee9c187a863ef9b473fad27b" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="pods \"kube-apiserver-startup-monitor-crc\" is forbidden: User \"system:node:crc\" cannot get resource \"pods\" in API group \"\" in the namespace \"openshift-kube-apiserver\": no relationship found between node 'crc' and this object"
Dec 12 15:24:51 crc kubenswrapper[5123]: I1212 15:24:51.651246 5123 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f7dbc7e1ee9c187a863ef9b473fad27b" path="/var/lib/kubelet/pods/f7dbc7e1ee9c187a863ef9b473fad27b/volumes"
Dec 12 15:24:56 crc kubenswrapper[5123]: I1212 15:24:56.423721 5123 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"image-registry-tls\""
Dec 12 15:24:59 crc kubenswrapper[5123]: I1212 15:24:59.502659 5123 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-network-console\"/\"networking-console-plugin-cert\""
Dec 12 15:25:01 crc kubenswrapper[5123]: I1212 15:25:01.467794 5123 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-controller-manager-operator\"/\"kube-controller-manager-operator-serving-cert\""
Dec 12 15:25:03 crc kubenswrapper[5123]: I1212 15:25:03.732419 5123 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-5f9ccc8bd6-jfwtd"]
Dec 12 15:25:03 crc kubenswrapper[5123]: I1212 15:25:03.740881 5123 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-5f9ccc8bd6-jfwtd" podUID="df73cc11-a7c1-4d5a-ac3a-b8a35374238e" containerName="controller-manager" containerID="cri-o://2526f52b038f14a479693699eb89f83bd3f93aeb6be60c0ab0255965f3c2c673" gracePeriod=30
Dec 12 15:25:03 crc kubenswrapper[5123]: I1212 15:25:03.754202 5123 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-648f5757c8-bzrzr"]
Dec 12 15:25:03 crc kubenswrapper[5123]: I1212 15:25:03.755674 5123 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-648f5757c8-bzrzr" podUID="99476b9c-cc9a-4c2b-b789-ec5d59580a87" containerName="route-controller-manager" containerID="cri-o://fe8c17cb8c70c68158d251ea6e9e78e00020a626d6846d902bca9fec91f3b3fe" gracePeriod=30
Dec 12 15:25:04 crc kubenswrapper[5123]: I1212 15:25:04.205749 5123 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-648f5757c8-bzrzr"
Dec 12 15:25:04 crc kubenswrapper[5123]: I1212 15:25:04.229757 5123 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/99476b9c-cc9a-4c2b-b789-ec5d59580a87-client-ca\") pod \"99476b9c-cc9a-4c2b-b789-ec5d59580a87\" (UID: \"99476b9c-cc9a-4c2b-b789-ec5d59580a87\") "
Dec 12 15:25:04 crc kubenswrapper[5123]: I1212 15:25:04.230002 5123 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/99476b9c-cc9a-4c2b-b789-ec5d59580a87-tmp\") pod \"99476b9c-cc9a-4c2b-b789-ec5d59580a87\" (UID: \"99476b9c-cc9a-4c2b-b789-ec5d59580a87\") "
Dec 12 15:25:04 crc kubenswrapper[5123]: I1212 15:25:04.230089 5123 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/99476b9c-cc9a-4c2b-b789-ec5d59580a87-serving-cert\") pod \"99476b9c-cc9a-4c2b-b789-ec5d59580a87\" (UID: \"99476b9c-cc9a-4c2b-b789-ec5d59580a87\") "
Dec 12 15:25:04 crc kubenswrapper[5123]: I1212 15:25:04.230136 5123 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/99476b9c-cc9a-4c2b-b789-ec5d59580a87-config\") pod \"99476b9c-cc9a-4c2b-b789-ec5d59580a87\" (UID: \"99476b9c-cc9a-4c2b-b789-ec5d59580a87\") "
Dec 12 15:25:04 crc kubenswrapper[5123]: I1212 15:25:04.230199 5123 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nvmhl\" (UniqueName: \"kubernetes.io/projected/99476b9c-cc9a-4c2b-b789-ec5d59580a87-kube-api-access-nvmhl\") pod \"99476b9c-cc9a-4c2b-b789-ec5d59580a87\" (UID: \"99476b9c-cc9a-4c2b-b789-ec5d59580a87\") "
Dec 12 15:25:04 crc kubenswrapper[5123]: I1212 15:25:04.230328 5123 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/99476b9c-cc9a-4c2b-b789-ec5d59580a87-client-ca" (OuterVolumeSpecName: "client-ca") pod "99476b9c-cc9a-4c2b-b789-ec5d59580a87" (UID: "99476b9c-cc9a-4c2b-b789-ec5d59580a87"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Dec 12 15:25:04 crc kubenswrapper[5123]: I1212 15:25:04.230648 5123 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/99476b9c-cc9a-4c2b-b789-ec5d59580a87-tmp" (OuterVolumeSpecName: "tmp") pod "99476b9c-cc9a-4c2b-b789-ec5d59580a87" (UID: "99476b9c-cc9a-4c2b-b789-ec5d59580a87"). InnerVolumeSpecName "tmp". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Dec 12 15:25:04 crc kubenswrapper[5123]: I1212 15:25:04.230935 5123 reconciler_common.go:299] "Volume detached for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/99476b9c-cc9a-4c2b-b789-ec5d59580a87-tmp\") on node \"crc\" DevicePath \"\""
Dec 12 15:25:04 crc kubenswrapper[5123]: I1212 15:25:04.230955 5123 reconciler_common.go:299] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/99476b9c-cc9a-4c2b-b789-ec5d59580a87-client-ca\") on node \"crc\" DevicePath \"\""
Dec 12 15:25:04 crc kubenswrapper[5123]: I1212 15:25:04.231100 5123 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/99476b9c-cc9a-4c2b-b789-ec5d59580a87-config" (OuterVolumeSpecName: "config") pod "99476b9c-cc9a-4c2b-b789-ec5d59580a87" (UID: "99476b9c-cc9a-4c2b-b789-ec5d59580a87"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Dec 12 15:25:04 crc kubenswrapper[5123]: I1212 15:25:04.241706 5123 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/99476b9c-cc9a-4c2b-b789-ec5d59580a87-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "99476b9c-cc9a-4c2b-b789-ec5d59580a87" (UID: "99476b9c-cc9a-4c2b-b789-ec5d59580a87"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Dec 12 15:25:04 crc kubenswrapper[5123]: I1212 15:25:04.241806 5123 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/99476b9c-cc9a-4c2b-b789-ec5d59580a87-kube-api-access-nvmhl" (OuterVolumeSpecName: "kube-api-access-nvmhl") pod "99476b9c-cc9a-4c2b-b789-ec5d59580a87" (UID: "99476b9c-cc9a-4c2b-b789-ec5d59580a87"). InnerVolumeSpecName "kube-api-access-nvmhl". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Dec 12 15:25:04 crc kubenswrapper[5123]: I1212 15:25:04.696904 5123 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/99476b9c-cc9a-4c2b-b789-ec5d59580a87-serving-cert\") on node \"crc\" DevicePath \"\""
Dec 12 15:25:04 crc kubenswrapper[5123]: I1212 15:25:04.696953 5123 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/99476b9c-cc9a-4c2b-b789-ec5d59580a87-config\") on node \"crc\" DevicePath \"\""
Dec 12 15:25:04 crc kubenswrapper[5123]: I1212 15:25:04.696963 5123 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-nvmhl\" (UniqueName: \"kubernetes.io/projected/99476b9c-cc9a-4c2b-b789-ec5d59580a87-kube-api-access-nvmhl\") on node \"crc\" DevicePath \"\""
Dec 12 15:25:04 crc kubenswrapper[5123]: I1212 15:25:04.697147 5123 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-646759d888-tnkf4"]
Dec 12 15:25:04 crc kubenswrapper[5123]: I1212 15:25:04.698048 5123 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="99476b9c-cc9a-4c2b-b789-ec5d59580a87" containerName="route-controller-manager"
Dec 12 15:25:04 crc kubenswrapper[5123]: I1212 15:25:04.698080 5123 state_mem.go:107] "Deleted CPUSet assignment" podUID="99476b9c-cc9a-4c2b-b789-ec5d59580a87" containerName="route-controller-manager"
Dec 12 15:25:04 crc kubenswrapper[5123]: I1212 15:25:04.698096 5123 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="f7dbc7e1ee9c187a863ef9b473fad27b" containerName="startup-monitor"
Dec 12 15:25:04 crc kubenswrapper[5123]: I1212 15:25:04.698103 5123 state_mem.go:107] "Deleted CPUSet assignment" podUID="f7dbc7e1ee9c187a863ef9b473fad27b" containerName="startup-monitor"
Dec 12 15:25:04 crc kubenswrapper[5123]: I1212 15:25:04.698289 5123 memory_manager.go:356] "RemoveStaleState removing
state" podUID="f7dbc7e1ee9c187a863ef9b473fad27b" containerName="startup-monitor" Dec 12 15:25:04 crc kubenswrapper[5123]: I1212 15:25:04.698312 5123 memory_manager.go:356] "RemoveStaleState removing state" podUID="99476b9c-cc9a-4c2b-b789-ec5d59580a87" containerName="route-controller-manager" Dec 12 15:25:05 crc kubenswrapper[5123]: I1212 15:25:05.193295 5123 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-646759d888-tnkf4"] Dec 12 15:25:05 crc kubenswrapper[5123]: I1212 15:25:05.193541 5123 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-646759d888-tnkf4" Dec 12 15:25:05 crc kubenswrapper[5123]: I1212 15:25:05.218122 5123 generic.go:358] "Generic (PLEG): container finished" podID="df73cc11-a7c1-4d5a-ac3a-b8a35374238e" containerID="2526f52b038f14a479693699eb89f83bd3f93aeb6be60c0ab0255965f3c2c673" exitCode=0 Dec 12 15:25:05 crc kubenswrapper[5123]: I1212 15:25:05.218425 5123 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5f9ccc8bd6-jfwtd" event={"ID":"df73cc11-a7c1-4d5a-ac3a-b8a35374238e","Type":"ContainerDied","Data":"2526f52b038f14a479693699eb89f83bd3f93aeb6be60c0ab0255965f3c2c673"} Dec 12 15:25:05 crc kubenswrapper[5123]: I1212 15:25:05.245110 5123 generic.go:358] "Generic (PLEG): container finished" podID="99476b9c-cc9a-4c2b-b789-ec5d59580a87" containerID="fe8c17cb8c70c68158d251ea6e9e78e00020a626d6846d902bca9fec91f3b3fe" exitCode=0 Dec 12 15:25:05 crc kubenswrapper[5123]: I1212 15:25:05.245189 5123 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-648f5757c8-bzrzr" event={"ID":"99476b9c-cc9a-4c2b-b789-ec5d59580a87","Type":"ContainerDied","Data":"fe8c17cb8c70c68158d251ea6e9e78e00020a626d6846d902bca9fec91f3b3fe"} Dec 12 15:25:05 crc kubenswrapper[5123]: I1212 15:25:05.245259 5123 util.go:48] 
"No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-648f5757c8-bzrzr" Dec 12 15:25:05 crc kubenswrapper[5123]: I1212 15:25:05.245294 5123 scope.go:117] "RemoveContainer" containerID="fe8c17cb8c70c68158d251ea6e9e78e00020a626d6846d902bca9fec91f3b3fe" Dec 12 15:25:05 crc kubenswrapper[5123]: I1212 15:25:05.245275 5123 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-648f5757c8-bzrzr" event={"ID":"99476b9c-cc9a-4c2b-b789-ec5d59580a87","Type":"ContainerDied","Data":"b4b37f69ab53f1a4ed1a4d5f01209405423a362fc38b93f333cf274b9cb150d3"} Dec 12 15:25:05 crc kubenswrapper[5123]: I1212 15:25:05.471921 5123 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/dd04e041-0463-4e2e-8023-35d3e3683e3d-tmp\") pod \"route-controller-manager-646759d888-tnkf4\" (UID: \"dd04e041-0463-4e2e-8023-35d3e3683e3d\") " pod="openshift-route-controller-manager/route-controller-manager-646759d888-tnkf4" Dec 12 15:25:05 crc kubenswrapper[5123]: I1212 15:25:05.471998 5123 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/dd04e041-0463-4e2e-8023-35d3e3683e3d-client-ca\") pod \"route-controller-manager-646759d888-tnkf4\" (UID: \"dd04e041-0463-4e2e-8023-35d3e3683e3d\") " pod="openshift-route-controller-manager/route-controller-manager-646759d888-tnkf4" Dec 12 15:25:05 crc kubenswrapper[5123]: I1212 15:25:05.472041 5123 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dd04e041-0463-4e2e-8023-35d3e3683e3d-config\") pod \"route-controller-manager-646759d888-tnkf4\" (UID: \"dd04e041-0463-4e2e-8023-35d3e3683e3d\") " 
pod="openshift-route-controller-manager/route-controller-manager-646759d888-tnkf4" Dec 12 15:25:05 crc kubenswrapper[5123]: I1212 15:25:05.472131 5123 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h4b6s\" (UniqueName: \"kubernetes.io/projected/dd04e041-0463-4e2e-8023-35d3e3683e3d-kube-api-access-h4b6s\") pod \"route-controller-manager-646759d888-tnkf4\" (UID: \"dd04e041-0463-4e2e-8023-35d3e3683e3d\") " pod="openshift-route-controller-manager/route-controller-manager-646759d888-tnkf4" Dec 12 15:25:05 crc kubenswrapper[5123]: I1212 15:25:05.472194 5123 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/dd04e041-0463-4e2e-8023-35d3e3683e3d-serving-cert\") pod \"route-controller-manager-646759d888-tnkf4\" (UID: \"dd04e041-0463-4e2e-8023-35d3e3683e3d\") " pod="openshift-route-controller-manager/route-controller-manager-646759d888-tnkf4" Dec 12 15:25:05 crc kubenswrapper[5123]: I1212 15:25:05.533929 5123 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-648f5757c8-bzrzr"] Dec 12 15:25:05 crc kubenswrapper[5123]: I1212 15:25:05.540629 5123 scope.go:117] "RemoveContainer" containerID="fe8c17cb8c70c68158d251ea6e9e78e00020a626d6846d902bca9fec91f3b3fe" Dec 12 15:25:05 crc kubenswrapper[5123]: E1212 15:25:05.548441 5123 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fe8c17cb8c70c68158d251ea6e9e78e00020a626d6846d902bca9fec91f3b3fe\": container with ID starting with fe8c17cb8c70c68158d251ea6e9e78e00020a626d6846d902bca9fec91f3b3fe not found: ID does not exist" containerID="fe8c17cb8c70c68158d251ea6e9e78e00020a626d6846d902bca9fec91f3b3fe" Dec 12 15:25:05 crc kubenswrapper[5123]: I1212 15:25:05.548519 5123 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"fe8c17cb8c70c68158d251ea6e9e78e00020a626d6846d902bca9fec91f3b3fe"} err="failed to get container status \"fe8c17cb8c70c68158d251ea6e9e78e00020a626d6846d902bca9fec91f3b3fe\": rpc error: code = NotFound desc = could not find container \"fe8c17cb8c70c68158d251ea6e9e78e00020a626d6846d902bca9fec91f3b3fe\": container with ID starting with fe8c17cb8c70c68158d251ea6e9e78e00020a626d6846d902bca9fec91f3b3fe not found: ID does not exist" Dec 12 15:25:05 crc kubenswrapper[5123]: I1212 15:25:05.551052 5123 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-648f5757c8-bzrzr"] Dec 12 15:25:05 crc kubenswrapper[5123]: I1212 15:25:05.573387 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-h4b6s\" (UniqueName: \"kubernetes.io/projected/dd04e041-0463-4e2e-8023-35d3e3683e3d-kube-api-access-h4b6s\") pod \"route-controller-manager-646759d888-tnkf4\" (UID: \"dd04e041-0463-4e2e-8023-35d3e3683e3d\") " pod="openshift-route-controller-manager/route-controller-manager-646759d888-tnkf4" Dec 12 15:25:05 crc kubenswrapper[5123]: I1212 15:25:05.573481 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/dd04e041-0463-4e2e-8023-35d3e3683e3d-serving-cert\") pod \"route-controller-manager-646759d888-tnkf4\" (UID: \"dd04e041-0463-4e2e-8023-35d3e3683e3d\") " pod="openshift-route-controller-manager/route-controller-manager-646759d888-tnkf4" Dec 12 15:25:05 crc kubenswrapper[5123]: I1212 15:25:05.573549 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/dd04e041-0463-4e2e-8023-35d3e3683e3d-tmp\") pod \"route-controller-manager-646759d888-tnkf4\" (UID: \"dd04e041-0463-4e2e-8023-35d3e3683e3d\") " pod="openshift-route-controller-manager/route-controller-manager-646759d888-tnkf4" Dec 12 15:25:05 crc 
kubenswrapper[5123]: I1212 15:25:05.573592 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/dd04e041-0463-4e2e-8023-35d3e3683e3d-client-ca\") pod \"route-controller-manager-646759d888-tnkf4\" (UID: \"dd04e041-0463-4e2e-8023-35d3e3683e3d\") " pod="openshift-route-controller-manager/route-controller-manager-646759d888-tnkf4" Dec 12 15:25:05 crc kubenswrapper[5123]: I1212 15:25:05.573643 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dd04e041-0463-4e2e-8023-35d3e3683e3d-config\") pod \"route-controller-manager-646759d888-tnkf4\" (UID: \"dd04e041-0463-4e2e-8023-35d3e3683e3d\") " pod="openshift-route-controller-manager/route-controller-manager-646759d888-tnkf4" Dec 12 15:25:05 crc kubenswrapper[5123]: I1212 15:25:05.575084 5123 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/dd04e041-0463-4e2e-8023-35d3e3683e3d-client-ca\") pod \"route-controller-manager-646759d888-tnkf4\" (UID: \"dd04e041-0463-4e2e-8023-35d3e3683e3d\") " pod="openshift-route-controller-manager/route-controller-manager-646759d888-tnkf4" Dec 12 15:25:05 crc kubenswrapper[5123]: I1212 15:25:05.575209 5123 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dd04e041-0463-4e2e-8023-35d3e3683e3d-config\") pod \"route-controller-manager-646759d888-tnkf4\" (UID: \"dd04e041-0463-4e2e-8023-35d3e3683e3d\") " pod="openshift-route-controller-manager/route-controller-manager-646759d888-tnkf4" Dec 12 15:25:05 crc kubenswrapper[5123]: I1212 15:25:05.575354 5123 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/dd04e041-0463-4e2e-8023-35d3e3683e3d-tmp\") pod \"route-controller-manager-646759d888-tnkf4\" (UID: \"dd04e041-0463-4e2e-8023-35d3e3683e3d\") " 
pod="openshift-route-controller-manager/route-controller-manager-646759d888-tnkf4" Dec 12 15:25:05 crc kubenswrapper[5123]: I1212 15:25:05.582430 5123 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/dd04e041-0463-4e2e-8023-35d3e3683e3d-serving-cert\") pod \"route-controller-manager-646759d888-tnkf4\" (UID: \"dd04e041-0463-4e2e-8023-35d3e3683e3d\") " pod="openshift-route-controller-manager/route-controller-manager-646759d888-tnkf4" Dec 12 15:25:05 crc kubenswrapper[5123]: I1212 15:25:05.604101 5123 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-h4b6s\" (UniqueName: \"kubernetes.io/projected/dd04e041-0463-4e2e-8023-35d3e3683e3d-kube-api-access-h4b6s\") pod \"route-controller-manager-646759d888-tnkf4\" (UID: \"dd04e041-0463-4e2e-8023-35d3e3683e3d\") " pod="openshift-route-controller-manager/route-controller-manager-646759d888-tnkf4" Dec 12 15:25:05 crc kubenswrapper[5123]: I1212 15:25:05.648415 5123 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="99476b9c-cc9a-4c2b-b789-ec5d59580a87" path="/var/lib/kubelet/pods/99476b9c-cc9a-4c2b-b789-ec5d59580a87/volumes" Dec 12 15:25:05 crc kubenswrapper[5123]: I1212 15:25:05.821831 5123 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-646759d888-tnkf4" Dec 12 15:25:05 crc kubenswrapper[5123]: I1212 15:25:05.837568 5123 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-5f9ccc8bd6-jfwtd" Dec 12 15:25:05 crc kubenswrapper[5123]: I1212 15:25:05.879030 5123 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/df73cc11-a7c1-4d5a-ac3a-b8a35374238e-tmp\") pod \"df73cc11-a7c1-4d5a-ac3a-b8a35374238e\" (UID: \"df73cc11-a7c1-4d5a-ac3a-b8a35374238e\") " Dec 12 15:25:05 crc kubenswrapper[5123]: I1212 15:25:05.879145 5123 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/df73cc11-a7c1-4d5a-ac3a-b8a35374238e-config\") pod \"df73cc11-a7c1-4d5a-ac3a-b8a35374238e\" (UID: \"df73cc11-a7c1-4d5a-ac3a-b8a35374238e\") " Dec 12 15:25:05 crc kubenswrapper[5123]: I1212 15:25:05.879194 5123 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/df73cc11-a7c1-4d5a-ac3a-b8a35374238e-serving-cert\") pod \"df73cc11-a7c1-4d5a-ac3a-b8a35374238e\" (UID: \"df73cc11-a7c1-4d5a-ac3a-b8a35374238e\") " Dec 12 15:25:05 crc kubenswrapper[5123]: I1212 15:25:05.879312 5123 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q2g6l\" (UniqueName: \"kubernetes.io/projected/df73cc11-a7c1-4d5a-ac3a-b8a35374238e-kube-api-access-q2g6l\") pod \"df73cc11-a7c1-4d5a-ac3a-b8a35374238e\" (UID: \"df73cc11-a7c1-4d5a-ac3a-b8a35374238e\") " Dec 12 15:25:05 crc kubenswrapper[5123]: I1212 15:25:05.879353 5123 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/df73cc11-a7c1-4d5a-ac3a-b8a35374238e-proxy-ca-bundles\") pod \"df73cc11-a7c1-4d5a-ac3a-b8a35374238e\" (UID: \"df73cc11-a7c1-4d5a-ac3a-b8a35374238e\") " Dec 12 15:25:05 crc kubenswrapper[5123]: I1212 15:25:05.879424 5123 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"client-ca\" 
(UniqueName: \"kubernetes.io/configmap/df73cc11-a7c1-4d5a-ac3a-b8a35374238e-client-ca\") pod \"df73cc11-a7c1-4d5a-ac3a-b8a35374238e\" (UID: \"df73cc11-a7c1-4d5a-ac3a-b8a35374238e\") " Dec 12 15:25:05 crc kubenswrapper[5123]: I1212 15:25:05.881242 5123 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/df73cc11-a7c1-4d5a-ac3a-b8a35374238e-tmp" (OuterVolumeSpecName: "tmp") pod "df73cc11-a7c1-4d5a-ac3a-b8a35374238e" (UID: "df73cc11-a7c1-4d5a-ac3a-b8a35374238e"). InnerVolumeSpecName "tmp". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 12 15:25:05 crc kubenswrapper[5123]: I1212 15:25:05.881452 5123 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/df73cc11-a7c1-4d5a-ac3a-b8a35374238e-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "df73cc11-a7c1-4d5a-ac3a-b8a35374238e" (UID: "df73cc11-a7c1-4d5a-ac3a-b8a35374238e"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 12 15:25:05 crc kubenswrapper[5123]: I1212 15:25:05.881585 5123 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/df73cc11-a7c1-4d5a-ac3a-b8a35374238e-client-ca" (OuterVolumeSpecName: "client-ca") pod "df73cc11-a7c1-4d5a-ac3a-b8a35374238e" (UID: "df73cc11-a7c1-4d5a-ac3a-b8a35374238e"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 12 15:25:05 crc kubenswrapper[5123]: I1212 15:25:05.881924 5123 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/df73cc11-a7c1-4d5a-ac3a-b8a35374238e-config" (OuterVolumeSpecName: "config") pod "df73cc11-a7c1-4d5a-ac3a-b8a35374238e" (UID: "df73cc11-a7c1-4d5a-ac3a-b8a35374238e"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 12 15:25:05 crc kubenswrapper[5123]: I1212 15:25:05.887663 5123 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-5c48dc655c-6dn2j"] Dec 12 15:25:05 crc kubenswrapper[5123]: I1212 15:25:05.888621 5123 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="df73cc11-a7c1-4d5a-ac3a-b8a35374238e" containerName="controller-manager" Dec 12 15:25:05 crc kubenswrapper[5123]: I1212 15:25:05.888651 5123 state_mem.go:107] "Deleted CPUSet assignment" podUID="df73cc11-a7c1-4d5a-ac3a-b8a35374238e" containerName="controller-manager" Dec 12 15:25:05 crc kubenswrapper[5123]: I1212 15:25:05.888760 5123 memory_manager.go:356] "RemoveStaleState removing state" podUID="df73cc11-a7c1-4d5a-ac3a-b8a35374238e" containerName="controller-manager" Dec 12 15:25:05 crc kubenswrapper[5123]: I1212 15:25:05.893426 5123 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/df73cc11-a7c1-4d5a-ac3a-b8a35374238e-kube-api-access-q2g6l" (OuterVolumeSpecName: "kube-api-access-q2g6l") pod "df73cc11-a7c1-4d5a-ac3a-b8a35374238e" (UID: "df73cc11-a7c1-4d5a-ac3a-b8a35374238e"). InnerVolumeSpecName "kube-api-access-q2g6l". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 12 15:25:05 crc kubenswrapper[5123]: I1212 15:25:05.893463 5123 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/df73cc11-a7c1-4d5a-ac3a-b8a35374238e-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "df73cc11-a7c1-4d5a-ac3a-b8a35374238e" (UID: "df73cc11-a7c1-4d5a-ac3a-b8a35374238e"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 12 15:25:05 crc kubenswrapper[5123]: I1212 15:25:05.894878 5123 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-5c48dc655c-6dn2j"] Dec 12 15:25:05 crc kubenswrapper[5123]: I1212 15:25:05.895076 5123 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-5c48dc655c-6dn2j" Dec 12 15:25:05 crc kubenswrapper[5123]: I1212 15:25:05.983482 5123 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6gjhb\" (UniqueName: \"kubernetes.io/projected/8571c9d9-89bb-41c9-9efc-1611a1410275-kube-api-access-6gjhb\") pod \"controller-manager-5c48dc655c-6dn2j\" (UID: \"8571c9d9-89bb-41c9-9efc-1611a1410275\") " pod="openshift-controller-manager/controller-manager-5c48dc655c-6dn2j" Dec 12 15:25:05 crc kubenswrapper[5123]: I1212 15:25:05.983532 5123 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/8571c9d9-89bb-41c9-9efc-1611a1410275-proxy-ca-bundles\") pod \"controller-manager-5c48dc655c-6dn2j\" (UID: \"8571c9d9-89bb-41c9-9efc-1611a1410275\") " pod="openshift-controller-manager/controller-manager-5c48dc655c-6dn2j" Dec 12 15:25:05 crc kubenswrapper[5123]: I1212 15:25:05.983607 5123 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8571c9d9-89bb-41c9-9efc-1611a1410275-config\") pod \"controller-manager-5c48dc655c-6dn2j\" (UID: \"8571c9d9-89bb-41c9-9efc-1611a1410275\") " pod="openshift-controller-manager/controller-manager-5c48dc655c-6dn2j" Dec 12 15:25:05 crc kubenswrapper[5123]: I1212 15:25:05.983650 5123 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/8571c9d9-89bb-41c9-9efc-1611a1410275-serving-cert\") pod \"controller-manager-5c48dc655c-6dn2j\" (UID: \"8571c9d9-89bb-41c9-9efc-1611a1410275\") " pod="openshift-controller-manager/controller-manager-5c48dc655c-6dn2j" Dec 12 15:25:05 crc kubenswrapper[5123]: I1212 15:25:05.983669 5123 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/8571c9d9-89bb-41c9-9efc-1611a1410275-client-ca\") pod \"controller-manager-5c48dc655c-6dn2j\" (UID: \"8571c9d9-89bb-41c9-9efc-1611a1410275\") " pod="openshift-controller-manager/controller-manager-5c48dc655c-6dn2j" Dec 12 15:25:05 crc kubenswrapper[5123]: I1212 15:25:05.983707 5123 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/8571c9d9-89bb-41c9-9efc-1611a1410275-tmp\") pod \"controller-manager-5c48dc655c-6dn2j\" (UID: \"8571c9d9-89bb-41c9-9efc-1611a1410275\") " pod="openshift-controller-manager/controller-manager-5c48dc655c-6dn2j" Dec 12 15:25:05 crc kubenswrapper[5123]: I1212 15:25:05.983769 5123 reconciler_common.go:299] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/df73cc11-a7c1-4d5a-ac3a-b8a35374238e-client-ca\") on node \"crc\" DevicePath \"\"" Dec 12 15:25:05 crc kubenswrapper[5123]: I1212 15:25:05.983786 5123 reconciler_common.go:299] "Volume detached for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/df73cc11-a7c1-4d5a-ac3a-b8a35374238e-tmp\") on node \"crc\" DevicePath \"\"" Dec 12 15:25:05 crc kubenswrapper[5123]: I1212 15:25:05.983795 5123 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/df73cc11-a7c1-4d5a-ac3a-b8a35374238e-config\") on node \"crc\" DevicePath \"\"" Dec 12 15:25:05 crc kubenswrapper[5123]: I1212 15:25:05.983803 5123 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/df73cc11-a7c1-4d5a-ac3a-b8a35374238e-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 12 15:25:05 crc kubenswrapper[5123]: I1212 15:25:05.983811 5123 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-q2g6l\" (UniqueName: \"kubernetes.io/projected/df73cc11-a7c1-4d5a-ac3a-b8a35374238e-kube-api-access-q2g6l\") on node \"crc\" DevicePath \"\"" Dec 12 15:25:05 crc kubenswrapper[5123]: I1212 15:25:05.983829 5123 reconciler_common.go:299] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/df73cc11-a7c1-4d5a-ac3a-b8a35374238e-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Dec 12 15:25:06 crc kubenswrapper[5123]: I1212 15:25:06.142049 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8571c9d9-89bb-41c9-9efc-1611a1410275-serving-cert\") pod \"controller-manager-5c48dc655c-6dn2j\" (UID: \"8571c9d9-89bb-41c9-9efc-1611a1410275\") " pod="openshift-controller-manager/controller-manager-5c48dc655c-6dn2j" Dec 12 15:25:06 crc kubenswrapper[5123]: I1212 15:25:06.142546 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/8571c9d9-89bb-41c9-9efc-1611a1410275-client-ca\") pod \"controller-manager-5c48dc655c-6dn2j\" (UID: \"8571c9d9-89bb-41c9-9efc-1611a1410275\") " pod="openshift-controller-manager/controller-manager-5c48dc655c-6dn2j" Dec 12 15:25:06 crc kubenswrapper[5123]: I1212 15:25:06.142649 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/8571c9d9-89bb-41c9-9efc-1611a1410275-tmp\") pod \"controller-manager-5c48dc655c-6dn2j\" (UID: \"8571c9d9-89bb-41c9-9efc-1611a1410275\") " pod="openshift-controller-manager/controller-manager-5c48dc655c-6dn2j" Dec 12 15:25:06 crc kubenswrapper[5123]: I1212 15:25:06.143420 5123 operation_generator.go:615] 
"MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/8571c9d9-89bb-41c9-9efc-1611a1410275-tmp\") pod \"controller-manager-5c48dc655c-6dn2j\" (UID: \"8571c9d9-89bb-41c9-9efc-1611a1410275\") " pod="openshift-controller-manager/controller-manager-5c48dc655c-6dn2j" Dec 12 15:25:06 crc kubenswrapper[5123]: I1212 15:25:06.143518 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-6gjhb\" (UniqueName: \"kubernetes.io/projected/8571c9d9-89bb-41c9-9efc-1611a1410275-kube-api-access-6gjhb\") pod \"controller-manager-5c48dc655c-6dn2j\" (UID: \"8571c9d9-89bb-41c9-9efc-1611a1410275\") " pod="openshift-controller-manager/controller-manager-5c48dc655c-6dn2j" Dec 12 15:25:06 crc kubenswrapper[5123]: I1212 15:25:06.144405 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/8571c9d9-89bb-41c9-9efc-1611a1410275-proxy-ca-bundles\") pod \"controller-manager-5c48dc655c-6dn2j\" (UID: \"8571c9d9-89bb-41c9-9efc-1611a1410275\") " pod="openshift-controller-manager/controller-manager-5c48dc655c-6dn2j" Dec 12 15:25:06 crc kubenswrapper[5123]: I1212 15:25:06.144612 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8571c9d9-89bb-41c9-9efc-1611a1410275-config\") pod \"controller-manager-5c48dc655c-6dn2j\" (UID: \"8571c9d9-89bb-41c9-9efc-1611a1410275\") " pod="openshift-controller-manager/controller-manager-5c48dc655c-6dn2j" Dec 12 15:25:06 crc kubenswrapper[5123]: I1212 15:25:06.144795 5123 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/8571c9d9-89bb-41c9-9efc-1611a1410275-client-ca\") pod \"controller-manager-5c48dc655c-6dn2j\" (UID: \"8571c9d9-89bb-41c9-9efc-1611a1410275\") " pod="openshift-controller-manager/controller-manager-5c48dc655c-6dn2j" Dec 12 15:25:06 crc 
kubenswrapper[5123]: I1212 15:25:06.146138 5123 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8571c9d9-89bb-41c9-9efc-1611a1410275-config\") pod \"controller-manager-5c48dc655c-6dn2j\" (UID: \"8571c9d9-89bb-41c9-9efc-1611a1410275\") " pod="openshift-controller-manager/controller-manager-5c48dc655c-6dn2j" Dec 12 15:25:06 crc kubenswrapper[5123]: I1212 15:25:06.146363 5123 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/8571c9d9-89bb-41c9-9efc-1611a1410275-proxy-ca-bundles\") pod \"controller-manager-5c48dc655c-6dn2j\" (UID: \"8571c9d9-89bb-41c9-9efc-1611a1410275\") " pod="openshift-controller-manager/controller-manager-5c48dc655c-6dn2j" Dec 12 15:25:06 crc kubenswrapper[5123]: I1212 15:25:06.147998 5123 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8571c9d9-89bb-41c9-9efc-1611a1410275-serving-cert\") pod \"controller-manager-5c48dc655c-6dn2j\" (UID: \"8571c9d9-89bb-41c9-9efc-1611a1410275\") " pod="openshift-controller-manager/controller-manager-5c48dc655c-6dn2j" Dec 12 15:25:06 crc kubenswrapper[5123]: I1212 15:25:06.167474 5123 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-6gjhb\" (UniqueName: \"kubernetes.io/projected/8571c9d9-89bb-41c9-9efc-1611a1410275-kube-api-access-6gjhb\") pod \"controller-manager-5c48dc655c-6dn2j\" (UID: \"8571c9d9-89bb-41c9-9efc-1611a1410275\") " pod="openshift-controller-manager/controller-manager-5c48dc655c-6dn2j" Dec 12 15:25:06 crc kubenswrapper[5123]: I1212 15:25:06.221289 5123 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-5c48dc655c-6dn2j" Dec 12 15:25:06 crc kubenswrapper[5123]: I1212 15:25:06.288069 5123 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-5f9ccc8bd6-jfwtd" Dec 12 15:25:06 crc kubenswrapper[5123]: I1212 15:25:06.289462 5123 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5f9ccc8bd6-jfwtd" event={"ID":"df73cc11-a7c1-4d5a-ac3a-b8a35374238e","Type":"ContainerDied","Data":"14d971e1a6ffc83f55c654016240908946e09336a183fa4ac90ecfbce98cf419"} Dec 12 15:25:06 crc kubenswrapper[5123]: I1212 15:25:06.289578 5123 scope.go:117] "RemoveContainer" containerID="2526f52b038f14a479693699eb89f83bd3f93aeb6be60c0ab0255965f3c2c673" Dec 12 15:25:06 crc kubenswrapper[5123]: I1212 15:25:06.307145 5123 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-646759d888-tnkf4"] Dec 12 15:25:06 crc kubenswrapper[5123]: I1212 15:25:06.360083 5123 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-5f9ccc8bd6-jfwtd"] Dec 12 15:25:06 crc kubenswrapper[5123]: I1212 15:25:06.370529 5123 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-5f9ccc8bd6-jfwtd"] Dec 12 15:25:06 crc kubenswrapper[5123]: I1212 15:25:06.476091 5123 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-5c48dc655c-6dn2j"] Dec 12 15:25:06 crc kubenswrapper[5123]: W1212 15:25:06.489349 5123 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8571c9d9_89bb_41c9_9efc_1611a1410275.slice/crio-362559ee57886feecddf6929ef638afe4353d1a807ba8b62f66c584f92a67024 WatchSource:0}: Error finding container 362559ee57886feecddf6929ef638afe4353d1a807ba8b62f66c584f92a67024: Status 404 returned error can't find the container with id 362559ee57886feecddf6929ef638afe4353d1a807ba8b62f66c584f92a67024 Dec 12 15:25:06 crc kubenswrapper[5123]: I1212 15:25:06.610790 5123 
patch_prober.go:28] interesting pod/controller-manager-5f9ccc8bd6-jfwtd container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.64:8443/healthz\": context deadline exceeded" start-of-body= Dec 12 15:25:06 crc kubenswrapper[5123]: I1212 15:25:06.610900 5123 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-5f9ccc8bd6-jfwtd" podUID="df73cc11-a7c1-4d5a-ac3a-b8a35374238e" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.64:8443/healthz\": context deadline exceeded" Dec 12 15:25:07 crc kubenswrapper[5123]: I1212 15:25:07.315195 5123 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-646759d888-tnkf4" event={"ID":"dd04e041-0463-4e2e-8023-35d3e3683e3d","Type":"ContainerStarted","Data":"1f835a2afd147c4ed4a58f549de3d69ee745d8eba0a19152b56a04f224e3f752"} Dec 12 15:25:07 crc kubenswrapper[5123]: I1212 15:25:07.315664 5123 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-646759d888-tnkf4" event={"ID":"dd04e041-0463-4e2e-8023-35d3e3683e3d","Type":"ContainerStarted","Data":"e80266524bbd9d3ec594c518295d909e87f8045afdaee6493368f472c38f2778"} Dec 12 15:25:07 crc kubenswrapper[5123]: I1212 15:25:07.315936 5123 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-route-controller-manager/route-controller-manager-646759d888-tnkf4" Dec 12 15:25:07 crc kubenswrapper[5123]: I1212 15:25:07.318691 5123 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5c48dc655c-6dn2j" event={"ID":"8571c9d9-89bb-41c9-9efc-1611a1410275","Type":"ContainerStarted","Data":"329323b98be1d627b32140ff3b932f324bb393c64f262547791f01adac4d3c54"} Dec 12 15:25:07 crc kubenswrapper[5123]: I1212 15:25:07.318749 5123 kubelet.go:2569] 
"SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5c48dc655c-6dn2j" event={"ID":"8571c9d9-89bb-41c9-9efc-1611a1410275","Type":"ContainerStarted","Data":"362559ee57886feecddf6929ef638afe4353d1a807ba8b62f66c584f92a67024"} Dec 12 15:25:07 crc kubenswrapper[5123]: I1212 15:25:07.320184 5123 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-controller-manager/controller-manager-5c48dc655c-6dn2j" Dec 12 15:25:07 crc kubenswrapper[5123]: I1212 15:25:07.329182 5123 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-646759d888-tnkf4" Dec 12 15:25:07 crc kubenswrapper[5123]: I1212 15:25:07.344065 5123 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-646759d888-tnkf4" podStartSLOduration=4.344034965 podStartE2EDuration="4.344034965s" podCreationTimestamp="2025-12-12 15:25:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 15:25:07.336844397 +0000 UTC m=+336.146796918" watchObservedRunningTime="2025-12-12 15:25:07.344034965 +0000 UTC m=+336.153987476" Dec 12 15:25:07 crc kubenswrapper[5123]: I1212 15:25:07.359100 5123 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-5c48dc655c-6dn2j" podStartSLOduration=4.35907331 podStartE2EDuration="4.35907331s" podCreationTimestamp="2025-12-12 15:25:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 15:25:07.358927187 +0000 UTC m=+336.168879708" watchObservedRunningTime="2025-12-12 15:25:07.35907331 +0000 UTC m=+336.169025821" Dec 12 15:25:07 crc kubenswrapper[5123]: I1212 15:25:07.455666 5123 kubelet.go:2658] "SyncLoop (probe)" 
probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-5c48dc655c-6dn2j" Dec 12 15:25:07 crc kubenswrapper[5123]: I1212 15:25:07.649236 5123 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="df73cc11-a7c1-4d5a-ac3a-b8a35374238e" path="/var/lib/kubelet/pods/df73cc11-a7c1-4d5a-ac3a-b8a35374238e/volumes" Dec 12 15:25:07 crc kubenswrapper[5123]: I1212 15:25:07.783084 5123 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Dec 12 15:25:09 crc kubenswrapper[5123]: I1212 15:25:09.109935 5123 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-5d9d95bf5b-n874n"] Dec 12 15:25:09 crc kubenswrapper[5123]: I1212 15:25:09.165626 5123 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-5d9d95bf5b-n874n"] Dec 12 15:25:09 crc kubenswrapper[5123]: I1212 15:25:09.165970 5123 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-5d9d95bf5b-n874n" Dec 12 15:25:09 crc kubenswrapper[5123]: I1212 15:25:09.524740 5123 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/44129b26-d88b-42bc-baa6-d2f833fa9a19-registry-tls\") pod \"image-registry-5d9d95bf5b-n874n\" (UID: \"44129b26-d88b-42bc-baa6-d2f833fa9a19\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-n874n" Dec 12 15:25:09 crc kubenswrapper[5123]: I1212 15:25:09.525165 5123 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/44129b26-d88b-42bc-baa6-d2f833fa9a19-ca-trust-extracted\") pod \"image-registry-5d9d95bf5b-n874n\" (UID: \"44129b26-d88b-42bc-baa6-d2f833fa9a19\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-n874n" Dec 12 15:25:09 crc kubenswrapper[5123]: I1212 15:25:09.525248 5123 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/44129b26-d88b-42bc-baa6-d2f833fa9a19-registry-certificates\") pod \"image-registry-5d9d95bf5b-n874n\" (UID: \"44129b26-d88b-42bc-baa6-d2f833fa9a19\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-n874n" Dec 12 15:25:09 crc kubenswrapper[5123]: I1212 15:25:09.525306 5123 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zkr99\" (UniqueName: \"kubernetes.io/projected/44129b26-d88b-42bc-baa6-d2f833fa9a19-kube-api-access-zkr99\") pod \"image-registry-5d9d95bf5b-n874n\" (UID: \"44129b26-d88b-42bc-baa6-d2f833fa9a19\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-n874n" Dec 12 15:25:09 crc kubenswrapper[5123]: I1212 15:25:09.525344 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume 
\"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-5d9d95bf5b-n874n\" (UID: \"44129b26-d88b-42bc-baa6-d2f833fa9a19\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-n874n" Dec 12 15:25:09 crc kubenswrapper[5123]: I1212 15:25:09.525401 5123 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/44129b26-d88b-42bc-baa6-d2f833fa9a19-trusted-ca\") pod \"image-registry-5d9d95bf5b-n874n\" (UID: \"44129b26-d88b-42bc-baa6-d2f833fa9a19\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-n874n" Dec 12 15:25:09 crc kubenswrapper[5123]: I1212 15:25:09.525509 5123 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/44129b26-d88b-42bc-baa6-d2f833fa9a19-installation-pull-secrets\") pod \"image-registry-5d9d95bf5b-n874n\" (UID: \"44129b26-d88b-42bc-baa6-d2f833fa9a19\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-n874n" Dec 12 15:25:09 crc kubenswrapper[5123]: I1212 15:25:09.525536 5123 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/44129b26-d88b-42bc-baa6-d2f833fa9a19-bound-sa-token\") pod \"image-registry-5d9d95bf5b-n874n\" (UID: \"44129b26-d88b-42bc-baa6-d2f833fa9a19\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-n874n" Dec 12 15:25:09 crc kubenswrapper[5123]: I1212 15:25:09.578726 5123 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-5d9d95bf5b-n874n\" (UID: \"44129b26-d88b-42bc-baa6-d2f833fa9a19\") " 
pod="openshift-image-registry/image-registry-5d9d95bf5b-n874n" Dec 12 15:25:09 crc kubenswrapper[5123]: I1212 15:25:09.627009 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/44129b26-d88b-42bc-baa6-d2f833fa9a19-registry-tls\") pod \"image-registry-5d9d95bf5b-n874n\" (UID: \"44129b26-d88b-42bc-baa6-d2f833fa9a19\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-n874n" Dec 12 15:25:09 crc kubenswrapper[5123]: I1212 15:25:09.627084 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/44129b26-d88b-42bc-baa6-d2f833fa9a19-ca-trust-extracted\") pod \"image-registry-5d9d95bf5b-n874n\" (UID: \"44129b26-d88b-42bc-baa6-d2f833fa9a19\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-n874n" Dec 12 15:25:09 crc kubenswrapper[5123]: I1212 15:25:09.627110 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/44129b26-d88b-42bc-baa6-d2f833fa9a19-registry-certificates\") pod \"image-registry-5d9d95bf5b-n874n\" (UID: \"44129b26-d88b-42bc-baa6-d2f833fa9a19\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-n874n" Dec 12 15:25:09 crc kubenswrapper[5123]: I1212 15:25:09.627143 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-zkr99\" (UniqueName: \"kubernetes.io/projected/44129b26-d88b-42bc-baa6-d2f833fa9a19-kube-api-access-zkr99\") pod \"image-registry-5d9d95bf5b-n874n\" (UID: \"44129b26-d88b-42bc-baa6-d2f833fa9a19\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-n874n" Dec 12 15:25:09 crc kubenswrapper[5123]: I1212 15:25:09.627177 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/44129b26-d88b-42bc-baa6-d2f833fa9a19-trusted-ca\") pod 
\"image-registry-5d9d95bf5b-n874n\" (UID: \"44129b26-d88b-42bc-baa6-d2f833fa9a19\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-n874n" Dec 12 15:25:09 crc kubenswrapper[5123]: I1212 15:25:09.627298 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/44129b26-d88b-42bc-baa6-d2f833fa9a19-installation-pull-secrets\") pod \"image-registry-5d9d95bf5b-n874n\" (UID: \"44129b26-d88b-42bc-baa6-d2f833fa9a19\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-n874n" Dec 12 15:25:09 crc kubenswrapper[5123]: I1212 15:25:09.627326 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/44129b26-d88b-42bc-baa6-d2f833fa9a19-bound-sa-token\") pod \"image-registry-5d9d95bf5b-n874n\" (UID: \"44129b26-d88b-42bc-baa6-d2f833fa9a19\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-n874n" Dec 12 15:25:09 crc kubenswrapper[5123]: I1212 15:25:09.628457 5123 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/44129b26-d88b-42bc-baa6-d2f833fa9a19-ca-trust-extracted\") pod \"image-registry-5d9d95bf5b-n874n\" (UID: \"44129b26-d88b-42bc-baa6-d2f833fa9a19\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-n874n" Dec 12 15:25:09 crc kubenswrapper[5123]: I1212 15:25:09.629751 5123 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/44129b26-d88b-42bc-baa6-d2f833fa9a19-trusted-ca\") pod \"image-registry-5d9d95bf5b-n874n\" (UID: \"44129b26-d88b-42bc-baa6-d2f833fa9a19\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-n874n" Dec 12 15:25:09 crc kubenswrapper[5123]: I1212 15:25:09.629873 5123 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: 
\"kubernetes.io/configmap/44129b26-d88b-42bc-baa6-d2f833fa9a19-registry-certificates\") pod \"image-registry-5d9d95bf5b-n874n\" (UID: \"44129b26-d88b-42bc-baa6-d2f833fa9a19\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-n874n" Dec 12 15:25:09 crc kubenswrapper[5123]: I1212 15:25:09.635248 5123 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/44129b26-d88b-42bc-baa6-d2f833fa9a19-registry-tls\") pod \"image-registry-5d9d95bf5b-n874n\" (UID: \"44129b26-d88b-42bc-baa6-d2f833fa9a19\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-n874n" Dec 12 15:25:09 crc kubenswrapper[5123]: I1212 15:25:09.636080 5123 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/44129b26-d88b-42bc-baa6-d2f833fa9a19-installation-pull-secrets\") pod \"image-registry-5d9d95bf5b-n874n\" (UID: \"44129b26-d88b-42bc-baa6-d2f833fa9a19\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-n874n" Dec 12 15:25:09 crc kubenswrapper[5123]: I1212 15:25:09.651471 5123 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/44129b26-d88b-42bc-baa6-d2f833fa9a19-bound-sa-token\") pod \"image-registry-5d9d95bf5b-n874n\" (UID: \"44129b26-d88b-42bc-baa6-d2f833fa9a19\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-n874n" Dec 12 15:25:09 crc kubenswrapper[5123]: I1212 15:25:09.671872 5123 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-zkr99\" (UniqueName: \"kubernetes.io/projected/44129b26-d88b-42bc-baa6-d2f833fa9a19-kube-api-access-zkr99\") pod \"image-registry-5d9d95bf5b-n874n\" (UID: \"44129b26-d88b-42bc-baa6-d2f833fa9a19\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-n874n" Dec 12 15:25:09 crc kubenswrapper[5123]: I1212 15:25:09.793640 5123 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-5d9d95bf5b-n874n" Dec 12 15:25:10 crc kubenswrapper[5123]: I1212 15:25:10.235947 5123 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-5d9d95bf5b-n874n"] Dec 12 15:25:10 crc kubenswrapper[5123]: I1212 15:25:10.989451 5123 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-5d9d95bf5b-n874n" event={"ID":"44129b26-d88b-42bc-baa6-d2f833fa9a19","Type":"ContainerStarted","Data":"436992f9be0361c54d99ea2f5c407d3f0430b99d13b34e74bd36c168fc20ff8e"} Dec 12 15:25:12 crc kubenswrapper[5123]: I1212 15:25:11.999877 5123 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-5d9d95bf5b-n874n" event={"ID":"44129b26-d88b-42bc-baa6-d2f833fa9a19","Type":"ContainerStarted","Data":"4f7228f27a8e04584148a07b4644a9ac6e10d29caa8bf041fee116ebc40784f9"} Dec 12 15:25:12 crc kubenswrapper[5123]: I1212 15:25:12.001068 5123 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-image-registry/image-registry-5d9d95bf5b-n874n" Dec 12 15:25:12 crc kubenswrapper[5123]: I1212 15:25:12.036832 5123 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-5d9d95bf5b-n874n" podStartSLOduration=3.036796982 podStartE2EDuration="3.036796982s" podCreationTimestamp="2025-12-12 15:25:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 15:25:12.032917399 +0000 UTC m=+340.842869940" watchObservedRunningTime="2025-12-12 15:25:12.036796982 +0000 UTC m=+340.846749503" Dec 12 15:25:14 crc kubenswrapper[5123]: I1212 15:25:14.586406 5123 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"multus-daemon-config\"" Dec 12 15:25:23 crc kubenswrapper[5123]: I1212 15:25:23.494715 5123 
kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-5c48dc655c-6dn2j"] Dec 12 15:25:23 crc kubenswrapper[5123]: I1212 15:25:23.495712 5123 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-5c48dc655c-6dn2j" podUID="8571c9d9-89bb-41c9-9efc-1611a1410275" containerName="controller-manager" containerID="cri-o://329323b98be1d627b32140ff3b932f324bb393c64f262547791f01adac4d3c54" gracePeriod=30 Dec 12 15:25:23 crc kubenswrapper[5123]: I1212 15:25:23.530661 5123 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-646759d888-tnkf4"] Dec 12 15:25:23 crc kubenswrapper[5123]: I1212 15:25:23.531441 5123 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-646759d888-tnkf4" podUID="dd04e041-0463-4e2e-8023-35d3e3683e3d" containerName="route-controller-manager" containerID="cri-o://1f835a2afd147c4ed4a58f549de3d69ee745d8eba0a19152b56a04f224e3f752" gracePeriod=30 Dec 12 15:25:24 crc kubenswrapper[5123]: I1212 15:25:24.092626 5123 generic.go:358] "Generic (PLEG): container finished" podID="dd04e041-0463-4e2e-8023-35d3e3683e3d" containerID="1f835a2afd147c4ed4a58f549de3d69ee745d8eba0a19152b56a04f224e3f752" exitCode=0 Dec 12 15:25:24 crc kubenswrapper[5123]: I1212 15:25:24.092697 5123 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-646759d888-tnkf4" event={"ID":"dd04e041-0463-4e2e-8023-35d3e3683e3d","Type":"ContainerDied","Data":"1f835a2afd147c4ed4a58f549de3d69ee745d8eba0a19152b56a04f224e3f752"} Dec 12 15:25:24 crc kubenswrapper[5123]: I1212 15:25:24.106942 5123 generic.go:358] "Generic (PLEG): container finished" podID="8571c9d9-89bb-41c9-9efc-1611a1410275" containerID="329323b98be1d627b32140ff3b932f324bb393c64f262547791f01adac4d3c54" exitCode=0 Dec 12 
15:25:24 crc kubenswrapper[5123]: I1212 15:25:24.107045 5123 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5c48dc655c-6dn2j" event={"ID":"8571c9d9-89bb-41c9-9efc-1611a1410275","Type":"ContainerDied","Data":"329323b98be1d627b32140ff3b932f324bb393c64f262547791f01adac4d3c54"} Dec 12 15:25:24 crc kubenswrapper[5123]: I1212 15:25:24.484871 5123 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-646759d888-tnkf4" Dec 12 15:25:24 crc kubenswrapper[5123]: I1212 15:25:24.523127 5123 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5867759586-bgtt8"] Dec 12 15:25:24 crc kubenswrapper[5123]: I1212 15:25:24.523960 5123 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="dd04e041-0463-4e2e-8023-35d3e3683e3d" containerName="route-controller-manager" Dec 12 15:25:24 crc kubenswrapper[5123]: I1212 15:25:24.523984 5123 state_mem.go:107] "Deleted CPUSet assignment" podUID="dd04e041-0463-4e2e-8023-35d3e3683e3d" containerName="route-controller-manager" Dec 12 15:25:24 crc kubenswrapper[5123]: I1212 15:25:24.524161 5123 memory_manager.go:356] "RemoveStaleState removing state" podUID="dd04e041-0463-4e2e-8023-35d3e3683e3d" containerName="route-controller-manager" Dec 12 15:25:24 crc kubenswrapper[5123]: I1212 15:25:24.529131 5123 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5867759586-bgtt8" Dec 12 15:25:24 crc kubenswrapper[5123]: I1212 15:25:24.533636 5123 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5867759586-bgtt8"] Dec 12 15:25:24 crc kubenswrapper[5123]: I1212 15:25:24.578918 5123 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/dd04e041-0463-4e2e-8023-35d3e3683e3d-serving-cert\") pod \"dd04e041-0463-4e2e-8023-35d3e3683e3d\" (UID: \"dd04e041-0463-4e2e-8023-35d3e3683e3d\") " Dec 12 15:25:24 crc kubenswrapper[5123]: I1212 15:25:24.579030 5123 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/dd04e041-0463-4e2e-8023-35d3e3683e3d-client-ca\") pod \"dd04e041-0463-4e2e-8023-35d3e3683e3d\" (UID: \"dd04e041-0463-4e2e-8023-35d3e3683e3d\") " Dec 12 15:25:24 crc kubenswrapper[5123]: I1212 15:25:24.579088 5123 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/dd04e041-0463-4e2e-8023-35d3e3683e3d-tmp\") pod \"dd04e041-0463-4e2e-8023-35d3e3683e3d\" (UID: \"dd04e041-0463-4e2e-8023-35d3e3683e3d\") " Dec 12 15:25:24 crc kubenswrapper[5123]: I1212 15:25:24.579151 5123 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dd04e041-0463-4e2e-8023-35d3e3683e3d-config\") pod \"dd04e041-0463-4e2e-8023-35d3e3683e3d\" (UID: \"dd04e041-0463-4e2e-8023-35d3e3683e3d\") " Dec 12 15:25:24 crc kubenswrapper[5123]: I1212 15:25:24.579182 5123 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h4b6s\" (UniqueName: \"kubernetes.io/projected/dd04e041-0463-4e2e-8023-35d3e3683e3d-kube-api-access-h4b6s\") pod \"dd04e041-0463-4e2e-8023-35d3e3683e3d\" (UID: 
\"dd04e041-0463-4e2e-8023-35d3e3683e3d\") " Dec 12 15:25:24 crc kubenswrapper[5123]: I1212 15:25:24.579571 5123 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e97bf728-e15a-4bad-9889-9a19f5847ef9-config\") pod \"route-controller-manager-5867759586-bgtt8\" (UID: \"e97bf728-e15a-4bad-9889-9a19f5847ef9\") " pod="openshift-route-controller-manager/route-controller-manager-5867759586-bgtt8" Dec 12 15:25:24 crc kubenswrapper[5123]: I1212 15:25:24.579605 5123 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/e97bf728-e15a-4bad-9889-9a19f5847ef9-tmp\") pod \"route-controller-manager-5867759586-bgtt8\" (UID: \"e97bf728-e15a-4bad-9889-9a19f5847ef9\") " pod="openshift-route-controller-manager/route-controller-manager-5867759586-bgtt8" Dec 12 15:25:24 crc kubenswrapper[5123]: I1212 15:25:24.579637 5123 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4hjn2\" (UniqueName: \"kubernetes.io/projected/e97bf728-e15a-4bad-9889-9a19f5847ef9-kube-api-access-4hjn2\") pod \"route-controller-manager-5867759586-bgtt8\" (UID: \"e97bf728-e15a-4bad-9889-9a19f5847ef9\") " pod="openshift-route-controller-manager/route-controller-manager-5867759586-bgtt8" Dec 12 15:25:24 crc kubenswrapper[5123]: I1212 15:25:24.579700 5123 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e97bf728-e15a-4bad-9889-9a19f5847ef9-serving-cert\") pod \"route-controller-manager-5867759586-bgtt8\" (UID: \"e97bf728-e15a-4bad-9889-9a19f5847ef9\") " pod="openshift-route-controller-manager/route-controller-manager-5867759586-bgtt8" Dec 12 15:25:24 crc kubenswrapper[5123]: I1212 15:25:24.579729 5123 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/e97bf728-e15a-4bad-9889-9a19f5847ef9-client-ca\") pod \"route-controller-manager-5867759586-bgtt8\" (UID: \"e97bf728-e15a-4bad-9889-9a19f5847ef9\") " pod="openshift-route-controller-manager/route-controller-manager-5867759586-bgtt8" Dec 12 15:25:24 crc kubenswrapper[5123]: I1212 15:25:24.589177 5123 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dd04e041-0463-4e2e-8023-35d3e3683e3d-config" (OuterVolumeSpecName: "config") pod "dd04e041-0463-4e2e-8023-35d3e3683e3d" (UID: "dd04e041-0463-4e2e-8023-35d3e3683e3d"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 12 15:25:24 crc kubenswrapper[5123]: I1212 15:25:24.589553 5123 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dd04e041-0463-4e2e-8023-35d3e3683e3d-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "dd04e041-0463-4e2e-8023-35d3e3683e3d" (UID: "dd04e041-0463-4e2e-8023-35d3e3683e3d"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 12 15:25:24 crc kubenswrapper[5123]: I1212 15:25:24.589946 5123 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dd04e041-0463-4e2e-8023-35d3e3683e3d-client-ca" (OuterVolumeSpecName: "client-ca") pod "dd04e041-0463-4e2e-8023-35d3e3683e3d" (UID: "dd04e041-0463-4e2e-8023-35d3e3683e3d"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 12 15:25:24 crc kubenswrapper[5123]: I1212 15:25:24.590284 5123 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/dd04e041-0463-4e2e-8023-35d3e3683e3d-tmp" (OuterVolumeSpecName: "tmp") pod "dd04e041-0463-4e2e-8023-35d3e3683e3d" (UID: "dd04e041-0463-4e2e-8023-35d3e3683e3d"). InnerVolumeSpecName "tmp". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 12 15:25:24 crc kubenswrapper[5123]: I1212 15:25:24.599471 5123 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dd04e041-0463-4e2e-8023-35d3e3683e3d-kube-api-access-h4b6s" (OuterVolumeSpecName: "kube-api-access-h4b6s") pod "dd04e041-0463-4e2e-8023-35d3e3683e3d" (UID: "dd04e041-0463-4e2e-8023-35d3e3683e3d"). InnerVolumeSpecName "kube-api-access-h4b6s". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 12 15:25:24 crc kubenswrapper[5123]: I1212 15:25:24.681653 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e97bf728-e15a-4bad-9889-9a19f5847ef9-serving-cert\") pod \"route-controller-manager-5867759586-bgtt8\" (UID: \"e97bf728-e15a-4bad-9889-9a19f5847ef9\") " pod="openshift-route-controller-manager/route-controller-manager-5867759586-bgtt8" Dec 12 15:25:24 crc kubenswrapper[5123]: I1212 15:25:24.681718 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/e97bf728-e15a-4bad-9889-9a19f5847ef9-client-ca\") pod \"route-controller-manager-5867759586-bgtt8\" (UID: \"e97bf728-e15a-4bad-9889-9a19f5847ef9\") " pod="openshift-route-controller-manager/route-controller-manager-5867759586-bgtt8" Dec 12 15:25:24 crc kubenswrapper[5123]: I1212 15:25:24.681790 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e97bf728-e15a-4bad-9889-9a19f5847ef9-config\") pod \"route-controller-manager-5867759586-bgtt8\" (UID: \"e97bf728-e15a-4bad-9889-9a19f5847ef9\") " pod="openshift-route-controller-manager/route-controller-manager-5867759586-bgtt8" Dec 12 15:25:24 crc kubenswrapper[5123]: I1212 15:25:24.681806 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: 
\"kubernetes.io/empty-dir/e97bf728-e15a-4bad-9889-9a19f5847ef9-tmp\") pod \"route-controller-manager-5867759586-bgtt8\" (UID: \"e97bf728-e15a-4bad-9889-9a19f5847ef9\") " pod="openshift-route-controller-manager/route-controller-manager-5867759586-bgtt8" Dec 12 15:25:24 crc kubenswrapper[5123]: I1212 15:25:24.681850 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-4hjn2\" (UniqueName: \"kubernetes.io/projected/e97bf728-e15a-4bad-9889-9a19f5847ef9-kube-api-access-4hjn2\") pod \"route-controller-manager-5867759586-bgtt8\" (UID: \"e97bf728-e15a-4bad-9889-9a19f5847ef9\") " pod="openshift-route-controller-manager/route-controller-manager-5867759586-bgtt8" Dec 12 15:25:24 crc kubenswrapper[5123]: I1212 15:25:24.681912 5123 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/dd04e041-0463-4e2e-8023-35d3e3683e3d-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 12 15:25:24 crc kubenswrapper[5123]: I1212 15:25:24.681924 5123 reconciler_common.go:299] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/dd04e041-0463-4e2e-8023-35d3e3683e3d-client-ca\") on node \"crc\" DevicePath \"\"" Dec 12 15:25:24 crc kubenswrapper[5123]: I1212 15:25:24.681933 5123 reconciler_common.go:299] "Volume detached for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/dd04e041-0463-4e2e-8023-35d3e3683e3d-tmp\") on node \"crc\" DevicePath \"\"" Dec 12 15:25:24 crc kubenswrapper[5123]: I1212 15:25:24.681945 5123 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dd04e041-0463-4e2e-8023-35d3e3683e3d-config\") on node \"crc\" DevicePath \"\"" Dec 12 15:25:24 crc kubenswrapper[5123]: I1212 15:25:24.681954 5123 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-h4b6s\" (UniqueName: \"kubernetes.io/projected/dd04e041-0463-4e2e-8023-35d3e3683e3d-kube-api-access-h4b6s\") on node \"crc\" DevicePath 
\"\""
Dec 12 15:25:24 crc kubenswrapper[5123]: I1212 15:25:24.682621 5123 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/e97bf728-e15a-4bad-9889-9a19f5847ef9-tmp\") pod \"route-controller-manager-5867759586-bgtt8\" (UID: \"e97bf728-e15a-4bad-9889-9a19f5847ef9\") " pod="openshift-route-controller-manager/route-controller-manager-5867759586-bgtt8"
Dec 12 15:25:24 crc kubenswrapper[5123]: I1212 15:25:24.683544 5123 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e97bf728-e15a-4bad-9889-9a19f5847ef9-config\") pod \"route-controller-manager-5867759586-bgtt8\" (UID: \"e97bf728-e15a-4bad-9889-9a19f5847ef9\") " pod="openshift-route-controller-manager/route-controller-manager-5867759586-bgtt8"
Dec 12 15:25:24 crc kubenswrapper[5123]: I1212 15:25:24.684402 5123 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/e97bf728-e15a-4bad-9889-9a19f5847ef9-client-ca\") pod \"route-controller-manager-5867759586-bgtt8\" (UID: \"e97bf728-e15a-4bad-9889-9a19f5847ef9\") " pod="openshift-route-controller-manager/route-controller-manager-5867759586-bgtt8"
Dec 12 15:25:24 crc kubenswrapper[5123]: I1212 15:25:24.692391 5123 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e97bf728-e15a-4bad-9889-9a19f5847ef9-serving-cert\") pod \"route-controller-manager-5867759586-bgtt8\" (UID: \"e97bf728-e15a-4bad-9889-9a19f5847ef9\") " pod="openshift-route-controller-manager/route-controller-manager-5867759586-bgtt8"
Dec 12 15:25:24 crc kubenswrapper[5123]: I1212 15:25:24.701983 5123 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-4hjn2\" (UniqueName: \"kubernetes.io/projected/e97bf728-e15a-4bad-9889-9a19f5847ef9-kube-api-access-4hjn2\") pod \"route-controller-manager-5867759586-bgtt8\" (UID: \"e97bf728-e15a-4bad-9889-9a19f5847ef9\") " pod="openshift-route-controller-manager/route-controller-manager-5867759586-bgtt8"
Dec 12 15:25:24 crc kubenswrapper[5123]: I1212 15:25:24.734928 5123 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-5c48dc655c-6dn2j"
Dec 12 15:25:24 crc kubenswrapper[5123]: I1212 15:25:24.779260 5123 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-5b78d494cf-zngx4"]
Dec 12 15:25:24 crc kubenswrapper[5123]: I1212 15:25:24.780087 5123 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="8571c9d9-89bb-41c9-9efc-1611a1410275" containerName="controller-manager"
Dec 12 15:25:24 crc kubenswrapper[5123]: I1212 15:25:24.780117 5123 state_mem.go:107] "Deleted CPUSet assignment" podUID="8571c9d9-89bb-41c9-9efc-1611a1410275" containerName="controller-manager"
Dec 12 15:25:24 crc kubenswrapper[5123]: I1212 15:25:24.780284 5123 memory_manager.go:356] "RemoveStaleState removing state" podUID="8571c9d9-89bb-41c9-9efc-1611a1410275" containerName="controller-manager"
Dec 12 15:25:24 crc kubenswrapper[5123]: I1212 15:25:24.787258 5123 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6gjhb\" (UniqueName: \"kubernetes.io/projected/8571c9d9-89bb-41c9-9efc-1611a1410275-kube-api-access-6gjhb\") pod \"8571c9d9-89bb-41c9-9efc-1611a1410275\" (UID: \"8571c9d9-89bb-41c9-9efc-1611a1410275\") "
Dec 12 15:25:24 crc kubenswrapper[5123]: I1212 15:25:24.787358 5123 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/8571c9d9-89bb-41c9-9efc-1611a1410275-proxy-ca-bundles\") pod \"8571c9d9-89bb-41c9-9efc-1611a1410275\" (UID: \"8571c9d9-89bb-41c9-9efc-1611a1410275\") "
Dec 12 15:25:24 crc kubenswrapper[5123]: I1212 15:25:24.787543 5123 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/8571c9d9-89bb-41c9-9efc-1611a1410275-client-ca\") pod \"8571c9d9-89bb-41c9-9efc-1611a1410275\" (UID: \"8571c9d9-89bb-41c9-9efc-1611a1410275\") "
Dec 12 15:25:24 crc kubenswrapper[5123]: I1212 15:25:24.787611 5123 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8571c9d9-89bb-41c9-9efc-1611a1410275-serving-cert\") pod \"8571c9d9-89bb-41c9-9efc-1611a1410275\" (UID: \"8571c9d9-89bb-41c9-9efc-1611a1410275\") "
Dec 12 15:25:24 crc kubenswrapper[5123]: I1212 15:25:24.787653 5123 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8571c9d9-89bb-41c9-9efc-1611a1410275-config\") pod \"8571c9d9-89bb-41c9-9efc-1611a1410275\" (UID: \"8571c9d9-89bb-41c9-9efc-1611a1410275\") "
Dec 12 15:25:24 crc kubenswrapper[5123]: I1212 15:25:24.787680 5123 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/8571c9d9-89bb-41c9-9efc-1611a1410275-tmp\") pod \"8571c9d9-89bb-41c9-9efc-1611a1410275\" (UID: \"8571c9d9-89bb-41c9-9efc-1611a1410275\") "
Dec 12 15:25:24 crc kubenswrapper[5123]: I1212 15:25:24.789083 5123 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8571c9d9-89bb-41c9-9efc-1611a1410275-tmp" (OuterVolumeSpecName: "tmp") pod "8571c9d9-89bb-41c9-9efc-1611a1410275" (UID: "8571c9d9-89bb-41c9-9efc-1611a1410275"). InnerVolumeSpecName "tmp". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Dec 12 15:25:24 crc kubenswrapper[5123]: I1212 15:25:24.790014 5123 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8571c9d9-89bb-41c9-9efc-1611a1410275-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "8571c9d9-89bb-41c9-9efc-1611a1410275" (UID: "8571c9d9-89bb-41c9-9efc-1611a1410275"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Dec 12 15:25:24 crc kubenswrapper[5123]: I1212 15:25:24.791509 5123 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8571c9d9-89bb-41c9-9efc-1611a1410275-config" (OuterVolumeSpecName: "config") pod "8571c9d9-89bb-41c9-9efc-1611a1410275" (UID: "8571c9d9-89bb-41c9-9efc-1611a1410275"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Dec 12 15:25:24 crc kubenswrapper[5123]: I1212 15:25:24.793016 5123 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8571c9d9-89bb-41c9-9efc-1611a1410275-kube-api-access-6gjhb" (OuterVolumeSpecName: "kube-api-access-6gjhb") pod "8571c9d9-89bb-41c9-9efc-1611a1410275" (UID: "8571c9d9-89bb-41c9-9efc-1611a1410275"). InnerVolumeSpecName "kube-api-access-6gjhb". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Dec 12 15:25:24 crc kubenswrapper[5123]: I1212 15:25:24.793392 5123 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8571c9d9-89bb-41c9-9efc-1611a1410275-client-ca" (OuterVolumeSpecName: "client-ca") pod "8571c9d9-89bb-41c9-9efc-1611a1410275" (UID: "8571c9d9-89bb-41c9-9efc-1611a1410275"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Dec 12 15:25:24 crc kubenswrapper[5123]: I1212 15:25:24.793552 5123 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8571c9d9-89bb-41c9-9efc-1611a1410275-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "8571c9d9-89bb-41c9-9efc-1611a1410275" (UID: "8571c9d9-89bb-41c9-9efc-1611a1410275"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Dec 12 15:25:24 crc kubenswrapper[5123]: I1212 15:25:24.796643 5123 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-5b78d494cf-zngx4"
Dec 12 15:25:24 crc kubenswrapper[5123]: I1212 15:25:24.800085 5123 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-5b78d494cf-zngx4"]
Dec 12 15:25:25 crc kubenswrapper[5123]: I1212 15:25:25.230728 5123 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5867759586-bgtt8"
Dec 12 15:25:25 crc kubenswrapper[5123]: I1212 15:25:25.232358 5123 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/28ff21f0-0aaf-4e0c-ae45-e7fe22adc4ca-client-ca\") pod \"controller-manager-5b78d494cf-zngx4\" (UID: \"28ff21f0-0aaf-4e0c-ae45-e7fe22adc4ca\") " pod="openshift-controller-manager/controller-manager-5b78d494cf-zngx4"
Dec 12 15:25:25 crc kubenswrapper[5123]: I1212 15:25:25.232396 5123 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/28ff21f0-0aaf-4e0c-ae45-e7fe22adc4ca-serving-cert\") pod \"controller-manager-5b78d494cf-zngx4\" (UID: \"28ff21f0-0aaf-4e0c-ae45-e7fe22adc4ca\") " pod="openshift-controller-manager/controller-manager-5b78d494cf-zngx4"
Dec 12 15:25:25 crc kubenswrapper[5123]: I1212 15:25:25.232461 5123 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/28ff21f0-0aaf-4e0c-ae45-e7fe22adc4ca-proxy-ca-bundles\") pod \"controller-manager-5b78d494cf-zngx4\" (UID: \"28ff21f0-0aaf-4e0c-ae45-e7fe22adc4ca\") " pod="openshift-controller-manager/controller-manager-5b78d494cf-zngx4"
Dec 12 15:25:25 crc kubenswrapper[5123]: I1212 15:25:25.232490 5123 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/28ff21f0-0aaf-4e0c-ae45-e7fe22adc4ca-tmp\") pod \"controller-manager-5b78d494cf-zngx4\" (UID: \"28ff21f0-0aaf-4e0c-ae45-e7fe22adc4ca\") " pod="openshift-controller-manager/controller-manager-5b78d494cf-zngx4"
Dec 12 15:25:25 crc kubenswrapper[5123]: I1212 15:25:25.232535 5123 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tzvxw\" (UniqueName: \"kubernetes.io/projected/28ff21f0-0aaf-4e0c-ae45-e7fe22adc4ca-kube-api-access-tzvxw\") pod \"controller-manager-5b78d494cf-zngx4\" (UID: \"28ff21f0-0aaf-4e0c-ae45-e7fe22adc4ca\") " pod="openshift-controller-manager/controller-manager-5b78d494cf-zngx4"
Dec 12 15:25:25 crc kubenswrapper[5123]: I1212 15:25:25.232569 5123 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/28ff21f0-0aaf-4e0c-ae45-e7fe22adc4ca-config\") pod \"controller-manager-5b78d494cf-zngx4\" (UID: \"28ff21f0-0aaf-4e0c-ae45-e7fe22adc4ca\") " pod="openshift-controller-manager/controller-manager-5b78d494cf-zngx4"
Dec 12 15:25:25 crc kubenswrapper[5123]: I1212 15:25:25.232680 5123 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-6gjhb\" (UniqueName: \"kubernetes.io/projected/8571c9d9-89bb-41c9-9efc-1611a1410275-kube-api-access-6gjhb\") on node \"crc\" DevicePath \"\""
Dec 12 15:25:25 crc kubenswrapper[5123]: I1212 15:25:25.232697 5123 reconciler_common.go:299] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/8571c9d9-89bb-41c9-9efc-1611a1410275-proxy-ca-bundles\") on node \"crc\" DevicePath \"\""
Dec 12 15:25:25 crc kubenswrapper[5123]: I1212 15:25:25.232711 5123 reconciler_common.go:299] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/8571c9d9-89bb-41c9-9efc-1611a1410275-client-ca\") on node \"crc\" DevicePath \"\""
Dec 12 15:25:25 crc kubenswrapper[5123]: I1212 15:25:25.232723 5123 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8571c9d9-89bb-41c9-9efc-1611a1410275-serving-cert\") on node \"crc\" DevicePath \"\""
Dec 12 15:25:25 crc kubenswrapper[5123]: I1212 15:25:25.232735 5123 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8571c9d9-89bb-41c9-9efc-1611a1410275-config\") on node \"crc\" DevicePath \"\""
Dec 12 15:25:25 crc kubenswrapper[5123]: I1212 15:25:25.232747 5123 reconciler_common.go:299] "Volume detached for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/8571c9d9-89bb-41c9-9efc-1611a1410275-tmp\") on node \"crc\" DevicePath \"\""
Dec 12 15:25:25 crc kubenswrapper[5123]: I1212 15:25:25.249564 5123 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-646759d888-tnkf4" event={"ID":"dd04e041-0463-4e2e-8023-35d3e3683e3d","Type":"ContainerDied","Data":"e80266524bbd9d3ec594c518295d909e87f8045afdaee6493368f472c38f2778"}
Dec 12 15:25:25 crc kubenswrapper[5123]: I1212 15:25:25.249610 5123 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-646759d888-tnkf4"
Dec 12 15:25:25 crc kubenswrapper[5123]: I1212 15:25:25.249651 5123 scope.go:117] "RemoveContainer" containerID="1f835a2afd147c4ed4a58f549de3d69ee745d8eba0a19152b56a04f224e3f752"
Dec 12 15:25:25 crc kubenswrapper[5123]: I1212 15:25:25.254834 5123 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5c48dc655c-6dn2j" event={"ID":"8571c9d9-89bb-41c9-9efc-1611a1410275","Type":"ContainerDied","Data":"362559ee57886feecddf6929ef638afe4353d1a807ba8b62f66c584f92a67024"}
Dec 12 15:25:25 crc kubenswrapper[5123]: I1212 15:25:25.254978 5123 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-5c48dc655c-6dn2j"
Dec 12 15:25:25 crc kubenswrapper[5123]: I1212 15:25:25.297758 5123 scope.go:117] "RemoveContainer" containerID="329323b98be1d627b32140ff3b932f324bb393c64f262547791f01adac4d3c54"
Dec 12 15:25:25 crc kubenswrapper[5123]: I1212 15:25:25.325995 5123 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-5c48dc655c-6dn2j"]
Dec 12 15:25:25 crc kubenswrapper[5123]: I1212 15:25:25.333826 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/28ff21f0-0aaf-4e0c-ae45-e7fe22adc4ca-client-ca\") pod \"controller-manager-5b78d494cf-zngx4\" (UID: \"28ff21f0-0aaf-4e0c-ae45-e7fe22adc4ca\") " pod="openshift-controller-manager/controller-manager-5b78d494cf-zngx4"
Dec 12 15:25:25 crc kubenswrapper[5123]: I1212 15:25:25.333958 5123 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-5c48dc655c-6dn2j"]
Dec 12 15:25:25 crc kubenswrapper[5123]: I1212 15:25:25.333894 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/28ff21f0-0aaf-4e0c-ae45-e7fe22adc4ca-serving-cert\") pod \"controller-manager-5b78d494cf-zngx4\" (UID: \"28ff21f0-0aaf-4e0c-ae45-e7fe22adc4ca\") " pod="openshift-controller-manager/controller-manager-5b78d494cf-zngx4"
Dec 12 15:25:25 crc kubenswrapper[5123]: I1212 15:25:25.335290 5123 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/28ff21f0-0aaf-4e0c-ae45-e7fe22adc4ca-client-ca\") pod \"controller-manager-5b78d494cf-zngx4\" (UID: \"28ff21f0-0aaf-4e0c-ae45-e7fe22adc4ca\") " pod="openshift-controller-manager/controller-manager-5b78d494cf-zngx4"
Dec 12 15:25:25 crc kubenswrapper[5123]: I1212 15:25:25.335429 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/28ff21f0-0aaf-4e0c-ae45-e7fe22adc4ca-proxy-ca-bundles\") pod \"controller-manager-5b78d494cf-zngx4\" (UID: \"28ff21f0-0aaf-4e0c-ae45-e7fe22adc4ca\") " pod="openshift-controller-manager/controller-manager-5b78d494cf-zngx4"
Dec 12 15:25:25 crc kubenswrapper[5123]: I1212 15:25:25.336394 5123 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/28ff21f0-0aaf-4e0c-ae45-e7fe22adc4ca-proxy-ca-bundles\") pod \"controller-manager-5b78d494cf-zngx4\" (UID: \"28ff21f0-0aaf-4e0c-ae45-e7fe22adc4ca\") " pod="openshift-controller-manager/controller-manager-5b78d494cf-zngx4"
Dec 12 15:25:25 crc kubenswrapper[5123]: I1212 15:25:25.336712 5123 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/28ff21f0-0aaf-4e0c-ae45-e7fe22adc4ca-tmp\") pod \"controller-manager-5b78d494cf-zngx4\" (UID: \"28ff21f0-0aaf-4e0c-ae45-e7fe22adc4ca\") " pod="openshift-controller-manager/controller-manager-5b78d494cf-zngx4"
Dec 12 15:25:25 crc kubenswrapper[5123]: I1212 15:25:25.337367 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/28ff21f0-0aaf-4e0c-ae45-e7fe22adc4ca-tmp\") pod \"controller-manager-5b78d494cf-zngx4\" (UID: \"28ff21f0-0aaf-4e0c-ae45-e7fe22adc4ca\") " pod="openshift-controller-manager/controller-manager-5b78d494cf-zngx4"
Dec 12 15:25:25 crc kubenswrapper[5123]: I1212 15:25:25.337505 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-tzvxw\" (UniqueName: \"kubernetes.io/projected/28ff21f0-0aaf-4e0c-ae45-e7fe22adc4ca-kube-api-access-tzvxw\") pod \"controller-manager-5b78d494cf-zngx4\" (UID: \"28ff21f0-0aaf-4e0c-ae45-e7fe22adc4ca\") " pod="openshift-controller-manager/controller-manager-5b78d494cf-zngx4"
Dec 12 15:25:25 crc kubenswrapper[5123]: I1212 15:25:25.337559 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/28ff21f0-0aaf-4e0c-ae45-e7fe22adc4ca-config\") pod \"controller-manager-5b78d494cf-zngx4\" (UID: \"28ff21f0-0aaf-4e0c-ae45-e7fe22adc4ca\") " pod="openshift-controller-manager/controller-manager-5b78d494cf-zngx4"
Dec 12 15:25:25 crc kubenswrapper[5123]: I1212 15:25:25.340575 5123 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-646759d888-tnkf4"]
Dec 12 15:25:25 crc kubenswrapper[5123]: I1212 15:25:25.340956 5123 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/28ff21f0-0aaf-4e0c-ae45-e7fe22adc4ca-config\") pod \"controller-manager-5b78d494cf-zngx4\" (UID: \"28ff21f0-0aaf-4e0c-ae45-e7fe22adc4ca\") " pod="openshift-controller-manager/controller-manager-5b78d494cf-zngx4"
Dec 12 15:25:25 crc kubenswrapper[5123]: I1212 15:25:25.343089 5123 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/28ff21f0-0aaf-4e0c-ae45-e7fe22adc4ca-serving-cert\") pod \"controller-manager-5b78d494cf-zngx4\" (UID: \"28ff21f0-0aaf-4e0c-ae45-e7fe22adc4ca\") " pod="openshift-controller-manager/controller-manager-5b78d494cf-zngx4"
Dec 12 15:25:25 crc kubenswrapper[5123]: I1212 15:25:25.345997 5123 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-646759d888-tnkf4"]
Dec 12 15:25:25 crc kubenswrapper[5123]: I1212 15:25:25.356101 5123 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-tzvxw\" (UniqueName: \"kubernetes.io/projected/28ff21f0-0aaf-4e0c-ae45-e7fe22adc4ca-kube-api-access-tzvxw\") pod \"controller-manager-5b78d494cf-zngx4\" (UID: \"28ff21f0-0aaf-4e0c-ae45-e7fe22adc4ca\") " pod="openshift-controller-manager/controller-manager-5b78d494cf-zngx4"
Dec 12 15:25:25 crc kubenswrapper[5123]: I1212 15:25:25.565113 5123 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-5b78d494cf-zngx4"
Dec 12 15:25:25 crc kubenswrapper[5123]: I1212 15:25:25.650203 5123 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8571c9d9-89bb-41c9-9efc-1611a1410275" path="/var/lib/kubelet/pods/8571c9d9-89bb-41c9-9efc-1611a1410275/volumes"
Dec 12 15:25:25 crc kubenswrapper[5123]: I1212 15:25:25.651061 5123 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dd04e041-0463-4e2e-8023-35d3e3683e3d" path="/var/lib/kubelet/pods/dd04e041-0463-4e2e-8023-35d3e3683e3d/volumes"
Dec 12 15:25:25 crc kubenswrapper[5123]: I1212 15:25:25.723343 5123 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5867759586-bgtt8"]
Dec 12 15:25:25 crc kubenswrapper[5123]: I1212 15:25:25.808786 5123 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-5b78d494cf-zngx4"]
Dec 12 15:25:25 crc kubenswrapper[5123]: W1212 15:25:25.817769 5123 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod28ff21f0_0aaf_4e0c_ae45_e7fe22adc4ca.slice/crio-c4fc85e90e6dd0a8e8a5497d7a867a0924c0af6607d545c71f613ef6e03f8868 WatchSource:0}: Error finding container c4fc85e90e6dd0a8e8a5497d7a867a0924c0af6607d545c71f613ef6e03f8868: Status 404 returned error can't find the container with id c4fc85e90e6dd0a8e8a5497d7a867a0924c0af6607d545c71f613ef6e03f8868
Dec 12 15:25:26 crc kubenswrapper[5123]: I1212 15:25:26.263612 5123 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-5867759586-bgtt8" event={"ID":"e97bf728-e15a-4bad-9889-9a19f5847ef9","Type":"ContainerStarted","Data":"e76b51b6203567410e1b8f1476c5625491fc8a08986ca0e57c2f2b8ee90fc220"}
Dec 12 15:25:26 crc kubenswrapper[5123]: I1212 15:25:26.263997 5123 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-route-controller-manager/route-controller-manager-5867759586-bgtt8"
Dec 12 15:25:26 crc kubenswrapper[5123]: I1212 15:25:26.264010 5123 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-5867759586-bgtt8" event={"ID":"e97bf728-e15a-4bad-9889-9a19f5847ef9","Type":"ContainerStarted","Data":"13272fee4958cbe41fc4e212858b1b73f3877583a8feadc89114568d511f8279"}
Dec 12 15:25:26 crc kubenswrapper[5123]: I1212 15:25:26.270748 5123 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5b78d494cf-zngx4" event={"ID":"28ff21f0-0aaf-4e0c-ae45-e7fe22adc4ca","Type":"ContainerStarted","Data":"ba9596176718c2202281763c8fb51e6a84270ccbd4a3c27c30781c48c905f4a3"}
Dec 12 15:25:26 crc kubenswrapper[5123]: I1212 15:25:26.270800 5123 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5b78d494cf-zngx4" event={"ID":"28ff21f0-0aaf-4e0c-ae45-e7fe22adc4ca","Type":"ContainerStarted","Data":"c4fc85e90e6dd0a8e8a5497d7a867a0924c0af6607d545c71f613ef6e03f8868"}
Dec 12 15:25:26 crc kubenswrapper[5123]: I1212 15:25:26.271826 5123 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-controller-manager/controller-manager-5b78d494cf-zngx4"
Dec 12 15:25:26 crc kubenswrapper[5123]: I1212 15:25:26.292418 5123 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-5867759586-bgtt8" podStartSLOduration=3.292398942 podStartE2EDuration="3.292398942s" podCreationTimestamp="2025-12-12 15:25:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 15:25:26.287350741 +0000 UTC m=+355.097303272" watchObservedRunningTime="2025-12-12 15:25:26.292398942 +0000 UTC m=+355.102351453"
Dec 12 15:25:26 crc kubenswrapper[5123]: I1212 15:25:26.311684 5123 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-5b78d494cf-zngx4" podStartSLOduration=3.311657931 podStartE2EDuration="3.311657931s" podCreationTimestamp="2025-12-12 15:25:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 15:25:26.307390636 +0000 UTC m=+355.117343167" watchObservedRunningTime="2025-12-12 15:25:26.311657931 +0000 UTC m=+355.121610472"
Dec 12 15:25:26 crc kubenswrapper[5123]: I1212 15:25:26.834505 5123 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-5b78d494cf-zngx4"
Dec 12 15:25:27 crc kubenswrapper[5123]: I1212 15:25:27.187642 5123 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-5867759586-bgtt8"
Dec 12 15:25:31 crc kubenswrapper[5123]: I1212 15:25:31.281463 5123 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-fjqk7"]
Dec 12 15:25:31 crc kubenswrapper[5123]: I1212 15:25:31.282658 5123 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-fjqk7" podUID="78a70363-f10e-4d12-8279-c7f7f3b8402b" containerName="registry-server" containerID="cri-o://38f4fb98b77c25f9aa70e039d23e646c91822eb7615f1898a508806e389cbccc" gracePeriod=30
Dec 12 15:25:31 crc kubenswrapper[5123]: I1212 15:25:31.290374 5123 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-sbt5r"]
Dec 12 15:25:31 crc kubenswrapper[5123]: I1212 15:25:31.290820 5123 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/community-operators-sbt5r" podUID="402bc75d-15b2-46d8-9455-d2d8c8c7c47a" containerName="registry-server" containerID="cri-o://713b5c76cf2f495bde0301d542706edf1e2bbd20471abca2c9be318cf900a8a6" gracePeriod=30
Dec 12 15:25:31 crc kubenswrapper[5123]: I1212 15:25:31.299628 5123 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-547dbd544d-rkcvb"]
Dec 12 15:25:31 crc kubenswrapper[5123]: I1212 15:25:31.300082 5123 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/marketplace-operator-547dbd544d-rkcvb" podUID="17ce8feb-99e5-42f3-a808-2dd39bc57377" containerName="marketplace-operator" containerID="cri-o://e5594dc84936b908316bb447fdf41d5c1397abc4a9af95fdf3265d1a27d5fe15" gracePeriod=30
Dec 12 15:25:31 crc kubenswrapper[5123]: I1212 15:25:31.694830 5123 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-shltm"]
Dec 12 15:25:31 crc kubenswrapper[5123]: I1212 15:25:31.694894 5123 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-547dbd544d-kb524"]
Dec 12 15:25:31 crc kubenswrapper[5123]: I1212 15:25:31.695859 5123 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-shltm" podUID="f8cdb4da-d02c-42f7-9f61-cb5e162d26a7" containerName="registry-server" containerID="cri-o://0fb530f9580a642bd8a5f0f8c69b353f2240d6cc25437ea1ef705c382ba87876" gracePeriod=30
Dec 12 15:25:31 crc kubenswrapper[5123]: I1212 15:25:31.702621 5123 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-pkqnl"]
Dec 12 15:25:31 crc kubenswrapper[5123]: I1212 15:25:31.702829 5123 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-547dbd544d-kb524"
Dec 12 15:25:31 crc kubenswrapper[5123]: I1212 15:25:31.704292 5123 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-pkqnl" podUID="320bf855-399c-4de0-bbbd-8dcdcb5d9e2a" containerName="registry-server" containerID="cri-o://892567596a2ee965736c8fe6691412873ba2da4ef6b77abea638997fed993de4" gracePeriod=30
Dec 12 15:25:31 crc kubenswrapper[5123]: I1212 15:25:31.735969 5123 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-547dbd544d-kb524"]
Dec 12 15:25:31 crc kubenswrapper[5123]: I1212 15:25:31.753414 5123 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/c5a8f453-7300-4884-a6fb-66f2819dfaaf-marketplace-trusted-ca\") pod \"marketplace-operator-547dbd544d-kb524\" (UID: \"c5a8f453-7300-4884-a6fb-66f2819dfaaf\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-kb524"
Dec 12 15:25:31 crc kubenswrapper[5123]: I1212 15:25:31.753482 5123 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/c5a8f453-7300-4884-a6fb-66f2819dfaaf-tmp\") pod \"marketplace-operator-547dbd544d-kb524\" (UID: \"c5a8f453-7300-4884-a6fb-66f2819dfaaf\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-kb524"
Dec 12 15:25:31 crc kubenswrapper[5123]: I1212 15:25:31.753577 5123 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/c5a8f453-7300-4884-a6fb-66f2819dfaaf-marketplace-operator-metrics\") pod \"marketplace-operator-547dbd544d-kb524\" (UID: \"c5a8f453-7300-4884-a6fb-66f2819dfaaf\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-kb524"
Dec 12 15:25:31 crc kubenswrapper[5123]: I1212 15:25:31.753668 5123 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hbnqm\" (UniqueName: \"kubernetes.io/projected/c5a8f453-7300-4884-a6fb-66f2819dfaaf-kube-api-access-hbnqm\") pod \"marketplace-operator-547dbd544d-kb524\" (UID: \"c5a8f453-7300-4884-a6fb-66f2819dfaaf\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-kb524"
Dec 12 15:25:31 crc kubenswrapper[5123]: I1212 15:25:31.860117 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/c5a8f453-7300-4884-a6fb-66f2819dfaaf-marketplace-trusted-ca\") pod \"marketplace-operator-547dbd544d-kb524\" (UID: \"c5a8f453-7300-4884-a6fb-66f2819dfaaf\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-kb524"
Dec 12 15:25:31 crc kubenswrapper[5123]: I1212 15:25:31.860193 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/c5a8f453-7300-4884-a6fb-66f2819dfaaf-tmp\") pod \"marketplace-operator-547dbd544d-kb524\" (UID: \"c5a8f453-7300-4884-a6fb-66f2819dfaaf\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-kb524"
Dec 12 15:25:31 crc kubenswrapper[5123]: I1212 15:25:31.860392 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/c5a8f453-7300-4884-a6fb-66f2819dfaaf-marketplace-operator-metrics\") pod \"marketplace-operator-547dbd544d-kb524\" (UID: \"c5a8f453-7300-4884-a6fb-66f2819dfaaf\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-kb524"
Dec 12 15:25:31 crc kubenswrapper[5123]: I1212 15:25:31.860620 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-hbnqm\" (UniqueName: \"kubernetes.io/projected/c5a8f453-7300-4884-a6fb-66f2819dfaaf-kube-api-access-hbnqm\") pod \"marketplace-operator-547dbd544d-kb524\" (UID: \"c5a8f453-7300-4884-a6fb-66f2819dfaaf\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-kb524"
Dec 12 15:25:31 crc kubenswrapper[5123]: I1212 15:25:31.860991 5123 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/c5a8f453-7300-4884-a6fb-66f2819dfaaf-tmp\") pod \"marketplace-operator-547dbd544d-kb524\" (UID: \"c5a8f453-7300-4884-a6fb-66f2819dfaaf\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-kb524"
Dec 12 15:25:31 crc kubenswrapper[5123]: I1212 15:25:31.863810 5123 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/c5a8f453-7300-4884-a6fb-66f2819dfaaf-marketplace-trusted-ca\") pod \"marketplace-operator-547dbd544d-kb524\" (UID: \"c5a8f453-7300-4884-a6fb-66f2819dfaaf\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-kb524"
Dec 12 15:25:31 crc kubenswrapper[5123]: I1212 15:25:31.868169 5123 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/c5a8f453-7300-4884-a6fb-66f2819dfaaf-marketplace-operator-metrics\") pod \"marketplace-operator-547dbd544d-kb524\" (UID: \"c5a8f453-7300-4884-a6fb-66f2819dfaaf\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-kb524"
Dec 12 15:25:31 crc kubenswrapper[5123]: I1212 15:25:31.888097 5123 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-hbnqm\" (UniqueName: \"kubernetes.io/projected/c5a8f453-7300-4884-a6fb-66f2819dfaaf-kube-api-access-hbnqm\") pod \"marketplace-operator-547dbd544d-kb524\" (UID: \"c5a8f453-7300-4884-a6fb-66f2819dfaaf\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-kb524"
Dec 12 15:25:31 crc kubenswrapper[5123]: E1212 15:25:31.892258 5123 cadvisor_stats_provider.go:525] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf8cdb4da_d02c_42f7_9f61_cb5e162d26a7.slice/crio-0fb530f9580a642bd8a5f0f8c69b353f2240d6cc25437ea1ef705c382ba87876.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod320bf855_399c_4de0_bbbd_8dcdcb5d9e2a.slice/crio-conmon-892567596a2ee965736c8fe6691412873ba2da4ef6b77abea638997fed993de4.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod320bf855_399c_4de0_bbbd_8dcdcb5d9e2a.slice/crio-892567596a2ee965736c8fe6691412873ba2da4ef6b77abea638997fed993de4.scope\": RecentStats: unable to find data in memory cache]"
Dec 12 15:25:32 crc kubenswrapper[5123]: I1212 15:25:32.079275 5123 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-547dbd544d-kb524"
Dec 12 15:25:32 crc kubenswrapper[5123]: I1212 15:25:32.095996 5123 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-fjqk7"
Dec 12 15:25:32 crc kubenswrapper[5123]: I1212 15:25:32.144150 5123 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-sbt5r"
Dec 12 15:25:32 crc kubenswrapper[5123]: I1212 15:25:32.164668 5123 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xjzm2\" (UniqueName: \"kubernetes.io/projected/78a70363-f10e-4d12-8279-c7f7f3b8402b-kube-api-access-xjzm2\") pod \"78a70363-f10e-4d12-8279-c7f7f3b8402b\" (UID: \"78a70363-f10e-4d12-8279-c7f7f3b8402b\") "
Dec 12 15:25:32 crc kubenswrapper[5123]: I1212 15:25:32.164985 5123 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/78a70363-f10e-4d12-8279-c7f7f3b8402b-catalog-content\") pod \"78a70363-f10e-4d12-8279-c7f7f3b8402b\" (UID: \"78a70363-f10e-4d12-8279-c7f7f3b8402b\") "
Dec 12 15:25:32 crc kubenswrapper[5123]: I1212 15:25:32.165089 5123 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/78a70363-f10e-4d12-8279-c7f7f3b8402b-utilities\") pod \"78a70363-f10e-4d12-8279-c7f7f3b8402b\" (UID: \"78a70363-f10e-4d12-8279-c7f7f3b8402b\") "
Dec 12 15:25:32 crc kubenswrapper[5123]: I1212 15:25:32.170148 5123 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-547dbd544d-rkcvb"
Dec 12 15:25:32 crc kubenswrapper[5123]: I1212 15:25:32.175253 5123 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/78a70363-f10e-4d12-8279-c7f7f3b8402b-kube-api-access-xjzm2" (OuterVolumeSpecName: "kube-api-access-xjzm2") pod "78a70363-f10e-4d12-8279-c7f7f3b8402b" (UID: "78a70363-f10e-4d12-8279-c7f7f3b8402b"). InnerVolumeSpecName "kube-api-access-xjzm2". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Dec 12 15:25:32 crc kubenswrapper[5123]: I1212 15:25:32.175921 5123 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/78a70363-f10e-4d12-8279-c7f7f3b8402b-utilities" (OuterVolumeSpecName: "utilities") pod "78a70363-f10e-4d12-8279-c7f7f3b8402b" (UID: "78a70363-f10e-4d12-8279-c7f7f3b8402b"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Dec 12 15:25:32 crc kubenswrapper[5123]: I1212 15:25:32.237559 5123 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-shltm"
Dec 12 15:25:32 crc kubenswrapper[5123]: I1212 15:25:32.267388 5123 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/17ce8feb-99e5-42f3-a808-2dd39bc57377-marketplace-operator-metrics\") pod \"17ce8feb-99e5-42f3-a808-2dd39bc57377\" (UID: \"17ce8feb-99e5-42f3-a808-2dd39bc57377\") "
Dec 12 15:25:32 crc kubenswrapper[5123]: I1212 15:25:32.267527 5123 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bm5mj\" (UniqueName: \"kubernetes.io/projected/f8cdb4da-d02c-42f7-9f61-cb5e162d26a7-kube-api-access-bm5mj\") pod \"f8cdb4da-d02c-42f7-9f61-cb5e162d26a7\" (UID: \"f8cdb4da-d02c-42f7-9f61-cb5e162d26a7\") "
Dec 12 15:25:32 crc kubenswrapper[5123]: I1212 15:25:32.267564 5123 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/402bc75d-15b2-46d8-9455-d2d8c8c7c47a-utilities\") pod \"402bc75d-15b2-46d8-9455-d2d8c8c7c47a\" (UID: \"402bc75d-15b2-46d8-9455-d2d8c8c7c47a\") "
Dec 12 15:25:32 crc kubenswrapper[5123]: I1212 15:25:32.267592 5123 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-db5kh\" (UniqueName: \"kubernetes.io/projected/402bc75d-15b2-46d8-9455-d2d8c8c7c47a-kube-api-access-db5kh\") pod \"402bc75d-15b2-46d8-9455-d2d8c8c7c47a\" (UID: \"402bc75d-15b2-46d8-9455-d2d8c8c7c47a\") "
Dec 12 15:25:32 crc kubenswrapper[5123]: I1212 15:25:32.267637 5123 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/17ce8feb-99e5-42f3-a808-2dd39bc57377-tmp\") pod \"17ce8feb-99e5-42f3-a808-2dd39bc57377\" (UID: \"17ce8feb-99e5-42f3-a808-2dd39bc57377\") "
Dec 12 15:25:32 crc kubenswrapper[5123]: I1212 15:25:32.267666 5123 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/402bc75d-15b2-46d8-9455-d2d8c8c7c47a-catalog-content\") pod \"402bc75d-15b2-46d8-9455-d2d8c8c7c47a\" (UID: \"402bc75d-15b2-46d8-9455-d2d8c8c7c47a\") "
Dec 12 15:25:32 crc kubenswrapper[5123]: I1212 15:25:32.267750 5123 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f8cdb4da-d02c-42f7-9f61-cb5e162d26a7-utilities\") pod \"f8cdb4da-d02c-42f7-9f61-cb5e162d26a7\" (UID: \"f8cdb4da-d02c-42f7-9f61-cb5e162d26a7\") "
Dec 12 15:25:32 crc kubenswrapper[5123]: I1212 15:25:32.267777 5123 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/17ce8feb-99e5-42f3-a808-2dd39bc57377-marketplace-trusted-ca\") pod \"17ce8feb-99e5-42f3-a808-2dd39bc57377\" (UID: \"17ce8feb-99e5-42f3-a808-2dd39bc57377\") "
Dec 12 15:25:32 crc kubenswrapper[5123]: I1212 15:25:32.267868 5123 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q7c6x\" (UniqueName: \"kubernetes.io/projected/17ce8feb-99e5-42f3-a808-2dd39bc57377-kube-api-access-q7c6x\") pod \"17ce8feb-99e5-42f3-a808-2dd39bc57377\" (UID: \"17ce8feb-99e5-42f3-a808-2dd39bc57377\") "
Dec 12 15:25:32 crc kubenswrapper[5123]: I1212
15:25:32.267910 5123 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f8cdb4da-d02c-42f7-9f61-cb5e162d26a7-catalog-content\") pod \"f8cdb4da-d02c-42f7-9f61-cb5e162d26a7\" (UID: \"f8cdb4da-d02c-42f7-9f61-cb5e162d26a7\") " Dec 12 15:25:32 crc kubenswrapper[5123]: I1212 15:25:32.268232 5123 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/78a70363-f10e-4d12-8279-c7f7f3b8402b-utilities\") on node \"crc\" DevicePath \"\"" Dec 12 15:25:32 crc kubenswrapper[5123]: I1212 15:25:32.268252 5123 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-xjzm2\" (UniqueName: \"kubernetes.io/projected/78a70363-f10e-4d12-8279-c7f7f3b8402b-kube-api-access-xjzm2\") on node \"crc\" DevicePath \"\"" Dec 12 15:25:32 crc kubenswrapper[5123]: I1212 15:25:32.269658 5123 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/78a70363-f10e-4d12-8279-c7f7f3b8402b-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "78a70363-f10e-4d12-8279-c7f7f3b8402b" (UID: "78a70363-f10e-4d12-8279-c7f7f3b8402b"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 12 15:25:32 crc kubenswrapper[5123]: I1212 15:25:32.270311 5123 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/402bc75d-15b2-46d8-9455-d2d8c8c7c47a-utilities" (OuterVolumeSpecName: "utilities") pod "402bc75d-15b2-46d8-9455-d2d8c8c7c47a" (UID: "402bc75d-15b2-46d8-9455-d2d8c8c7c47a"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 12 15:25:32 crc kubenswrapper[5123]: I1212 15:25:32.270604 5123 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f8cdb4da-d02c-42f7-9f61-cb5e162d26a7-utilities" (OuterVolumeSpecName: "utilities") pod "f8cdb4da-d02c-42f7-9f61-cb5e162d26a7" (UID: "f8cdb4da-d02c-42f7-9f61-cb5e162d26a7"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 12 15:25:32 crc kubenswrapper[5123]: I1212 15:25:32.271492 5123 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/17ce8feb-99e5-42f3-a808-2dd39bc57377-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "17ce8feb-99e5-42f3-a808-2dd39bc57377" (UID: "17ce8feb-99e5-42f3-a808-2dd39bc57377"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 12 15:25:32 crc kubenswrapper[5123]: I1212 15:25:32.271569 5123 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/17ce8feb-99e5-42f3-a808-2dd39bc57377-tmp" (OuterVolumeSpecName: "tmp") pod "17ce8feb-99e5-42f3-a808-2dd39bc57377" (UID: "17ce8feb-99e5-42f3-a808-2dd39bc57377"). InnerVolumeSpecName "tmp". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 12 15:25:32 crc kubenswrapper[5123]: I1212 15:25:32.273759 5123 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f8cdb4da-d02c-42f7-9f61-cb5e162d26a7-kube-api-access-bm5mj" (OuterVolumeSpecName: "kube-api-access-bm5mj") pod "f8cdb4da-d02c-42f7-9f61-cb5e162d26a7" (UID: "f8cdb4da-d02c-42f7-9f61-cb5e162d26a7"). InnerVolumeSpecName "kube-api-access-bm5mj". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 12 15:25:32 crc kubenswrapper[5123]: I1212 15:25:32.274055 5123 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/402bc75d-15b2-46d8-9455-d2d8c8c7c47a-kube-api-access-db5kh" (OuterVolumeSpecName: "kube-api-access-db5kh") pod "402bc75d-15b2-46d8-9455-d2d8c8c7c47a" (UID: "402bc75d-15b2-46d8-9455-d2d8c8c7c47a"). InnerVolumeSpecName "kube-api-access-db5kh". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 12 15:25:32 crc kubenswrapper[5123]: I1212 15:25:32.274452 5123 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/17ce8feb-99e5-42f3-a808-2dd39bc57377-kube-api-access-q7c6x" (OuterVolumeSpecName: "kube-api-access-q7c6x") pod "17ce8feb-99e5-42f3-a808-2dd39bc57377" (UID: "17ce8feb-99e5-42f3-a808-2dd39bc57377"). InnerVolumeSpecName "kube-api-access-q7c6x". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 12 15:25:32 crc kubenswrapper[5123]: I1212 15:25:32.276274 5123 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/17ce8feb-99e5-42f3-a808-2dd39bc57377-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "17ce8feb-99e5-42f3-a808-2dd39bc57377" (UID: "17ce8feb-99e5-42f3-a808-2dd39bc57377"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 12 15:25:32 crc kubenswrapper[5123]: I1212 15:25:32.279779 5123 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-pkqnl" Dec 12 15:25:32 crc kubenswrapper[5123]: I1212 15:25:32.289130 5123 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f8cdb4da-d02c-42f7-9f61-cb5e162d26a7-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "f8cdb4da-d02c-42f7-9f61-cb5e162d26a7" (UID: "f8cdb4da-d02c-42f7-9f61-cb5e162d26a7"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 12 15:25:32 crc kubenswrapper[5123]: I1212 15:25:32.333546 5123 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/402bc75d-15b2-46d8-9455-d2d8c8c7c47a-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "402bc75d-15b2-46d8-9455-d2d8c8c7c47a" (UID: "402bc75d-15b2-46d8-9455-d2d8c8c7c47a"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 12 15:25:32 crc kubenswrapper[5123]: I1212 15:25:32.368946 5123 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/320bf855-399c-4de0-bbbd-8dcdcb5d9e2a-catalog-content\") pod \"320bf855-399c-4de0-bbbd-8dcdcb5d9e2a\" (UID: \"320bf855-399c-4de0-bbbd-8dcdcb5d9e2a\") " Dec 12 15:25:32 crc kubenswrapper[5123]: I1212 15:25:32.369254 5123 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nsrz5\" (UniqueName: \"kubernetes.io/projected/320bf855-399c-4de0-bbbd-8dcdcb5d9e2a-kube-api-access-nsrz5\") pod \"320bf855-399c-4de0-bbbd-8dcdcb5d9e2a\" (UID: \"320bf855-399c-4de0-bbbd-8dcdcb5d9e2a\") " Dec 12 15:25:32 crc kubenswrapper[5123]: I1212 15:25:32.369359 5123 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/320bf855-399c-4de0-bbbd-8dcdcb5d9e2a-utilities\") pod \"320bf855-399c-4de0-bbbd-8dcdcb5d9e2a\" (UID: 
\"320bf855-399c-4de0-bbbd-8dcdcb5d9e2a\") " Dec 12 15:25:32 crc kubenswrapper[5123]: I1212 15:25:32.369679 5123 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-q7c6x\" (UniqueName: \"kubernetes.io/projected/17ce8feb-99e5-42f3-a808-2dd39bc57377-kube-api-access-q7c6x\") on node \"crc\" DevicePath \"\"" Dec 12 15:25:32 crc kubenswrapper[5123]: I1212 15:25:32.369700 5123 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f8cdb4da-d02c-42f7-9f61-cb5e162d26a7-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 12 15:25:32 crc kubenswrapper[5123]: I1212 15:25:32.369710 5123 reconciler_common.go:299] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/17ce8feb-99e5-42f3-a808-2dd39bc57377-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Dec 12 15:25:32 crc kubenswrapper[5123]: I1212 15:25:32.369721 5123 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/78a70363-f10e-4d12-8279-c7f7f3b8402b-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 12 15:25:32 crc kubenswrapper[5123]: I1212 15:25:32.369731 5123 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-bm5mj\" (UniqueName: \"kubernetes.io/projected/f8cdb4da-d02c-42f7-9f61-cb5e162d26a7-kube-api-access-bm5mj\") on node \"crc\" DevicePath \"\"" Dec 12 15:25:32 crc kubenswrapper[5123]: I1212 15:25:32.369742 5123 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/402bc75d-15b2-46d8-9455-d2d8c8c7c47a-utilities\") on node \"crc\" DevicePath \"\"" Dec 12 15:25:32 crc kubenswrapper[5123]: I1212 15:25:32.369752 5123 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-db5kh\" (UniqueName: \"kubernetes.io/projected/402bc75d-15b2-46d8-9455-d2d8c8c7c47a-kube-api-access-db5kh\") on node \"crc\" DevicePath \"\"" Dec 12 15:25:32 
crc kubenswrapper[5123]: I1212 15:25:32.369762 5123 reconciler_common.go:299] "Volume detached for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/17ce8feb-99e5-42f3-a808-2dd39bc57377-tmp\") on node \"crc\" DevicePath \"\"" Dec 12 15:25:32 crc kubenswrapper[5123]: I1212 15:25:32.369773 5123 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/402bc75d-15b2-46d8-9455-d2d8c8c7c47a-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 12 15:25:32 crc kubenswrapper[5123]: I1212 15:25:32.369782 5123 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f8cdb4da-d02c-42f7-9f61-cb5e162d26a7-utilities\") on node \"crc\" DevicePath \"\"" Dec 12 15:25:32 crc kubenswrapper[5123]: I1212 15:25:32.369789 5123 reconciler_common.go:299] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/17ce8feb-99e5-42f3-a808-2dd39bc57377-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Dec 12 15:25:32 crc kubenswrapper[5123]: I1212 15:25:32.370919 5123 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/320bf855-399c-4de0-bbbd-8dcdcb5d9e2a-utilities" (OuterVolumeSpecName: "utilities") pod "320bf855-399c-4de0-bbbd-8dcdcb5d9e2a" (UID: "320bf855-399c-4de0-bbbd-8dcdcb5d9e2a"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 12 15:25:32 crc kubenswrapper[5123]: I1212 15:25:32.379302 5123 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/320bf855-399c-4de0-bbbd-8dcdcb5d9e2a-kube-api-access-nsrz5" (OuterVolumeSpecName: "kube-api-access-nsrz5") pod "320bf855-399c-4de0-bbbd-8dcdcb5d9e2a" (UID: "320bf855-399c-4de0-bbbd-8dcdcb5d9e2a"). InnerVolumeSpecName "kube-api-access-nsrz5". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 12 15:25:32 crc kubenswrapper[5123]: I1212 15:25:32.479780 5123 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/320bf855-399c-4de0-bbbd-8dcdcb5d9e2a-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "320bf855-399c-4de0-bbbd-8dcdcb5d9e2a" (UID: "320bf855-399c-4de0-bbbd-8dcdcb5d9e2a"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 12 15:25:32 crc kubenswrapper[5123]: I1212 15:25:32.480763 5123 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/320bf855-399c-4de0-bbbd-8dcdcb5d9e2a-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 12 15:25:32 crc kubenswrapper[5123]: I1212 15:25:32.480792 5123 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-nsrz5\" (UniqueName: \"kubernetes.io/projected/320bf855-399c-4de0-bbbd-8dcdcb5d9e2a-kube-api-access-nsrz5\") on node \"crc\" DevicePath \"\"" Dec 12 15:25:32 crc kubenswrapper[5123]: I1212 15:25:32.480808 5123 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/320bf855-399c-4de0-bbbd-8dcdcb5d9e2a-utilities\") on node \"crc\" DevicePath \"\"" Dec 12 15:25:32 crc kubenswrapper[5123]: I1212 15:25:32.778817 5123 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-547dbd544d-kb524"] Dec 12 15:25:32 crc kubenswrapper[5123]: I1212 15:25:32.810114 5123 generic.go:358] "Generic (PLEG): container finished" podID="402bc75d-15b2-46d8-9455-d2d8c8c7c47a" containerID="713b5c76cf2f495bde0301d542706edf1e2bbd20471abca2c9be318cf900a8a6" exitCode=0 Dec 12 15:25:32 crc kubenswrapper[5123]: I1212 15:25:32.810998 5123 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-sbt5r" 
event={"ID":"402bc75d-15b2-46d8-9455-d2d8c8c7c47a","Type":"ContainerDied","Data":"713b5c76cf2f495bde0301d542706edf1e2bbd20471abca2c9be318cf900a8a6"} Dec 12 15:25:32 crc kubenswrapper[5123]: I1212 15:25:32.811128 5123 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-sbt5r" event={"ID":"402bc75d-15b2-46d8-9455-d2d8c8c7c47a","Type":"ContainerDied","Data":"56b1ed5c4799e17bec86d89b50121ffdc3f3db309d13cd5a777be83e2bf8a43e"} Dec 12 15:25:32 crc kubenswrapper[5123]: I1212 15:25:32.811256 5123 scope.go:117] "RemoveContainer" containerID="713b5c76cf2f495bde0301d542706edf1e2bbd20471abca2c9be318cf900a8a6" Dec 12 15:25:32 crc kubenswrapper[5123]: I1212 15:25:32.811689 5123 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-sbt5r" Dec 12 15:25:32 crc kubenswrapper[5123]: I1212 15:25:32.837534 5123 generic.go:358] "Generic (PLEG): container finished" podID="17ce8feb-99e5-42f3-a808-2dd39bc57377" containerID="e5594dc84936b908316bb447fdf41d5c1397abc4a9af95fdf3265d1a27d5fe15" exitCode=0 Dec 12 15:25:32 crc kubenswrapper[5123]: I1212 15:25:32.837669 5123 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-547dbd544d-rkcvb" event={"ID":"17ce8feb-99e5-42f3-a808-2dd39bc57377","Type":"ContainerDied","Data":"e5594dc84936b908316bb447fdf41d5c1397abc4a9af95fdf3265d1a27d5fe15"} Dec 12 15:25:32 crc kubenswrapper[5123]: I1212 15:25:32.837712 5123 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-547dbd544d-rkcvb" event={"ID":"17ce8feb-99e5-42f3-a808-2dd39bc57377","Type":"ContainerDied","Data":"d3531403470831101241a9a7f0ebbf7cb9907bd1a1d83a7c259b96467ec19779"} Dec 12 15:25:32 crc kubenswrapper[5123]: I1212 15:25:32.837831 5123 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-547dbd544d-rkcvb" Dec 12 15:25:32 crc kubenswrapper[5123]: I1212 15:25:32.846999 5123 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-547dbd544d-kb524" event={"ID":"c5a8f453-7300-4884-a6fb-66f2819dfaaf","Type":"ContainerStarted","Data":"093c9f2e38482e401fbfa3418ac54419609f5d005106af01ca0a161bab05dbaf"} Dec 12 15:25:32 crc kubenswrapper[5123]: I1212 15:25:32.854041 5123 generic.go:358] "Generic (PLEG): container finished" podID="320bf855-399c-4de0-bbbd-8dcdcb5d9e2a" containerID="892567596a2ee965736c8fe6691412873ba2da4ef6b77abea638997fed993de4" exitCode=0 Dec 12 15:25:32 crc kubenswrapper[5123]: I1212 15:25:32.854253 5123 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-pkqnl" event={"ID":"320bf855-399c-4de0-bbbd-8dcdcb5d9e2a","Type":"ContainerDied","Data":"892567596a2ee965736c8fe6691412873ba2da4ef6b77abea638997fed993de4"} Dec 12 15:25:32 crc kubenswrapper[5123]: I1212 15:25:32.854303 5123 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-pkqnl" event={"ID":"320bf855-399c-4de0-bbbd-8dcdcb5d9e2a","Type":"ContainerDied","Data":"a9b57ebe925462c47f978d2cff3e2c22255c7a2ca7cfb54f8976a03c006fd74c"} Dec 12 15:25:32 crc kubenswrapper[5123]: I1212 15:25:32.854442 5123 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-pkqnl" Dec 12 15:25:32 crc kubenswrapper[5123]: I1212 15:25:32.867083 5123 generic.go:358] "Generic (PLEG): container finished" podID="78a70363-f10e-4d12-8279-c7f7f3b8402b" containerID="38f4fb98b77c25f9aa70e039d23e646c91822eb7615f1898a508806e389cbccc" exitCode=0 Dec 12 15:25:32 crc kubenswrapper[5123]: I1212 15:25:32.867246 5123 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-fjqk7" event={"ID":"78a70363-f10e-4d12-8279-c7f7f3b8402b","Type":"ContainerDied","Data":"38f4fb98b77c25f9aa70e039d23e646c91822eb7615f1898a508806e389cbccc"} Dec 12 15:25:32 crc kubenswrapper[5123]: I1212 15:25:32.867338 5123 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-fjqk7" event={"ID":"78a70363-f10e-4d12-8279-c7f7f3b8402b","Type":"ContainerDied","Data":"6258ab63449fccde638e76827424295ac034d2404fbcc7f6880beb215d11fc41"} Dec 12 15:25:32 crc kubenswrapper[5123]: I1212 15:25:32.867413 5123 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-fjqk7" Dec 12 15:25:32 crc kubenswrapper[5123]: I1212 15:25:32.880789 5123 generic.go:358] "Generic (PLEG): container finished" podID="f8cdb4da-d02c-42f7-9f61-cb5e162d26a7" containerID="0fb530f9580a642bd8a5f0f8c69b353f2240d6cc25437ea1ef705c382ba87876" exitCode=0 Dec 12 15:25:32 crc kubenswrapper[5123]: I1212 15:25:32.880893 5123 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-shltm" event={"ID":"f8cdb4da-d02c-42f7-9f61-cb5e162d26a7","Type":"ContainerDied","Data":"0fb530f9580a642bd8a5f0f8c69b353f2240d6cc25437ea1ef705c382ba87876"} Dec 12 15:25:32 crc kubenswrapper[5123]: I1212 15:25:32.880956 5123 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-shltm" event={"ID":"f8cdb4da-d02c-42f7-9f61-cb5e162d26a7","Type":"ContainerDied","Data":"2ab0aff90c5e85fb5fff5087d4db5f0028230fac71d95e907064bbb3ad87537d"} Dec 12 15:25:32 crc kubenswrapper[5123]: I1212 15:25:32.880973 5123 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-shltm" Dec 12 15:25:32 crc kubenswrapper[5123]: I1212 15:25:32.896144 5123 scope.go:117] "RemoveContainer" containerID="ac1b5fd11f35eb3141c2a83a94e8eb7a9cfa0c5f6e02a4f2f12082950305fe86" Dec 12 15:25:32 crc kubenswrapper[5123]: I1212 15:25:32.945197 5123 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-sbt5r"] Dec 12 15:25:32 crc kubenswrapper[5123]: I1212 15:25:32.969896 5123 scope.go:117] "RemoveContainer" containerID="eb0261e1394c1df75125ea600bb9195f3a80bc3aa9101a3cc6ca496dbe71d071" Dec 12 15:25:32 crc kubenswrapper[5123]: I1212 15:25:32.969914 5123 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-sbt5r"] Dec 12 15:25:32 crc kubenswrapper[5123]: I1212 15:25:32.985512 5123 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-547dbd544d-rkcvb"] Dec 12 15:25:32 crc kubenswrapper[5123]: I1212 15:25:32.994767 5123 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/marketplace-operator-547dbd544d-rkcvb"] Dec 12 15:25:32 crc kubenswrapper[5123]: I1212 15:25:32.999175 5123 scope.go:117] "RemoveContainer" containerID="713b5c76cf2f495bde0301d542706edf1e2bbd20471abca2c9be318cf900a8a6" Dec 12 15:25:32 crc kubenswrapper[5123]: E1212 15:25:32.999606 5123 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"713b5c76cf2f495bde0301d542706edf1e2bbd20471abca2c9be318cf900a8a6\": container with ID starting with 713b5c76cf2f495bde0301d542706edf1e2bbd20471abca2c9be318cf900a8a6 not found: ID does not exist" containerID="713b5c76cf2f495bde0301d542706edf1e2bbd20471abca2c9be318cf900a8a6" Dec 12 15:25:32 crc kubenswrapper[5123]: I1212 15:25:32.999655 5123 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"713b5c76cf2f495bde0301d542706edf1e2bbd20471abca2c9be318cf900a8a6"} err="failed to get container status \"713b5c76cf2f495bde0301d542706edf1e2bbd20471abca2c9be318cf900a8a6\": rpc error: code = NotFound desc = could not find container \"713b5c76cf2f495bde0301d542706edf1e2bbd20471abca2c9be318cf900a8a6\": container with ID starting with 713b5c76cf2f495bde0301d542706edf1e2bbd20471abca2c9be318cf900a8a6 not found: ID does not exist" Dec 12 15:25:32 crc kubenswrapper[5123]: I1212 15:25:32.999681 5123 scope.go:117] "RemoveContainer" containerID="ac1b5fd11f35eb3141c2a83a94e8eb7a9cfa0c5f6e02a4f2f12082950305fe86" Dec 12 15:25:32 crc kubenswrapper[5123]: E1212 15:25:32.999853 5123 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ac1b5fd11f35eb3141c2a83a94e8eb7a9cfa0c5f6e02a4f2f12082950305fe86\": container with ID starting with ac1b5fd11f35eb3141c2a83a94e8eb7a9cfa0c5f6e02a4f2f12082950305fe86 not found: ID does not exist" containerID="ac1b5fd11f35eb3141c2a83a94e8eb7a9cfa0c5f6e02a4f2f12082950305fe86" Dec 12 15:25:32 crc kubenswrapper[5123]: I1212 15:25:32.999877 5123 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ac1b5fd11f35eb3141c2a83a94e8eb7a9cfa0c5f6e02a4f2f12082950305fe86"} err="failed to get container status \"ac1b5fd11f35eb3141c2a83a94e8eb7a9cfa0c5f6e02a4f2f12082950305fe86\": rpc error: code = NotFound desc = could not find container \"ac1b5fd11f35eb3141c2a83a94e8eb7a9cfa0c5f6e02a4f2f12082950305fe86\": container with ID starting with ac1b5fd11f35eb3141c2a83a94e8eb7a9cfa0c5f6e02a4f2f12082950305fe86 not found: ID does not exist" Dec 12 15:25:32 crc kubenswrapper[5123]: I1212 15:25:32.999892 5123 scope.go:117] "RemoveContainer" containerID="eb0261e1394c1df75125ea600bb9195f3a80bc3aa9101a3cc6ca496dbe71d071" Dec 12 15:25:33 crc kubenswrapper[5123]: E1212 15:25:33.000137 5123 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"eb0261e1394c1df75125ea600bb9195f3a80bc3aa9101a3cc6ca496dbe71d071\": container with ID starting with eb0261e1394c1df75125ea600bb9195f3a80bc3aa9101a3cc6ca496dbe71d071 not found: ID does not exist" containerID="eb0261e1394c1df75125ea600bb9195f3a80bc3aa9101a3cc6ca496dbe71d071" Dec 12 15:25:33 crc kubenswrapper[5123]: I1212 15:25:33.000161 5123 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"eb0261e1394c1df75125ea600bb9195f3a80bc3aa9101a3cc6ca496dbe71d071"} err="failed to get container status \"eb0261e1394c1df75125ea600bb9195f3a80bc3aa9101a3cc6ca496dbe71d071\": rpc error: code = NotFound desc = could not find container \"eb0261e1394c1df75125ea600bb9195f3a80bc3aa9101a3cc6ca496dbe71d071\": container with ID starting with eb0261e1394c1df75125ea600bb9195f3a80bc3aa9101a3cc6ca496dbe71d071 not found: ID does not exist" Dec 12 15:25:33 crc kubenswrapper[5123]: I1212 15:25:33.000175 5123 scope.go:117] "RemoveContainer" containerID="e5594dc84936b908316bb447fdf41d5c1397abc4a9af95fdf3265d1a27d5fe15" Dec 12 15:25:33 crc kubenswrapper[5123]: I1212 15:25:33.001484 5123 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-fjqk7"] Dec 12 15:25:33 crc kubenswrapper[5123]: I1212 15:25:33.013971 5123 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-fjqk7"] Dec 12 15:25:33 crc kubenswrapper[5123]: I1212 15:25:33.019120 5123 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-pkqnl"] Dec 12 15:25:33 crc kubenswrapper[5123]: I1212 15:25:33.019539 5123 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-5d9d95bf5b-n874n" Dec 12 15:25:33 crc kubenswrapper[5123]: I1212 15:25:33.022546 5123 scope.go:117] "RemoveContainer" 
containerID="e5594dc84936b908316bb447fdf41d5c1397abc4a9af95fdf3265d1a27d5fe15"
Dec 12 15:25:33 crc kubenswrapper[5123]: E1212 15:25:33.023112 5123 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e5594dc84936b908316bb447fdf41d5c1397abc4a9af95fdf3265d1a27d5fe15\": container with ID starting with e5594dc84936b908316bb447fdf41d5c1397abc4a9af95fdf3265d1a27d5fe15 not found: ID does not exist" containerID="e5594dc84936b908316bb447fdf41d5c1397abc4a9af95fdf3265d1a27d5fe15"
Dec 12 15:25:33 crc kubenswrapper[5123]: I1212 15:25:33.023150 5123 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e5594dc84936b908316bb447fdf41d5c1397abc4a9af95fdf3265d1a27d5fe15"} err="failed to get container status \"e5594dc84936b908316bb447fdf41d5c1397abc4a9af95fdf3265d1a27d5fe15\": rpc error: code = NotFound desc = could not find container \"e5594dc84936b908316bb447fdf41d5c1397abc4a9af95fdf3265d1a27d5fe15\": container with ID starting with e5594dc84936b908316bb447fdf41d5c1397abc4a9af95fdf3265d1a27d5fe15 not found: ID does not exist"
Dec 12 15:25:33 crc kubenswrapper[5123]: I1212 15:25:33.023200 5123 scope.go:117] "RemoveContainer" containerID="892567596a2ee965736c8fe6691412873ba2da4ef6b77abea638997fed993de4"
Dec 12 15:25:33 crc kubenswrapper[5123]: I1212 15:25:33.025775 5123 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-pkqnl"]
Dec 12 15:25:33 crc kubenswrapper[5123]: I1212 15:25:33.030990 5123 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-shltm"]
Dec 12 15:25:33 crc kubenswrapper[5123]: I1212 15:25:33.036630 5123 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-shltm"]
Dec 12 15:25:33 crc kubenswrapper[5123]: I1212 15:25:33.050995 5123 scope.go:117] "RemoveContainer" containerID="77c0ddd8665197f028c94c2ed7caea3ceec2a8a57efac9660fb59b3c6ef98fe5"
Dec 12 15:25:33 crc kubenswrapper[5123]: I1212 15:25:33.079311 5123 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-66587d64c8-ts2mt"]
Dec 12 15:25:33 crc kubenswrapper[5123]: I1212 15:25:33.104172 5123 scope.go:117] "RemoveContainer" containerID="851ae05989754b1224c751b772629f90c11c7d25f0df8fff866e189e65de6217"
Dec 12 15:25:33 crc kubenswrapper[5123]: I1212 15:25:33.139534 5123 scope.go:117] "RemoveContainer" containerID="892567596a2ee965736c8fe6691412873ba2da4ef6b77abea638997fed993de4"
Dec 12 15:25:33 crc kubenswrapper[5123]: E1212 15:25:33.140820 5123 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"892567596a2ee965736c8fe6691412873ba2da4ef6b77abea638997fed993de4\": container with ID starting with 892567596a2ee965736c8fe6691412873ba2da4ef6b77abea638997fed993de4 not found: ID does not exist" containerID="892567596a2ee965736c8fe6691412873ba2da4ef6b77abea638997fed993de4"
Dec 12 15:25:33 crc kubenswrapper[5123]: I1212 15:25:33.140873 5123 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"892567596a2ee965736c8fe6691412873ba2da4ef6b77abea638997fed993de4"} err="failed to get container status \"892567596a2ee965736c8fe6691412873ba2da4ef6b77abea638997fed993de4\": rpc error: code = NotFound desc = could not find container \"892567596a2ee965736c8fe6691412873ba2da4ef6b77abea638997fed993de4\": container with ID starting with 892567596a2ee965736c8fe6691412873ba2da4ef6b77abea638997fed993de4 not found: ID does not exist"
Dec 12 15:25:33 crc kubenswrapper[5123]: I1212 15:25:33.140906 5123 scope.go:117] "RemoveContainer" containerID="77c0ddd8665197f028c94c2ed7caea3ceec2a8a57efac9660fb59b3c6ef98fe5"
Dec 12 15:25:33 crc kubenswrapper[5123]: E1212 15:25:33.141701 5123 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"77c0ddd8665197f028c94c2ed7caea3ceec2a8a57efac9660fb59b3c6ef98fe5\": container with ID starting with 77c0ddd8665197f028c94c2ed7caea3ceec2a8a57efac9660fb59b3c6ef98fe5 not found: ID does not exist" containerID="77c0ddd8665197f028c94c2ed7caea3ceec2a8a57efac9660fb59b3c6ef98fe5"
Dec 12 15:25:33 crc kubenswrapper[5123]: I1212 15:25:33.141730 5123 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"77c0ddd8665197f028c94c2ed7caea3ceec2a8a57efac9660fb59b3c6ef98fe5"} err="failed to get container status \"77c0ddd8665197f028c94c2ed7caea3ceec2a8a57efac9660fb59b3c6ef98fe5\": rpc error: code = NotFound desc = could not find container \"77c0ddd8665197f028c94c2ed7caea3ceec2a8a57efac9660fb59b3c6ef98fe5\": container with ID starting with 77c0ddd8665197f028c94c2ed7caea3ceec2a8a57efac9660fb59b3c6ef98fe5 not found: ID does not exist"
Dec 12 15:25:33 crc kubenswrapper[5123]: I1212 15:25:33.141750 5123 scope.go:117] "RemoveContainer" containerID="851ae05989754b1224c751b772629f90c11c7d25f0df8fff866e189e65de6217"
Dec 12 15:25:33 crc kubenswrapper[5123]: E1212 15:25:33.142256 5123 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"851ae05989754b1224c751b772629f90c11c7d25f0df8fff866e189e65de6217\": container with ID starting with 851ae05989754b1224c751b772629f90c11c7d25f0df8fff866e189e65de6217 not found: ID does not exist" containerID="851ae05989754b1224c751b772629f90c11c7d25f0df8fff866e189e65de6217"
Dec 12 15:25:33 crc kubenswrapper[5123]: I1212 15:25:33.142334 5123 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"851ae05989754b1224c751b772629f90c11c7d25f0df8fff866e189e65de6217"} err="failed to get container status \"851ae05989754b1224c751b772629f90c11c7d25f0df8fff866e189e65de6217\": rpc error: code = NotFound desc = could not find container \"851ae05989754b1224c751b772629f90c11c7d25f0df8fff866e189e65de6217\": container with ID starting with 851ae05989754b1224c751b772629f90c11c7d25f0df8fff866e189e65de6217 not found: ID does not exist"
Dec 12 15:25:33 crc kubenswrapper[5123]: I1212 15:25:33.142384 5123 scope.go:117] "RemoveContainer" containerID="38f4fb98b77c25f9aa70e039d23e646c91822eb7615f1898a508806e389cbccc"
Dec 12 15:25:33 crc kubenswrapper[5123]: I1212 15:25:33.165076 5123 scope.go:117] "RemoveContainer" containerID="31d34288be22fda87b5b38e3694e1fdc9f7cd37d6cc800d3e50607d9a7cd9733"
Dec 12 15:25:33 crc kubenswrapper[5123]: I1212 15:25:33.186464 5123 scope.go:117] "RemoveContainer" containerID="d812f912bcaca7aa20084a44e018e53df18c8e6b9494e5a9a9881c25a467fbfd"
Dec 12 15:25:33 crc kubenswrapper[5123]: I1212 15:25:33.209416 5123 scope.go:117] "RemoveContainer" containerID="38f4fb98b77c25f9aa70e039d23e646c91822eb7615f1898a508806e389cbccc"
Dec 12 15:25:33 crc kubenswrapper[5123]: E1212 15:25:33.210311 5123 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"38f4fb98b77c25f9aa70e039d23e646c91822eb7615f1898a508806e389cbccc\": container with ID starting with 38f4fb98b77c25f9aa70e039d23e646c91822eb7615f1898a508806e389cbccc not found: ID does not exist" containerID="38f4fb98b77c25f9aa70e039d23e646c91822eb7615f1898a508806e389cbccc"
Dec 12 15:25:33 crc kubenswrapper[5123]: I1212 15:25:33.210347 5123 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"38f4fb98b77c25f9aa70e039d23e646c91822eb7615f1898a508806e389cbccc"} err="failed to get container status \"38f4fb98b77c25f9aa70e039d23e646c91822eb7615f1898a508806e389cbccc\": rpc error: code = NotFound desc = could not find container \"38f4fb98b77c25f9aa70e039d23e646c91822eb7615f1898a508806e389cbccc\": container with ID starting with 38f4fb98b77c25f9aa70e039d23e646c91822eb7615f1898a508806e389cbccc not found: ID does not exist"
Dec 12 15:25:33 crc kubenswrapper[5123]: I1212 15:25:33.210378 5123 scope.go:117] "RemoveContainer" containerID="31d34288be22fda87b5b38e3694e1fdc9f7cd37d6cc800d3e50607d9a7cd9733"
Dec 12 15:25:33 crc kubenswrapper[5123]: E1212 15:25:33.211000 5123 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"31d34288be22fda87b5b38e3694e1fdc9f7cd37d6cc800d3e50607d9a7cd9733\": container with ID starting with 31d34288be22fda87b5b38e3694e1fdc9f7cd37d6cc800d3e50607d9a7cd9733 not found: ID does not exist" containerID="31d34288be22fda87b5b38e3694e1fdc9f7cd37d6cc800d3e50607d9a7cd9733"
Dec 12 15:25:33 crc kubenswrapper[5123]: I1212 15:25:33.211041 5123 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"31d34288be22fda87b5b38e3694e1fdc9f7cd37d6cc800d3e50607d9a7cd9733"} err="failed to get container status \"31d34288be22fda87b5b38e3694e1fdc9f7cd37d6cc800d3e50607d9a7cd9733\": rpc error: code = NotFound desc = could not find container \"31d34288be22fda87b5b38e3694e1fdc9f7cd37d6cc800d3e50607d9a7cd9733\": container with ID starting with 31d34288be22fda87b5b38e3694e1fdc9f7cd37d6cc800d3e50607d9a7cd9733 not found: ID does not exist"
Dec 12 15:25:33 crc kubenswrapper[5123]: I1212 15:25:33.211069 5123 scope.go:117] "RemoveContainer" containerID="d812f912bcaca7aa20084a44e018e53df18c8e6b9494e5a9a9881c25a467fbfd"
Dec 12 15:25:33 crc kubenswrapper[5123]: E1212 15:25:33.211814 5123 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d812f912bcaca7aa20084a44e018e53df18c8e6b9494e5a9a9881c25a467fbfd\": container with ID starting with d812f912bcaca7aa20084a44e018e53df18c8e6b9494e5a9a9881c25a467fbfd not found: ID does not exist" containerID="d812f912bcaca7aa20084a44e018e53df18c8e6b9494e5a9a9881c25a467fbfd"
Dec 12 15:25:33 crc kubenswrapper[5123]: I1212 15:25:33.211864 5123 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d812f912bcaca7aa20084a44e018e53df18c8e6b9494e5a9a9881c25a467fbfd"} err="failed to get container status \"d812f912bcaca7aa20084a44e018e53df18c8e6b9494e5a9a9881c25a467fbfd\": rpc error: code = NotFound desc = could not find container \"d812f912bcaca7aa20084a44e018e53df18c8e6b9494e5a9a9881c25a467fbfd\": container with ID starting with d812f912bcaca7aa20084a44e018e53df18c8e6b9494e5a9a9881c25a467fbfd not found: ID does not exist"
Dec 12 15:25:33 crc kubenswrapper[5123]: I1212 15:25:33.211902 5123 scope.go:117] "RemoveContainer" containerID="0fb530f9580a642bd8a5f0f8c69b353f2240d6cc25437ea1ef705c382ba87876"
Dec 12 15:25:33 crc kubenswrapper[5123]: I1212 15:25:33.235669 5123 scope.go:117] "RemoveContainer" containerID="e666e99973507b0e82a9bcc9dc23c6459b482cd949330b0548860064f0ceaff1"
Dec 12 15:25:33 crc kubenswrapper[5123]: I1212 15:25:33.257976 5123 scope.go:117] "RemoveContainer" containerID="790c72f7b17342d101e778596657f24cb7f929d807e953f49644bf1833e91e29"
Dec 12 15:25:33 crc kubenswrapper[5123]: I1212 15:25:33.292487 5123 scope.go:117] "RemoveContainer" containerID="0fb530f9580a642bd8a5f0f8c69b353f2240d6cc25437ea1ef705c382ba87876"
Dec 12 15:25:33 crc kubenswrapper[5123]: E1212 15:25:33.294058 5123 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0fb530f9580a642bd8a5f0f8c69b353f2240d6cc25437ea1ef705c382ba87876\": container with ID starting with 0fb530f9580a642bd8a5f0f8c69b353f2240d6cc25437ea1ef705c382ba87876 not found: ID does not exist" containerID="0fb530f9580a642bd8a5f0f8c69b353f2240d6cc25437ea1ef705c382ba87876"
Dec 12 15:25:33 crc kubenswrapper[5123]: I1212 15:25:33.294484 5123 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0fb530f9580a642bd8a5f0f8c69b353f2240d6cc25437ea1ef705c382ba87876"} err="failed to get container status \"0fb530f9580a642bd8a5f0f8c69b353f2240d6cc25437ea1ef705c382ba87876\": rpc error: code = NotFound desc = could not find container \"0fb530f9580a642bd8a5f0f8c69b353f2240d6cc25437ea1ef705c382ba87876\": container with ID starting with 0fb530f9580a642bd8a5f0f8c69b353f2240d6cc25437ea1ef705c382ba87876 not found: ID does not exist"
Dec 12 15:25:33 crc kubenswrapper[5123]: I1212 15:25:33.294518 5123 scope.go:117] "RemoveContainer" containerID="e666e99973507b0e82a9bcc9dc23c6459b482cd949330b0548860064f0ceaff1"
Dec 12 15:25:33 crc kubenswrapper[5123]: E1212 15:25:33.294873 5123 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e666e99973507b0e82a9bcc9dc23c6459b482cd949330b0548860064f0ceaff1\": container with ID starting with e666e99973507b0e82a9bcc9dc23c6459b482cd949330b0548860064f0ceaff1 not found: ID does not exist" containerID="e666e99973507b0e82a9bcc9dc23c6459b482cd949330b0548860064f0ceaff1"
Dec 12 15:25:33 crc kubenswrapper[5123]: I1212 15:25:33.294909 5123 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e666e99973507b0e82a9bcc9dc23c6459b482cd949330b0548860064f0ceaff1"} err="failed to get container status \"e666e99973507b0e82a9bcc9dc23c6459b482cd949330b0548860064f0ceaff1\": rpc error: code = NotFound desc = could not find container \"e666e99973507b0e82a9bcc9dc23c6459b482cd949330b0548860064f0ceaff1\": container with ID starting with e666e99973507b0e82a9bcc9dc23c6459b482cd949330b0548860064f0ceaff1 not found: ID does not exist"
Dec 12 15:25:33 crc kubenswrapper[5123]: I1212 15:25:33.294928 5123 scope.go:117] "RemoveContainer" containerID="790c72f7b17342d101e778596657f24cb7f929d807e953f49644bf1833e91e29"
Dec 12 15:25:33 crc kubenswrapper[5123]: E1212 15:25:33.295151 5123 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"790c72f7b17342d101e778596657f24cb7f929d807e953f49644bf1833e91e29\": container with ID starting with 790c72f7b17342d101e778596657f24cb7f929d807e953f49644bf1833e91e29 not found: ID does not exist" containerID="790c72f7b17342d101e778596657f24cb7f929d807e953f49644bf1833e91e29"
Dec 12 15:25:33 crc kubenswrapper[5123]: I1212 15:25:33.295176 5123 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"790c72f7b17342d101e778596657f24cb7f929d807e953f49644bf1833e91e29"} err="failed to get container status \"790c72f7b17342d101e778596657f24cb7f929d807e953f49644bf1833e91e29\": rpc error: code = NotFound desc = could not find container \"790c72f7b17342d101e778596657f24cb7f929d807e953f49644bf1833e91e29\": container with ID starting with 790c72f7b17342d101e778596657f24cb7f929d807e953f49644bf1833e91e29 not found: ID does not exist"
Dec 12 15:25:33 crc kubenswrapper[5123]: I1212 15:25:33.490863 5123 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-d46lq"]
Dec 12 15:25:33 crc kubenswrapper[5123]: I1212 15:25:33.491624 5123 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="320bf855-399c-4de0-bbbd-8dcdcb5d9e2a" containerName="extract-utilities"
Dec 12 15:25:33 crc kubenswrapper[5123]: I1212 15:25:33.491656 5123 state_mem.go:107] "Deleted CPUSet assignment" podUID="320bf855-399c-4de0-bbbd-8dcdcb5d9e2a" containerName="extract-utilities"
Dec 12 15:25:33 crc kubenswrapper[5123]: I1212 15:25:33.491674 5123 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="320bf855-399c-4de0-bbbd-8dcdcb5d9e2a" containerName="registry-server"
Dec 12 15:25:33 crc kubenswrapper[5123]: I1212 15:25:33.491684 5123 state_mem.go:107] "Deleted CPUSet assignment" podUID="320bf855-399c-4de0-bbbd-8dcdcb5d9e2a" containerName="registry-server"
Dec 12 15:25:33 crc kubenswrapper[5123]: I1212 15:25:33.491697 5123 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="78a70363-f10e-4d12-8279-c7f7f3b8402b" containerName="extract-content"
Dec 12 15:25:33 crc kubenswrapper[5123]: I1212 15:25:33.491703 5123 state_mem.go:107] "Deleted CPUSet assignment" podUID="78a70363-f10e-4d12-8279-c7f7f3b8402b" containerName="extract-content"
Dec 12 15:25:33 crc kubenswrapper[5123]: I1212 15:25:33.491714 5123 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="78a70363-f10e-4d12-8279-c7f7f3b8402b" containerName="registry-server"
Dec 12 15:25:33 crc kubenswrapper[5123]: I1212 15:25:33.491721 5123 state_mem.go:107] "Deleted CPUSet assignment" podUID="78a70363-f10e-4d12-8279-c7f7f3b8402b" containerName="registry-server"
Dec 12 15:25:33 crc kubenswrapper[5123]: I1212 15:25:33.491729 5123 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="f8cdb4da-d02c-42f7-9f61-cb5e162d26a7" containerName="extract-utilities"
Dec 12 15:25:33 crc kubenswrapper[5123]: I1212 15:25:33.491736 5123 state_mem.go:107] "Deleted CPUSet assignment" podUID="f8cdb4da-d02c-42f7-9f61-cb5e162d26a7" containerName="extract-utilities"
Dec 12 15:25:33 crc kubenswrapper[5123]: I1212 15:25:33.491744 5123 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="402bc75d-15b2-46d8-9455-d2d8c8c7c47a" containerName="extract-content"
Dec 12 15:25:33 crc kubenswrapper[5123]: I1212 15:25:33.491750 5123 state_mem.go:107] "Deleted CPUSet assignment" podUID="402bc75d-15b2-46d8-9455-d2d8c8c7c47a" containerName="extract-content"
Dec 12 15:25:33 crc kubenswrapper[5123]: I1212 15:25:33.491766 5123 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="402bc75d-15b2-46d8-9455-d2d8c8c7c47a" containerName="extract-utilities"
Dec 12 15:25:33 crc kubenswrapper[5123]: I1212 15:25:33.491771 5123 state_mem.go:107] "Deleted CPUSet assignment" podUID="402bc75d-15b2-46d8-9455-d2d8c8c7c47a" containerName="extract-utilities"
Dec 12 15:25:33 crc kubenswrapper[5123]: I1212 15:25:33.491776 5123 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="f8cdb4da-d02c-42f7-9f61-cb5e162d26a7" containerName="registry-server"
Dec 12 15:25:33 crc kubenswrapper[5123]: I1212 15:25:33.491782 5123 state_mem.go:107] "Deleted CPUSet assignment" podUID="f8cdb4da-d02c-42f7-9f61-cb5e162d26a7" containerName="registry-server"
Dec 12 15:25:33 crc kubenswrapper[5123]: I1212 15:25:33.491792 5123 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="78a70363-f10e-4d12-8279-c7f7f3b8402b" containerName="extract-utilities"
Dec 12 15:25:33 crc kubenswrapper[5123]: I1212 15:25:33.491797 5123 state_mem.go:107] "Deleted CPUSet assignment" podUID="78a70363-f10e-4d12-8279-c7f7f3b8402b" containerName="extract-utilities"
Dec 12 15:25:33 crc kubenswrapper[5123]: I1212 15:25:33.491805 5123 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="320bf855-399c-4de0-bbbd-8dcdcb5d9e2a" containerName="extract-content"
Dec 12 15:25:33 crc kubenswrapper[5123]: I1212 15:25:33.491810 5123 state_mem.go:107] "Deleted CPUSet assignment" podUID="320bf855-399c-4de0-bbbd-8dcdcb5d9e2a" containerName="extract-content"
Dec 12 15:25:33 crc kubenswrapper[5123]: I1212 15:25:33.491822 5123 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="402bc75d-15b2-46d8-9455-d2d8c8c7c47a" containerName="registry-server"
Dec 12 15:25:33 crc kubenswrapper[5123]: I1212 15:25:33.491833 5123 state_mem.go:107] "Deleted CPUSet assignment" podUID="402bc75d-15b2-46d8-9455-d2d8c8c7c47a" containerName="registry-server"
Dec 12 15:25:33 crc kubenswrapper[5123]: I1212 15:25:33.491847 5123 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="f8cdb4da-d02c-42f7-9f61-cb5e162d26a7" containerName="extract-content"
Dec 12 15:25:33 crc kubenswrapper[5123]: I1212 15:25:33.491855 5123 state_mem.go:107] "Deleted CPUSet assignment" podUID="f8cdb4da-d02c-42f7-9f61-cb5e162d26a7" containerName="extract-content"
Dec 12 15:25:33 crc kubenswrapper[5123]: I1212 15:25:33.491867 5123 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="17ce8feb-99e5-42f3-a808-2dd39bc57377" containerName="marketplace-operator"
Dec 12 15:25:33 crc kubenswrapper[5123]: I1212 15:25:33.491874 5123 state_mem.go:107] "Deleted CPUSet assignment" podUID="17ce8feb-99e5-42f3-a808-2dd39bc57377" containerName="marketplace-operator"
Dec 12 15:25:33 crc kubenswrapper[5123]: I1212 15:25:33.491967 5123 memory_manager.go:356] "RemoveStaleState removing state" podUID="320bf855-399c-4de0-bbbd-8dcdcb5d9e2a" containerName="registry-server"
Dec 12 15:25:33 crc kubenswrapper[5123]: I1212 15:25:33.491985 5123 memory_manager.go:356] "RemoveStaleState removing state" podUID="402bc75d-15b2-46d8-9455-d2d8c8c7c47a" containerName="registry-server"
Dec 12 15:25:33 crc kubenswrapper[5123]: I1212 15:25:33.491998 5123 memory_manager.go:356] "RemoveStaleState removing state" podUID="17ce8feb-99e5-42f3-a808-2dd39bc57377" containerName="marketplace-operator"
Dec 12 15:25:33 crc kubenswrapper[5123]: I1212 15:25:33.492008 5123 memory_manager.go:356] "RemoveStaleState removing state" podUID="f8cdb4da-d02c-42f7-9f61-cb5e162d26a7" containerName="registry-server"
Dec 12 15:25:33 crc kubenswrapper[5123]: I1212 15:25:33.492017 5123 memory_manager.go:356] "RemoveStaleState removing state" podUID="78a70363-f10e-4d12-8279-c7f7f3b8402b" containerName="registry-server"
Dec 12 15:25:33 crc kubenswrapper[5123]: I1212 15:25:33.496357 5123 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-d46lq"
Dec 12 15:25:33 crc kubenswrapper[5123]: I1212 15:25:33.499013 5123 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"redhat-marketplace-dockercfg-gg4w7\""
Dec 12 15:25:33 crc kubenswrapper[5123]: I1212 15:25:33.503563 5123 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-d46lq"]
Dec 12 15:25:33 crc kubenswrapper[5123]: I1212 15:25:33.505428 5123 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9qdnj\" (UniqueName: \"kubernetes.io/projected/d9dbf7b6-6aed-452d-8398-d8d688899061-kube-api-access-9qdnj\") pod \"redhat-marketplace-d46lq\" (UID: \"d9dbf7b6-6aed-452d-8398-d8d688899061\") " pod="openshift-marketplace/redhat-marketplace-d46lq"
Dec 12 15:25:33 crc kubenswrapper[5123]: I1212 15:25:33.505492 5123 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d9dbf7b6-6aed-452d-8398-d8d688899061-catalog-content\") pod \"redhat-marketplace-d46lq\" (UID: \"d9dbf7b6-6aed-452d-8398-d8d688899061\") " pod="openshift-marketplace/redhat-marketplace-d46lq"
Dec 12 15:25:33 crc kubenswrapper[5123]: I1212 15:25:33.505551 5123 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d9dbf7b6-6aed-452d-8398-d8d688899061-utilities\") pod \"redhat-marketplace-d46lq\" (UID: \"d9dbf7b6-6aed-452d-8398-d8d688899061\") " pod="openshift-marketplace/redhat-marketplace-d46lq"
Dec 12 15:25:33 crc kubenswrapper[5123]: I1212 15:25:33.607091 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d9dbf7b6-6aed-452d-8398-d8d688899061-catalog-content\") pod \"redhat-marketplace-d46lq\" (UID: \"d9dbf7b6-6aed-452d-8398-d8d688899061\") " pod="openshift-marketplace/redhat-marketplace-d46lq"
Dec 12 15:25:33 crc kubenswrapper[5123]: I1212 15:25:33.607206 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d9dbf7b6-6aed-452d-8398-d8d688899061-utilities\") pod \"redhat-marketplace-d46lq\" (UID: \"d9dbf7b6-6aed-452d-8398-d8d688899061\") " pod="openshift-marketplace/redhat-marketplace-d46lq"
Dec 12 15:25:33 crc kubenswrapper[5123]: I1212 15:25:33.607265 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-9qdnj\" (UniqueName: \"kubernetes.io/projected/d9dbf7b6-6aed-452d-8398-d8d688899061-kube-api-access-9qdnj\") pod \"redhat-marketplace-d46lq\" (UID: \"d9dbf7b6-6aed-452d-8398-d8d688899061\") " pod="openshift-marketplace/redhat-marketplace-d46lq"
Dec 12 15:25:33 crc kubenswrapper[5123]: I1212 15:25:33.607793 5123 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d9dbf7b6-6aed-452d-8398-d8d688899061-catalog-content\") pod \"redhat-marketplace-d46lq\" (UID: \"d9dbf7b6-6aed-452d-8398-d8d688899061\") " pod="openshift-marketplace/redhat-marketplace-d46lq"
Dec 12 15:25:33 crc kubenswrapper[5123]: I1212 15:25:33.607832 5123 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d9dbf7b6-6aed-452d-8398-d8d688899061-utilities\") pod \"redhat-marketplace-d46lq\" (UID: \"d9dbf7b6-6aed-452d-8398-d8d688899061\") " pod="openshift-marketplace/redhat-marketplace-d46lq"
Dec 12 15:25:33 crc kubenswrapper[5123]: I1212 15:25:33.630297 5123 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-9qdnj\" (UniqueName: \"kubernetes.io/projected/d9dbf7b6-6aed-452d-8398-d8d688899061-kube-api-access-9qdnj\") pod \"redhat-marketplace-d46lq\" (UID: \"d9dbf7b6-6aed-452d-8398-d8d688899061\") " pod="openshift-marketplace/redhat-marketplace-d46lq"
Dec 12 15:25:33 crc kubenswrapper[5123]: I1212 15:25:33.909332 5123 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-d46lq"
Dec 12 15:25:33 crc kubenswrapper[5123]: I1212 15:25:33.916365 5123 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="17ce8feb-99e5-42f3-a808-2dd39bc57377" path="/var/lib/kubelet/pods/17ce8feb-99e5-42f3-a808-2dd39bc57377/volumes"
Dec 12 15:25:33 crc kubenswrapper[5123]: I1212 15:25:33.929140 5123 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="320bf855-399c-4de0-bbbd-8dcdcb5d9e2a" path="/var/lib/kubelet/pods/320bf855-399c-4de0-bbbd-8dcdcb5d9e2a/volumes"
Dec 12 15:25:33 crc kubenswrapper[5123]: I1212 15:25:33.929955 5123 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="402bc75d-15b2-46d8-9455-d2d8c8c7c47a" path="/var/lib/kubelet/pods/402bc75d-15b2-46d8-9455-d2d8c8c7c47a/volumes"
Dec 12 15:25:33 crc kubenswrapper[5123]: I1212 15:25:33.930627 5123 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="78a70363-f10e-4d12-8279-c7f7f3b8402b" path="/var/lib/kubelet/pods/78a70363-f10e-4d12-8279-c7f7f3b8402b/volumes"
Dec 12 15:25:33 crc kubenswrapper[5123]: I1212 15:25:33.931805 5123 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f8cdb4da-d02c-42f7-9f61-cb5e162d26a7" path="/var/lib/kubelet/pods/f8cdb4da-d02c-42f7-9f61-cb5e162d26a7/volumes"
Dec 12 15:25:33 crc kubenswrapper[5123]: I1212 15:25:33.934729 5123 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-547dbd544d-kb524" event={"ID":"c5a8f453-7300-4884-a6fb-66f2819dfaaf","Type":"ContainerStarted","Data":"73a74670c40fadc557dfd375a66ccf96cebbc881145076e1a7e5c7cc20ac10bc"}
Dec 12 15:25:33 crc kubenswrapper[5123]: I1212 15:25:33.936559 5123 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/marketplace-operator-547dbd544d-kb524"
Dec 12 15:25:33 crc kubenswrapper[5123]: I1212 15:25:33.944955 5123 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-4bq89"]
Dec 12 15:25:33 crc kubenswrapper[5123]: I1212 15:25:33.957762 5123 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-4bq89"]
Dec 12 15:25:33 crc kubenswrapper[5123]: I1212 15:25:33.958032 5123 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-4bq89"
Dec 12 15:25:33 crc kubenswrapper[5123]: I1212 15:25:33.965061 5123 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"certified-operators-dockercfg-7cl8d\""
Dec 12 15:25:33 crc kubenswrapper[5123]: I1212 15:25:33.972325 5123 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-547dbd544d-kb524" podStartSLOduration=2.972290563 podStartE2EDuration="2.972290563s" podCreationTimestamp="2025-12-12 15:25:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 15:25:33.964479386 +0000 UTC m=+362.774431927" watchObservedRunningTime="2025-12-12 15:25:33.972290563 +0000 UTC m=+362.782243074"
Dec 12 15:25:33 crc kubenswrapper[5123]: I1212 15:25:33.983409 5123 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-547dbd544d-kb524"
Dec 12 15:25:34 crc kubenswrapper[5123]: I1212 15:25:34.012280 5123 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6e5c0f85-f2ac-41a0-b733-b1a01522a433-utilities\") pod \"certified-operators-4bq89\" (UID: \"6e5c0f85-f2ac-41a0-b733-b1a01522a433\") " pod="openshift-marketplace/certified-operators-4bq89"
Dec 12 15:25:34 crc kubenswrapper[5123]: I1212 15:25:34.012525 5123 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ww8p2\" (UniqueName: \"kubernetes.io/projected/6e5c0f85-f2ac-41a0-b733-b1a01522a433-kube-api-access-ww8p2\") pod \"certified-operators-4bq89\" (UID: \"6e5c0f85-f2ac-41a0-b733-b1a01522a433\") " pod="openshift-marketplace/certified-operators-4bq89"
Dec 12 15:25:34 crc kubenswrapper[5123]: I1212 15:25:34.012818 5123 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6e5c0f85-f2ac-41a0-b733-b1a01522a433-catalog-content\") pod \"certified-operators-4bq89\" (UID: \"6e5c0f85-f2ac-41a0-b733-b1a01522a433\") " pod="openshift-marketplace/certified-operators-4bq89"
Dec 12 15:25:34 crc kubenswrapper[5123]: I1212 15:25:34.114040 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6e5c0f85-f2ac-41a0-b733-b1a01522a433-catalog-content\") pod \"certified-operators-4bq89\" (UID: \"6e5c0f85-f2ac-41a0-b733-b1a01522a433\") " pod="openshift-marketplace/certified-operators-4bq89"
Dec 12 15:25:34 crc kubenswrapper[5123]: I1212 15:25:34.114556 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6e5c0f85-f2ac-41a0-b733-b1a01522a433-utilities\") pod \"certified-operators-4bq89\" (UID: \"6e5c0f85-f2ac-41a0-b733-b1a01522a433\") " pod="openshift-marketplace/certified-operators-4bq89"
Dec 12 15:25:34 crc kubenswrapper[5123]: I1212 15:25:34.114588 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-ww8p2\" (UniqueName: \"kubernetes.io/projected/6e5c0f85-f2ac-41a0-b733-b1a01522a433-kube-api-access-ww8p2\") pod \"certified-operators-4bq89\" (UID: \"6e5c0f85-f2ac-41a0-b733-b1a01522a433\") " pod="openshift-marketplace/certified-operators-4bq89"
Dec 12 15:25:34 crc kubenswrapper[5123]: I1212 15:25:34.114706 5123 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6e5c0f85-f2ac-41a0-b733-b1a01522a433-catalog-content\") pod \"certified-operators-4bq89\" (UID: \"6e5c0f85-f2ac-41a0-b733-b1a01522a433\") " pod="openshift-marketplace/certified-operators-4bq89"
Dec 12 15:25:34 crc kubenswrapper[5123]: I1212 15:25:34.114859 5123 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6e5c0f85-f2ac-41a0-b733-b1a01522a433-utilities\") pod \"certified-operators-4bq89\" (UID: \"6e5c0f85-f2ac-41a0-b733-b1a01522a433\") " pod="openshift-marketplace/certified-operators-4bq89"
Dec 12 15:25:34 crc kubenswrapper[5123]: I1212 15:25:34.137824 5123 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-ww8p2\" (UniqueName: \"kubernetes.io/projected/6e5c0f85-f2ac-41a0-b733-b1a01522a433-kube-api-access-ww8p2\") pod \"certified-operators-4bq89\" (UID: \"6e5c0f85-f2ac-41a0-b733-b1a01522a433\") " pod="openshift-marketplace/certified-operators-4bq89"
Dec 12 15:25:34 crc kubenswrapper[5123]: I1212 15:25:34.317523 5123 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-4bq89"
Dec 12 15:25:34 crc kubenswrapper[5123]: I1212 15:25:34.414785 5123 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-d46lq"]
Dec 12 15:25:34 crc kubenswrapper[5123]: I1212 15:25:34.760008 5123 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-4bq89"]
Dec 12 15:25:34 crc kubenswrapper[5123]: I1212 15:25:34.973574 5123 generic.go:358] "Generic (PLEG): container finished" podID="d9dbf7b6-6aed-452d-8398-d8d688899061" containerID="51b5a76b8dbaa3a88c351ea90f6f470a4bc68c7a2e27487ebb99ff51270ecb14" exitCode=0
Dec 12 15:25:34 crc kubenswrapper[5123]: I1212 15:25:34.973705 5123 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-d46lq" event={"ID":"d9dbf7b6-6aed-452d-8398-d8d688899061","Type":"ContainerDied","Data":"51b5a76b8dbaa3a88c351ea90f6f470a4bc68c7a2e27487ebb99ff51270ecb14"}
Dec 12 15:25:34 crc kubenswrapper[5123]: I1212 15:25:34.974296 5123 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-d46lq" event={"ID":"d9dbf7b6-6aed-452d-8398-d8d688899061","Type":"ContainerStarted","Data":"d58a3763047181139236a63b33b15fc824ef91238df55642d1f0faae3d69de62"}
Dec 12 15:25:34 crc kubenswrapper[5123]: I1212 15:25:34.996879 5123 generic.go:358] "Generic (PLEG): container finished" podID="6e5c0f85-f2ac-41a0-b733-b1a01522a433" containerID="1501333292c388435d41ce97faa099a4dfbc640b3dd2c278b680d9c51e1bd29c" exitCode=0
Dec 12 15:25:35 crc kubenswrapper[5123]: I1212 15:25:35.001025 5123 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4bq89" event={"ID":"6e5c0f85-f2ac-41a0-b733-b1a01522a433","Type":"ContainerDied","Data":"1501333292c388435d41ce97faa099a4dfbc640b3dd2c278b680d9c51e1bd29c"}
Dec 12 15:25:35 crc kubenswrapper[5123]: I1212 15:25:35.001141 5123 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4bq89" event={"ID":"6e5c0f85-f2ac-41a0-b733-b1a01522a433","Type":"ContainerStarted","Data":"6623edcc06366336b8e955091058c4aed0bf779e47c2f23125ff6c598e69ff38"}
Dec 12 15:25:35 crc kubenswrapper[5123]: I1212 15:25:35.695767 5123 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-28t42"]
Dec 12 15:25:35 crc kubenswrapper[5123]: I1212 15:25:35.702701 5123 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-28t42"
Dec 12 15:25:35 crc kubenswrapper[5123]: I1212 15:25:35.705409 5123 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"redhat-operators-dockercfg-9gxlh\""
Dec 12 15:25:35 crc kubenswrapper[5123]: I1212 15:25:35.708445 5123 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-28t42"]
Dec 12 15:25:35 crc kubenswrapper[5123]: I1212 15:25:35.764124 5123 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a7e2845d-76f0-49d6-8489-f7f8302e005c-utilities\") pod \"redhat-operators-28t42\" (UID: \"a7e2845d-76f0-49d6-8489-f7f8302e005c\") " pod="openshift-marketplace/redhat-operators-28t42"
Dec 12 15:25:35 crc kubenswrapper[5123]: I1212 15:25:35.764306 5123 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a7e2845d-76f0-49d6-8489-f7f8302e005c-catalog-content\") pod \"redhat-operators-28t42\" (UID: \"a7e2845d-76f0-49d6-8489-f7f8302e005c\") " pod="openshift-marketplace/redhat-operators-28t42"
Dec 12 15:25:35 crc kubenswrapper[5123]: I1212 15:25:35.764350 5123 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z5ptj\" (UniqueName: \"kubernetes.io/projected/a7e2845d-76f0-49d6-8489-f7f8302e005c-kube-api-access-z5ptj\") pod \"redhat-operators-28t42\" (UID: \"a7e2845d-76f0-49d6-8489-f7f8302e005c\") " pod="openshift-marketplace/redhat-operators-28t42"
Dec 12 15:25:35 crc kubenswrapper[5123]: I1212 15:25:35.865352 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a7e2845d-76f0-49d6-8489-f7f8302e005c-utilities\") pod \"redhat-operators-28t42\" (UID: \"a7e2845d-76f0-49d6-8489-f7f8302e005c\") " pod="openshift-marketplace/redhat-operators-28t42"
Dec 12 15:25:35 crc kubenswrapper[5123]: I1212 15:25:35.865419 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a7e2845d-76f0-49d6-8489-f7f8302e005c-catalog-content\") pod \"redhat-operators-28t42\" (UID: \"a7e2845d-76f0-49d6-8489-f7f8302e005c\") " pod="openshift-marketplace/redhat-operators-28t42"
Dec 12 15:25:35 crc kubenswrapper[5123]: I1212 15:25:35.865442 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-z5ptj\" (UniqueName: \"kubernetes.io/projected/a7e2845d-76f0-49d6-8489-f7f8302e005c-kube-api-access-z5ptj\") pod \"redhat-operators-28t42\" (UID: \"a7e2845d-76f0-49d6-8489-f7f8302e005c\") " pod="openshift-marketplace/redhat-operators-28t42"
Dec 12 15:25:35 crc kubenswrapper[5123]: I1212 15:25:35.867045 5123 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a7e2845d-76f0-49d6-8489-f7f8302e005c-utilities\") pod \"redhat-operators-28t42\" (UID: \"a7e2845d-76f0-49d6-8489-f7f8302e005c\") " pod="openshift-marketplace/redhat-operators-28t42"
Dec 12 15:25:35 crc kubenswrapper[5123]: I1212 15:25:35.867317 5123 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a7e2845d-76f0-49d6-8489-f7f8302e005c-catalog-content\") pod \"redhat-operators-28t42\" (UID: \"a7e2845d-76f0-49d6-8489-f7f8302e005c\") " pod="openshift-marketplace/redhat-operators-28t42"
Dec 12 15:25:35 crc kubenswrapper[5123]: I1212 15:25:35.893525 5123 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-z5ptj\" (UniqueName: \"kubernetes.io/projected/a7e2845d-76f0-49d6-8489-f7f8302e005c-kube-api-access-z5ptj\") pod \"redhat-operators-28t42\" (UID: \"a7e2845d-76f0-49d6-8489-f7f8302e005c\") " pod="openshift-marketplace/redhat-operators-28t42"
Dec 12 15:25:36 crc kubenswrapper[5123]: I1212 15:25:36.016986 5123 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-d46lq" event={"ID":"d9dbf7b6-6aed-452d-8398-d8d688899061","Type":"ContainerStarted","Data":"d1eb7d4829dcd23a5b94205eda19c51d228781c0d07f4c87bb66d7f705570e8b"}
Dec 12 15:25:36 crc kubenswrapper[5123]: I1212 15:25:36.050971 5123 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-28t42"
Dec 12 15:25:36 crc kubenswrapper[5123]: I1212 15:25:36.293484 5123 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-shpfq"]
Dec 12 15:25:36 crc kubenswrapper[5123]: I1212 15:25:36.299977 5123 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openshift-marketplace/community-operators-shpfq" Dec 12 15:25:36 crc kubenswrapper[5123]: I1212 15:25:36.305439 5123 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"community-operators-dockercfg-vrd5f\"" Dec 12 15:25:36 crc kubenswrapper[5123]: I1212 15:25:36.319086 5123 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-shpfq"] Dec 12 15:25:36 crc kubenswrapper[5123]: I1212 15:25:36.381313 5123 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/83632690-6a7a-4f66-9f4b-a7fd6f11d996-utilities\") pod \"community-operators-shpfq\" (UID: \"83632690-6a7a-4f66-9f4b-a7fd6f11d996\") " pod="openshift-marketplace/community-operators-shpfq" Dec 12 15:25:36 crc kubenswrapper[5123]: I1212 15:25:36.381450 5123 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/83632690-6a7a-4f66-9f4b-a7fd6f11d996-catalog-content\") pod \"community-operators-shpfq\" (UID: \"83632690-6a7a-4f66-9f4b-a7fd6f11d996\") " pod="openshift-marketplace/community-operators-shpfq" Dec 12 15:25:36 crc kubenswrapper[5123]: I1212 15:25:36.381514 5123 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cfbvc\" (UniqueName: \"kubernetes.io/projected/83632690-6a7a-4f66-9f4b-a7fd6f11d996-kube-api-access-cfbvc\") pod \"community-operators-shpfq\" (UID: \"83632690-6a7a-4f66-9f4b-a7fd6f11d996\") " pod="openshift-marketplace/community-operators-shpfq" Dec 12 15:25:36 crc kubenswrapper[5123]: I1212 15:25:36.483063 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/83632690-6a7a-4f66-9f4b-a7fd6f11d996-catalog-content\") pod 
\"community-operators-shpfq\" (UID: \"83632690-6a7a-4f66-9f4b-a7fd6f11d996\") " pod="openshift-marketplace/community-operators-shpfq" Dec 12 15:25:36 crc kubenswrapper[5123]: I1212 15:25:36.483480 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-cfbvc\" (UniqueName: \"kubernetes.io/projected/83632690-6a7a-4f66-9f4b-a7fd6f11d996-kube-api-access-cfbvc\") pod \"community-operators-shpfq\" (UID: \"83632690-6a7a-4f66-9f4b-a7fd6f11d996\") " pod="openshift-marketplace/community-operators-shpfq" Dec 12 15:25:36 crc kubenswrapper[5123]: I1212 15:25:36.483593 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/83632690-6a7a-4f66-9f4b-a7fd6f11d996-utilities\") pod \"community-operators-shpfq\" (UID: \"83632690-6a7a-4f66-9f4b-a7fd6f11d996\") " pod="openshift-marketplace/community-operators-shpfq" Dec 12 15:25:36 crc kubenswrapper[5123]: I1212 15:25:36.483818 5123 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/83632690-6a7a-4f66-9f4b-a7fd6f11d996-catalog-content\") pod \"community-operators-shpfq\" (UID: \"83632690-6a7a-4f66-9f4b-a7fd6f11d996\") " pod="openshift-marketplace/community-operators-shpfq" Dec 12 15:25:36 crc kubenswrapper[5123]: I1212 15:25:36.484053 5123 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/83632690-6a7a-4f66-9f4b-a7fd6f11d996-utilities\") pod \"community-operators-shpfq\" (UID: \"83632690-6a7a-4f66-9f4b-a7fd6f11d996\") " pod="openshift-marketplace/community-operators-shpfq" Dec 12 15:25:36 crc kubenswrapper[5123]: I1212 15:25:36.508662 5123 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-cfbvc\" (UniqueName: \"kubernetes.io/projected/83632690-6a7a-4f66-9f4b-a7fd6f11d996-kube-api-access-cfbvc\") pod \"community-operators-shpfq\" (UID: 
\"83632690-6a7a-4f66-9f4b-a7fd6f11d996\") " pod="openshift-marketplace/community-operators-shpfq" Dec 12 15:25:36 crc kubenswrapper[5123]: I1212 15:25:36.511806 5123 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-28t42"] Dec 12 15:25:36 crc kubenswrapper[5123]: I1212 15:25:36.628499 5123 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-shpfq" Dec 12 15:25:37 crc kubenswrapper[5123]: I1212 15:25:37.127523 5123 generic.go:358] "Generic (PLEG): container finished" podID="d9dbf7b6-6aed-452d-8398-d8d688899061" containerID="d1eb7d4829dcd23a5b94205eda19c51d228781c0d07f4c87bb66d7f705570e8b" exitCode=0 Dec 12 15:25:37 crc kubenswrapper[5123]: I1212 15:25:37.127923 5123 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-d46lq" event={"ID":"d9dbf7b6-6aed-452d-8398-d8d688899061","Type":"ContainerDied","Data":"d1eb7d4829dcd23a5b94205eda19c51d228781c0d07f4c87bb66d7f705570e8b"} Dec 12 15:25:37 crc kubenswrapper[5123]: I1212 15:25:37.132046 5123 generic.go:358] "Generic (PLEG): container finished" podID="a7e2845d-76f0-49d6-8489-f7f8302e005c" containerID="ef952a00a63d9bdbbc236c1184a856451393035e6a2c77a28d373182a21a76e4" exitCode=0 Dec 12 15:25:37 crc kubenswrapper[5123]: I1212 15:25:37.132515 5123 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-28t42" event={"ID":"a7e2845d-76f0-49d6-8489-f7f8302e005c","Type":"ContainerDied","Data":"ef952a00a63d9bdbbc236c1184a856451393035e6a2c77a28d373182a21a76e4"} Dec 12 15:25:37 crc kubenswrapper[5123]: I1212 15:25:37.132568 5123 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-28t42" event={"ID":"a7e2845d-76f0-49d6-8489-f7f8302e005c","Type":"ContainerStarted","Data":"73f4f3ec82c8e8c6fd3474c0b0172f2519999380bf5955df61a3469e5f7ed6d1"} Dec 12 15:25:37 crc kubenswrapper[5123]: I1212 15:25:37.149835 5123 
generic.go:358] "Generic (PLEG): container finished" podID="6e5c0f85-f2ac-41a0-b733-b1a01522a433" containerID="317b23e27f40bbb09ae787150a0271079f27c6cb040cf3396dda956d11aaa867" exitCode=0 Dec 12 15:25:37 crc kubenswrapper[5123]: I1212 15:25:37.149898 5123 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4bq89" event={"ID":"6e5c0f85-f2ac-41a0-b733-b1a01522a433","Type":"ContainerDied","Data":"317b23e27f40bbb09ae787150a0271079f27c6cb040cf3396dda956d11aaa867"} Dec 12 15:25:37 crc kubenswrapper[5123]: I1212 15:25:37.521540 5123 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-shpfq"] Dec 12 15:25:37 crc kubenswrapper[5123]: W1212 15:25:37.535838 5123 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod83632690_6a7a_4f66_9f4b_a7fd6f11d996.slice/crio-61829020661a7af14717ace53b91adad6ed5fa53f009c07c62d5c8359634d4dc WatchSource:0}: Error finding container 61829020661a7af14717ace53b91adad6ed5fa53f009c07c62d5c8359634d4dc: Status 404 returned error can't find the container with id 61829020661a7af14717ace53b91adad6ed5fa53f009c07c62d5c8359634d4dc Dec 12 15:25:38 crc kubenswrapper[5123]: I1212 15:25:38.160663 5123 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4bq89" event={"ID":"6e5c0f85-f2ac-41a0-b733-b1a01522a433","Type":"ContainerStarted","Data":"454723048c3e10ebe62342247d7b75cfa9d8ffca95ca7ab97e45d25acc673281"} Dec 12 15:25:38 crc kubenswrapper[5123]: I1212 15:25:38.165674 5123 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-d46lq" event={"ID":"d9dbf7b6-6aed-452d-8398-d8d688899061","Type":"ContainerStarted","Data":"70441921a44f09ac242b6e473c4812ad71fdbf81bb2efabf28915ce534fd3f11"} Dec 12 15:25:38 crc kubenswrapper[5123]: I1212 15:25:38.168610 5123 generic.go:358] "Generic (PLEG): container finished" 
podID="83632690-6a7a-4f66-9f4b-a7fd6f11d996" containerID="6669d831b43cb31fd2ceedc5c442d36a1b489105674e3c1935f992c290c081d7" exitCode=0 Dec 12 15:25:38 crc kubenswrapper[5123]: I1212 15:25:38.168756 5123 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-shpfq" event={"ID":"83632690-6a7a-4f66-9f4b-a7fd6f11d996","Type":"ContainerDied","Data":"6669d831b43cb31fd2ceedc5c442d36a1b489105674e3c1935f992c290c081d7"} Dec 12 15:25:38 crc kubenswrapper[5123]: I1212 15:25:38.168807 5123 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-shpfq" event={"ID":"83632690-6a7a-4f66-9f4b-a7fd6f11d996","Type":"ContainerStarted","Data":"61829020661a7af14717ace53b91adad6ed5fa53f009c07c62d5c8359634d4dc"} Dec 12 15:25:38 crc kubenswrapper[5123]: I1212 15:25:38.172169 5123 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-28t42" event={"ID":"a7e2845d-76f0-49d6-8489-f7f8302e005c","Type":"ContainerStarted","Data":"7360779d41b1446ddeb721d942e9cf77dff3bbf7cd3e5036c3fe0660b55fad60"} Dec 12 15:25:38 crc kubenswrapper[5123]: I1212 15:25:38.188040 5123 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-4bq89" podStartSLOduration=4.326246422 podStartE2EDuration="5.18800846s" podCreationTimestamp="2025-12-12 15:25:33 +0000 UTC" firstStartedPulling="2025-12-12 15:25:35.002847741 +0000 UTC m=+363.812800262" lastFinishedPulling="2025-12-12 15:25:35.864609789 +0000 UTC m=+364.674562300" observedRunningTime="2025-12-12 15:25:38.184259862 +0000 UTC m=+366.994212393" watchObservedRunningTime="2025-12-12 15:25:38.18800846 +0000 UTC m=+366.997960991" Dec 12 15:25:38 crc kubenswrapper[5123]: I1212 15:25:38.238725 5123 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-d46lq" podStartSLOduration=4.404278441 podStartE2EDuration="5.238703324s" 
podCreationTimestamp="2025-12-12 15:25:33 +0000 UTC" firstStartedPulling="2025-12-12 15:25:34.975838067 +0000 UTC m=+363.785790568" lastFinishedPulling="2025-12-12 15:25:35.81026294 +0000 UTC m=+364.620215451" observedRunningTime="2025-12-12 15:25:38.237154015 +0000 UTC m=+367.047106536" watchObservedRunningTime="2025-12-12 15:25:38.238703324 +0000 UTC m=+367.048655835" Dec 12 15:25:39 crc kubenswrapper[5123]: I1212 15:25:39.181745 5123 generic.go:358] "Generic (PLEG): container finished" podID="a7e2845d-76f0-49d6-8489-f7f8302e005c" containerID="7360779d41b1446ddeb721d942e9cf77dff3bbf7cd3e5036c3fe0660b55fad60" exitCode=0 Dec 12 15:25:39 crc kubenswrapper[5123]: I1212 15:25:39.181850 5123 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-28t42" event={"ID":"a7e2845d-76f0-49d6-8489-f7f8302e005c","Type":"ContainerDied","Data":"7360779d41b1446ddeb721d942e9cf77dff3bbf7cd3e5036c3fe0660b55fad60"} Dec 12 15:25:40 crc kubenswrapper[5123]: I1212 15:25:40.193296 5123 generic.go:358] "Generic (PLEG): container finished" podID="83632690-6a7a-4f66-9f4b-a7fd6f11d996" containerID="97bcef3d2eebfa4dab3fbef6de034d316aae8acd03756d4200972d3956bf7045" exitCode=0 Dec 12 15:25:40 crc kubenswrapper[5123]: I1212 15:25:40.193414 5123 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-shpfq" event={"ID":"83632690-6a7a-4f66-9f4b-a7fd6f11d996","Type":"ContainerDied","Data":"97bcef3d2eebfa4dab3fbef6de034d316aae8acd03756d4200972d3956bf7045"} Dec 12 15:25:40 crc kubenswrapper[5123]: I1212 15:25:40.197530 5123 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-28t42" event={"ID":"a7e2845d-76f0-49d6-8489-f7f8302e005c","Type":"ContainerStarted","Data":"1d2ef21e3dccfddf8bef236fdee99aafd528025f1f6ad3f4b57fca901b4306ea"} Dec 12 15:25:40 crc kubenswrapper[5123]: I1212 15:25:40.256494 5123 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-marketplace/redhat-operators-28t42" podStartSLOduration=4.609817164 podStartE2EDuration="5.256472458s" podCreationTimestamp="2025-12-12 15:25:35 +0000 UTC" firstStartedPulling="2025-12-12 15:25:37.133617279 +0000 UTC m=+365.943569790" lastFinishedPulling="2025-12-12 15:25:37.780272573 +0000 UTC m=+366.590225084" observedRunningTime="2025-12-12 15:25:40.251329615 +0000 UTC m=+369.061282126" watchObservedRunningTime="2025-12-12 15:25:40.256472458 +0000 UTC m=+369.066424969" Dec 12 15:25:41 crc kubenswrapper[5123]: I1212 15:25:41.207926 5123 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-shpfq" event={"ID":"83632690-6a7a-4f66-9f4b-a7fd6f11d996","Type":"ContainerStarted","Data":"2aac77ec682371d8ab53f9ddcd2e063b057dbb185bdaf94156440a5c278434f3"} Dec 12 15:25:41 crc kubenswrapper[5123]: I1212 15:25:41.233666 5123 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-shpfq" podStartSLOduration=4.211571858 podStartE2EDuration="5.233641457s" podCreationTimestamp="2025-12-12 15:25:36 +0000 UTC" firstStartedPulling="2025-12-12 15:25:38.170258989 +0000 UTC m=+366.980211500" lastFinishedPulling="2025-12-12 15:25:39.192328588 +0000 UTC m=+368.002281099" observedRunningTime="2025-12-12 15:25:41.232818811 +0000 UTC m=+370.042771342" watchObservedRunningTime="2025-12-12 15:25:41.233641457 +0000 UTC m=+370.043593968" Dec 12 15:25:43 crc kubenswrapper[5123]: I1212 15:25:43.909569 5123 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/redhat-marketplace-d46lq" Dec 12 15:25:43 crc kubenswrapper[5123]: I1212 15:25:43.909998 5123 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-d46lq" Dec 12 15:25:43 crc kubenswrapper[5123]: I1212 15:25:43.910359 5123 kubelet.go:2553] "SyncLoop DELETE" source="api" 
pods=["openshift-controller-manager/controller-manager-5b78d494cf-zngx4"] Dec 12 15:25:43 crc kubenswrapper[5123]: I1212 15:25:43.910744 5123 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-5b78d494cf-zngx4" podUID="28ff21f0-0aaf-4e0c-ae45-e7fe22adc4ca" containerName="controller-manager" containerID="cri-o://ba9596176718c2202281763c8fb51e6a84270ccbd4a3c27c30781c48c905f4a3" gracePeriod=30 Dec 12 15:25:43 crc kubenswrapper[5123]: I1212 15:25:43.930619 5123 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5867759586-bgtt8"] Dec 12 15:25:43 crc kubenswrapper[5123]: I1212 15:25:43.931065 5123 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-5867759586-bgtt8" podUID="e97bf728-e15a-4bad-9889-9a19f5847ef9" containerName="route-controller-manager" containerID="cri-o://e76b51b6203567410e1b8f1476c5625491fc8a08986ca0e57c2f2b8ee90fc220" gracePeriod=30 Dec 12 15:25:43 crc kubenswrapper[5123]: I1212 15:25:43.987760 5123 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-d46lq" Dec 12 15:25:44 crc kubenswrapper[5123]: I1212 15:25:44.293033 5123 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-d46lq" Dec 12 15:25:44 crc kubenswrapper[5123]: I1212 15:25:44.318497 5123 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/certified-operators-4bq89" Dec 12 15:25:44 crc kubenswrapper[5123]: I1212 15:25:44.319665 5123 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-4bq89" Dec 12 15:25:44 crc kubenswrapper[5123]: I1212 15:25:44.375342 5123 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" 
pod="openshift-marketplace/certified-operators-4bq89" Dec 12 15:25:45 crc kubenswrapper[5123]: I1212 15:25:45.257450 5123 generic.go:358] "Generic (PLEG): container finished" podID="28ff21f0-0aaf-4e0c-ae45-e7fe22adc4ca" containerID="ba9596176718c2202281763c8fb51e6a84270ccbd4a3c27c30781c48c905f4a3" exitCode=0 Dec 12 15:25:45 crc kubenswrapper[5123]: I1212 15:25:45.257551 5123 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5b78d494cf-zngx4" event={"ID":"28ff21f0-0aaf-4e0c-ae45-e7fe22adc4ca","Type":"ContainerDied","Data":"ba9596176718c2202281763c8fb51e6a84270ccbd4a3c27c30781c48c905f4a3"} Dec 12 15:25:45 crc kubenswrapper[5123]: I1212 15:25:45.263758 5123 generic.go:358] "Generic (PLEG): container finished" podID="e97bf728-e15a-4bad-9889-9a19f5847ef9" containerID="e76b51b6203567410e1b8f1476c5625491fc8a08986ca0e57c2f2b8ee90fc220" exitCode=0 Dec 12 15:25:45 crc kubenswrapper[5123]: I1212 15:25:45.263920 5123 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-5867759586-bgtt8" event={"ID":"e97bf728-e15a-4bad-9889-9a19f5847ef9","Type":"ContainerDied","Data":"e76b51b6203567410e1b8f1476c5625491fc8a08986ca0e57c2f2b8ee90fc220"} Dec 12 15:25:45 crc kubenswrapper[5123]: I1212 15:25:45.600179 5123 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-4bq89" Dec 12 15:25:46 crc kubenswrapper[5123]: I1212 15:25:46.052242 5123 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-28t42" Dec 12 15:25:46 crc kubenswrapper[5123]: I1212 15:25:46.052719 5123 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/redhat-operators-28t42" Dec 12 15:25:46 crc kubenswrapper[5123]: I1212 15:25:46.098310 5123 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" 
pod="openshift-marketplace/redhat-operators-28t42" Dec 12 15:25:46 crc kubenswrapper[5123]: I1212 15:25:46.162460 5123 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-5b78d494cf-zngx4" Dec 12 15:25:46 crc kubenswrapper[5123]: I1212 15:25:46.172330 5123 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5867759586-bgtt8" Dec 12 15:25:46 crc kubenswrapper[5123]: I1212 15:25:46.195493 5123 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-55b74dd4d7-pj7wc"] Dec 12 15:25:46 crc kubenswrapper[5123]: I1212 15:25:46.196712 5123 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="28ff21f0-0aaf-4e0c-ae45-e7fe22adc4ca" containerName="controller-manager" Dec 12 15:25:46 crc kubenswrapper[5123]: I1212 15:25:46.196849 5123 state_mem.go:107] "Deleted CPUSet assignment" podUID="28ff21f0-0aaf-4e0c-ae45-e7fe22adc4ca" containerName="controller-manager" Dec 12 15:25:46 crc kubenswrapper[5123]: I1212 15:25:46.196966 5123 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="e97bf728-e15a-4bad-9889-9a19f5847ef9" containerName="route-controller-manager" Dec 12 15:25:46 crc kubenswrapper[5123]: I1212 15:25:46.197057 5123 state_mem.go:107] "Deleted CPUSet assignment" podUID="e97bf728-e15a-4bad-9889-9a19f5847ef9" containerName="route-controller-manager" Dec 12 15:25:46 crc kubenswrapper[5123]: I1212 15:25:46.197321 5123 memory_manager.go:356] "RemoveStaleState removing state" podUID="28ff21f0-0aaf-4e0c-ae45-e7fe22adc4ca" containerName="controller-manager" Dec 12 15:25:46 crc kubenswrapper[5123]: I1212 15:25:46.197463 5123 memory_manager.go:356] "RemoveStaleState removing state" podUID="e97bf728-e15a-4bad-9889-9a19f5847ef9" containerName="route-controller-manager" Dec 12 15:25:46 crc kubenswrapper[5123]: I1212 
15:25:46.202803 5123 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-55b74dd4d7-pj7wc" Dec 12 15:25:46 crc kubenswrapper[5123]: I1212 15:25:46.209532 5123 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-55b74dd4d7-pj7wc"] Dec 12 15:25:46 crc kubenswrapper[5123]: I1212 15:25:46.238178 5123 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-55d8c8c8d8-vhwq8"] Dec 12 15:25:46 crc kubenswrapper[5123]: I1212 15:25:46.250289 5123 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-55d8c8c8d8-vhwq8" Dec 12 15:25:46 crc kubenswrapper[5123]: I1212 15:25:46.252529 5123 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-55d8c8c8d8-vhwq8"] Dec 12 15:25:46 crc kubenswrapper[5123]: I1212 15:25:46.263589 5123 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/29f3156a-f22e-422c-96a4-1043a89c512a-tmp\") pod \"controller-manager-55b74dd4d7-pj7wc\" (UID: \"29f3156a-f22e-422c-96a4-1043a89c512a\") " pod="openshift-controller-manager/controller-manager-55b74dd4d7-pj7wc" Dec 12 15:25:46 crc kubenswrapper[5123]: I1212 15:25:46.265100 5123 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/29f3156a-f22e-422c-96a4-1043a89c512a-config\") pod \"controller-manager-55b74dd4d7-pj7wc\" (UID: \"29f3156a-f22e-422c-96a4-1043a89c512a\") " pod="openshift-controller-manager/controller-manager-55b74dd4d7-pj7wc" Dec 12 15:25:46 crc kubenswrapper[5123]: I1212 15:25:46.265350 5123 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/29f3156a-f22e-422c-96a4-1043a89c512a-proxy-ca-bundles\") pod \"controller-manager-55b74dd4d7-pj7wc\" (UID: \"29f3156a-f22e-422c-96a4-1043a89c512a\") " pod="openshift-controller-manager/controller-manager-55b74dd4d7-pj7wc" Dec 12 15:25:46 crc kubenswrapper[5123]: I1212 15:25:46.265727 5123 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s9rj6\" (UniqueName: \"kubernetes.io/projected/29f3156a-f22e-422c-96a4-1043a89c512a-kube-api-access-s9rj6\") pod \"controller-manager-55b74dd4d7-pj7wc\" (UID: \"29f3156a-f22e-422c-96a4-1043a89c512a\") " pod="openshift-controller-manager/controller-manager-55b74dd4d7-pj7wc" Dec 12 15:25:46 crc kubenswrapper[5123]: I1212 15:25:46.266019 5123 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/29f3156a-f22e-422c-96a4-1043a89c512a-serving-cert\") pod \"controller-manager-55b74dd4d7-pj7wc\" (UID: \"29f3156a-f22e-422c-96a4-1043a89c512a\") " pod="openshift-controller-manager/controller-manager-55b74dd4d7-pj7wc" Dec 12 15:25:46 crc kubenswrapper[5123]: I1212 15:25:46.266149 5123 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/29f3156a-f22e-422c-96a4-1043a89c512a-client-ca\") pod \"controller-manager-55b74dd4d7-pj7wc\" (UID: \"29f3156a-f22e-422c-96a4-1043a89c512a\") " pod="openshift-controller-manager/controller-manager-55b74dd4d7-pj7wc" Dec 12 15:25:46 crc kubenswrapper[5123]: I1212 15:25:46.278045 5123 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5b78d494cf-zngx4" event={"ID":"28ff21f0-0aaf-4e0c-ae45-e7fe22adc4ca","Type":"ContainerDied","Data":"c4fc85e90e6dd0a8e8a5497d7a867a0924c0af6607d545c71f613ef6e03f8868"} Dec 12 15:25:46 crc kubenswrapper[5123]: I1212 
15:25:46.278373 5123 scope.go:117] "RemoveContainer" containerID="ba9596176718c2202281763c8fb51e6a84270ccbd4a3c27c30781c48c905f4a3" Dec 12 15:25:46 crc kubenswrapper[5123]: I1212 15:25:46.278758 5123 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-5b78d494cf-zngx4" Dec 12 15:25:46 crc kubenswrapper[5123]: I1212 15:25:46.294935 5123 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5867759586-bgtt8" Dec 12 15:25:46 crc kubenswrapper[5123]: I1212 15:25:46.295112 5123 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-5867759586-bgtt8" event={"ID":"e97bf728-e15a-4bad-9889-9a19f5847ef9","Type":"ContainerDied","Data":"13272fee4958cbe41fc4e212858b1b73f3877583a8feadc89114568d511f8279"} Dec 12 15:25:46 crc kubenswrapper[5123]: I1212 15:25:46.308929 5123 scope.go:117] "RemoveContainer" containerID="e76b51b6203567410e1b8f1476c5625491fc8a08986ca0e57c2f2b8ee90fc220" Dec 12 15:25:46 crc kubenswrapper[5123]: I1212 15:25:46.353196 5123 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-28t42" Dec 12 15:25:46 crc kubenswrapper[5123]: I1212 15:25:46.367555 5123 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tzvxw\" (UniqueName: \"kubernetes.io/projected/28ff21f0-0aaf-4e0c-ae45-e7fe22adc4ca-kube-api-access-tzvxw\") pod \"28ff21f0-0aaf-4e0c-ae45-e7fe22adc4ca\" (UID: \"28ff21f0-0aaf-4e0c-ae45-e7fe22adc4ca\") " Dec 12 15:25:46 crc kubenswrapper[5123]: I1212 15:25:46.367664 5123 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/28ff21f0-0aaf-4e0c-ae45-e7fe22adc4ca-serving-cert\") pod \"28ff21f0-0aaf-4e0c-ae45-e7fe22adc4ca\" (UID: 
\"28ff21f0-0aaf-4e0c-ae45-e7fe22adc4ca\") " Dec 12 15:25:46 crc kubenswrapper[5123]: I1212 15:25:46.370102 5123 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/28ff21f0-0aaf-4e0c-ae45-e7fe22adc4ca-client-ca\") pod \"28ff21f0-0aaf-4e0c-ae45-e7fe22adc4ca\" (UID: \"28ff21f0-0aaf-4e0c-ae45-e7fe22adc4ca\") " Dec 12 15:25:46 crc kubenswrapper[5123]: I1212 15:25:46.370269 5123 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/e97bf728-e15a-4bad-9889-9a19f5847ef9-client-ca\") pod \"e97bf728-e15a-4bad-9889-9a19f5847ef9\" (UID: \"e97bf728-e15a-4bad-9889-9a19f5847ef9\") " Dec 12 15:25:46 crc kubenswrapper[5123]: I1212 15:25:46.370375 5123 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e97bf728-e15a-4bad-9889-9a19f5847ef9-serving-cert\") pod \"e97bf728-e15a-4bad-9889-9a19f5847ef9\" (UID: \"e97bf728-e15a-4bad-9889-9a19f5847ef9\") " Dec 12 15:25:46 crc kubenswrapper[5123]: I1212 15:25:46.370471 5123 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/28ff21f0-0aaf-4e0c-ae45-e7fe22adc4ca-tmp\") pod \"28ff21f0-0aaf-4e0c-ae45-e7fe22adc4ca\" (UID: \"28ff21f0-0aaf-4e0c-ae45-e7fe22adc4ca\") " Dec 12 15:25:46 crc kubenswrapper[5123]: I1212 15:25:46.370551 5123 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/e97bf728-e15a-4bad-9889-9a19f5847ef9-tmp\") pod \"e97bf728-e15a-4bad-9889-9a19f5847ef9\" (UID: \"e97bf728-e15a-4bad-9889-9a19f5847ef9\") " Dec 12 15:25:46 crc kubenswrapper[5123]: I1212 15:25:46.370715 5123 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4hjn2\" (UniqueName: 
\"kubernetes.io/projected/e97bf728-e15a-4bad-9889-9a19f5847ef9-kube-api-access-4hjn2\") pod \"e97bf728-e15a-4bad-9889-9a19f5847ef9\" (UID: \"e97bf728-e15a-4bad-9889-9a19f5847ef9\") " Dec 12 15:25:46 crc kubenswrapper[5123]: I1212 15:25:46.370850 5123 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/28ff21f0-0aaf-4e0c-ae45-e7fe22adc4ca-config\") pod \"28ff21f0-0aaf-4e0c-ae45-e7fe22adc4ca\" (UID: \"28ff21f0-0aaf-4e0c-ae45-e7fe22adc4ca\") " Dec 12 15:25:46 crc kubenswrapper[5123]: I1212 15:25:46.370943 5123 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/28ff21f0-0aaf-4e0c-ae45-e7fe22adc4ca-proxy-ca-bundles\") pod \"28ff21f0-0aaf-4e0c-ae45-e7fe22adc4ca\" (UID: \"28ff21f0-0aaf-4e0c-ae45-e7fe22adc4ca\") " Dec 12 15:25:46 crc kubenswrapper[5123]: I1212 15:25:46.371148 5123 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e97bf728-e15a-4bad-9889-9a19f5847ef9-config\") pod \"e97bf728-e15a-4bad-9889-9a19f5847ef9\" (UID: \"e97bf728-e15a-4bad-9889-9a19f5847ef9\") " Dec 12 15:25:46 crc kubenswrapper[5123]: I1212 15:25:46.371965 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/29f3156a-f22e-422c-96a4-1043a89c512a-client-ca\") pod \"controller-manager-55b74dd4d7-pj7wc\" (UID: \"29f3156a-f22e-422c-96a4-1043a89c512a\") " pod="openshift-controller-manager/controller-manager-55b74dd4d7-pj7wc" Dec 12 15:25:46 crc kubenswrapper[5123]: I1212 15:25:46.372131 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/29f3156a-f22e-422c-96a4-1043a89c512a-tmp\") pod \"controller-manager-55b74dd4d7-pj7wc\" (UID: \"29f3156a-f22e-422c-96a4-1043a89c512a\") " 
pod="openshift-controller-manager/controller-manager-55b74dd4d7-pj7wc" Dec 12 15:25:46 crc kubenswrapper[5123]: I1212 15:25:46.372190 5123 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wcj27\" (UniqueName: \"kubernetes.io/projected/a26ea036-950a-4aab-9164-16424bc02298-kube-api-access-wcj27\") pod \"route-controller-manager-55d8c8c8d8-vhwq8\" (UID: \"a26ea036-950a-4aab-9164-16424bc02298\") " pod="openshift-route-controller-manager/route-controller-manager-55d8c8c8d8-vhwq8" Dec 12 15:25:46 crc kubenswrapper[5123]: I1212 15:25:46.377337 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/29f3156a-f22e-422c-96a4-1043a89c512a-config\") pod \"controller-manager-55b74dd4d7-pj7wc\" (UID: \"29f3156a-f22e-422c-96a4-1043a89c512a\") " pod="openshift-controller-manager/controller-manager-55b74dd4d7-pj7wc" Dec 12 15:25:46 crc kubenswrapper[5123]: I1212 15:25:46.377502 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/29f3156a-f22e-422c-96a4-1043a89c512a-proxy-ca-bundles\") pod \"controller-manager-55b74dd4d7-pj7wc\" (UID: \"29f3156a-f22e-422c-96a4-1043a89c512a\") " pod="openshift-controller-manager/controller-manager-55b74dd4d7-pj7wc" Dec 12 15:25:46 crc kubenswrapper[5123]: I1212 15:25:46.377672 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-s9rj6\" (UniqueName: \"kubernetes.io/projected/29f3156a-f22e-422c-96a4-1043a89c512a-kube-api-access-s9rj6\") pod \"controller-manager-55b74dd4d7-pj7wc\" (UID: \"29f3156a-f22e-422c-96a4-1043a89c512a\") " pod="openshift-controller-manager/controller-manager-55b74dd4d7-pj7wc" Dec 12 15:25:46 crc kubenswrapper[5123]: I1212 15:25:46.377800 5123 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" 
(UniqueName: \"kubernetes.io/configmap/a26ea036-950a-4aab-9164-16424bc02298-client-ca\") pod \"route-controller-manager-55d8c8c8d8-vhwq8\" (UID: \"a26ea036-950a-4aab-9164-16424bc02298\") " pod="openshift-route-controller-manager/route-controller-manager-55d8c8c8d8-vhwq8" Dec 12 15:25:46 crc kubenswrapper[5123]: I1212 15:25:46.377880 5123 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a26ea036-950a-4aab-9164-16424bc02298-serving-cert\") pod \"route-controller-manager-55d8c8c8d8-vhwq8\" (UID: \"a26ea036-950a-4aab-9164-16424bc02298\") " pod="openshift-route-controller-manager/route-controller-manager-55d8c8c8d8-vhwq8" Dec 12 15:25:46 crc kubenswrapper[5123]: I1212 15:25:46.378043 5123 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a26ea036-950a-4aab-9164-16424bc02298-config\") pod \"route-controller-manager-55d8c8c8d8-vhwq8\" (UID: \"a26ea036-950a-4aab-9164-16424bc02298\") " pod="openshift-route-controller-manager/route-controller-manager-55d8c8c8d8-vhwq8" Dec 12 15:25:46 crc kubenswrapper[5123]: I1212 15:25:46.378145 5123 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/a26ea036-950a-4aab-9164-16424bc02298-tmp\") pod \"route-controller-manager-55d8c8c8d8-vhwq8\" (UID: \"a26ea036-950a-4aab-9164-16424bc02298\") " pod="openshift-route-controller-manager/route-controller-manager-55d8c8c8d8-vhwq8" Dec 12 15:25:46 crc kubenswrapper[5123]: I1212 15:25:46.378403 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/29f3156a-f22e-422c-96a4-1043a89c512a-serving-cert\") pod \"controller-manager-55b74dd4d7-pj7wc\" (UID: \"29f3156a-f22e-422c-96a4-1043a89c512a\") " 
pod="openshift-controller-manager/controller-manager-55b74dd4d7-pj7wc" Dec 12 15:25:46 crc kubenswrapper[5123]: I1212 15:25:46.379021 5123 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/28ff21f0-0aaf-4e0c-ae45-e7fe22adc4ca-config" (OuterVolumeSpecName: "config") pod "28ff21f0-0aaf-4e0c-ae45-e7fe22adc4ca" (UID: "28ff21f0-0aaf-4e0c-ae45-e7fe22adc4ca"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 12 15:25:46 crc kubenswrapper[5123]: I1212 15:25:46.379769 5123 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/28ff21f0-0aaf-4e0c-ae45-e7fe22adc4ca-client-ca" (OuterVolumeSpecName: "client-ca") pod "28ff21f0-0aaf-4e0c-ae45-e7fe22adc4ca" (UID: "28ff21f0-0aaf-4e0c-ae45-e7fe22adc4ca"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 12 15:25:46 crc kubenswrapper[5123]: I1212 15:25:46.380838 5123 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e97bf728-e15a-4bad-9889-9a19f5847ef9-client-ca" (OuterVolumeSpecName: "client-ca") pod "e97bf728-e15a-4bad-9889-9a19f5847ef9" (UID: "e97bf728-e15a-4bad-9889-9a19f5847ef9"). InnerVolumeSpecName "client-ca". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 12 15:25:46 crc kubenswrapper[5123]: I1212 15:25:46.383934 5123 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/29f3156a-f22e-422c-96a4-1043a89c512a-tmp\") pod \"controller-manager-55b74dd4d7-pj7wc\" (UID: \"29f3156a-f22e-422c-96a4-1043a89c512a\") " pod="openshift-controller-manager/controller-manager-55b74dd4d7-pj7wc" Dec 12 15:25:46 crc kubenswrapper[5123]: I1212 15:25:46.385546 5123 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/28ff21f0-0aaf-4e0c-ae45-e7fe22adc4ca-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "28ff21f0-0aaf-4e0c-ae45-e7fe22adc4ca" (UID: "28ff21f0-0aaf-4e0c-ae45-e7fe22adc4ca"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 12 15:25:46 crc kubenswrapper[5123]: I1212 15:25:46.385807 5123 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e97bf728-e15a-4bad-9889-9a19f5847ef9-tmp" (OuterVolumeSpecName: "tmp") pod "e97bf728-e15a-4bad-9889-9a19f5847ef9" (UID: "e97bf728-e15a-4bad-9889-9a19f5847ef9"). InnerVolumeSpecName "tmp". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 12 15:25:46 crc kubenswrapper[5123]: I1212 15:25:46.386059 5123 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/29f3156a-f22e-422c-96a4-1043a89c512a-client-ca\") pod \"controller-manager-55b74dd4d7-pj7wc\" (UID: \"29f3156a-f22e-422c-96a4-1043a89c512a\") " pod="openshift-controller-manager/controller-manager-55b74dd4d7-pj7wc" Dec 12 15:25:46 crc kubenswrapper[5123]: I1212 15:25:46.386410 5123 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/28ff21f0-0aaf-4e0c-ae45-e7fe22adc4ca-tmp" (OuterVolumeSpecName: "tmp") pod "28ff21f0-0aaf-4e0c-ae45-e7fe22adc4ca" (UID: "28ff21f0-0aaf-4e0c-ae45-e7fe22adc4ca"). InnerVolumeSpecName "tmp". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 12 15:25:46 crc kubenswrapper[5123]: I1212 15:25:46.387802 5123 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e97bf728-e15a-4bad-9889-9a19f5847ef9-config" (OuterVolumeSpecName: "config") pod "e97bf728-e15a-4bad-9889-9a19f5847ef9" (UID: "e97bf728-e15a-4bad-9889-9a19f5847ef9"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 12 15:25:46 crc kubenswrapper[5123]: I1212 15:25:46.390056 5123 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/29f3156a-f22e-422c-96a4-1043a89c512a-proxy-ca-bundles\") pod \"controller-manager-55b74dd4d7-pj7wc\" (UID: \"29f3156a-f22e-422c-96a4-1043a89c512a\") " pod="openshift-controller-manager/controller-manager-55b74dd4d7-pj7wc" Dec 12 15:25:46 crc kubenswrapper[5123]: I1212 15:25:46.390855 5123 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/29f3156a-f22e-422c-96a4-1043a89c512a-config\") pod \"controller-manager-55b74dd4d7-pj7wc\" (UID: \"29f3156a-f22e-422c-96a4-1043a89c512a\") " pod="openshift-controller-manager/controller-manager-55b74dd4d7-pj7wc" Dec 12 15:25:46 crc kubenswrapper[5123]: I1212 15:25:46.667487 5123 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/28ff21f0-0aaf-4e0c-ae45-e7fe22adc4ca-kube-api-access-tzvxw" (OuterVolumeSpecName: "kube-api-access-tzvxw") pod "28ff21f0-0aaf-4e0c-ae45-e7fe22adc4ca" (UID: "28ff21f0-0aaf-4e0c-ae45-e7fe22adc4ca"). InnerVolumeSpecName "kube-api-access-tzvxw". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 12 15:25:46 crc kubenswrapper[5123]: I1212 15:25:46.668331 5123 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/community-operators-shpfq" Dec 12 15:25:46 crc kubenswrapper[5123]: I1212 15:25:46.669555 5123 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-shpfq" Dec 12 15:25:46 crc kubenswrapper[5123]: I1212 15:25:46.674509 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a26ea036-950a-4aab-9164-16424bc02298-config\") pod \"route-controller-manager-55d8c8c8d8-vhwq8\" (UID: \"a26ea036-950a-4aab-9164-16424bc02298\") " pod="openshift-route-controller-manager/route-controller-manager-55d8c8c8d8-vhwq8" Dec 12 15:25:46 crc kubenswrapper[5123]: I1212 15:25:46.674563 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/a26ea036-950a-4aab-9164-16424bc02298-tmp\") pod \"route-controller-manager-55d8c8c8d8-vhwq8\" (UID: \"a26ea036-950a-4aab-9164-16424bc02298\") " pod="openshift-route-controller-manager/route-controller-manager-55d8c8c8d8-vhwq8" Dec 12 15:25:46 crc kubenswrapper[5123]: I1212 15:25:46.674689 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-wcj27\" (UniqueName: \"kubernetes.io/projected/a26ea036-950a-4aab-9164-16424bc02298-kube-api-access-wcj27\") pod \"route-controller-manager-55d8c8c8d8-vhwq8\" (UID: \"a26ea036-950a-4aab-9164-16424bc02298\") " pod="openshift-route-controller-manager/route-controller-manager-55d8c8c8d8-vhwq8" Dec 12 15:25:46 crc kubenswrapper[5123]: I1212 15:25:46.674767 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a26ea036-950a-4aab-9164-16424bc02298-client-ca\") pod 
\"route-controller-manager-55d8c8c8d8-vhwq8\" (UID: \"a26ea036-950a-4aab-9164-16424bc02298\") " pod="openshift-route-controller-manager/route-controller-manager-55d8c8c8d8-vhwq8" Dec 12 15:25:46 crc kubenswrapper[5123]: I1212 15:25:46.674797 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a26ea036-950a-4aab-9164-16424bc02298-serving-cert\") pod \"route-controller-manager-55d8c8c8d8-vhwq8\" (UID: \"a26ea036-950a-4aab-9164-16424bc02298\") " pod="openshift-route-controller-manager/route-controller-manager-55d8c8c8d8-vhwq8" Dec 12 15:25:46 crc kubenswrapper[5123]: I1212 15:25:46.674858 5123 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e97bf728-e15a-4bad-9889-9a19f5847ef9-config\") on node \"crc\" DevicePath \"\"" Dec 12 15:25:46 crc kubenswrapper[5123]: I1212 15:25:46.674873 5123 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-tzvxw\" (UniqueName: \"kubernetes.io/projected/28ff21f0-0aaf-4e0c-ae45-e7fe22adc4ca-kube-api-access-tzvxw\") on node \"crc\" DevicePath \"\"" Dec 12 15:25:46 crc kubenswrapper[5123]: I1212 15:25:46.674890 5123 reconciler_common.go:299] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/28ff21f0-0aaf-4e0c-ae45-e7fe22adc4ca-client-ca\") on node \"crc\" DevicePath \"\"" Dec 12 15:25:46 crc kubenswrapper[5123]: I1212 15:25:46.674901 5123 reconciler_common.go:299] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/e97bf728-e15a-4bad-9889-9a19f5847ef9-client-ca\") on node \"crc\" DevicePath \"\"" Dec 12 15:25:46 crc kubenswrapper[5123]: I1212 15:25:46.674913 5123 reconciler_common.go:299] "Volume detached for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/28ff21f0-0aaf-4e0c-ae45-e7fe22adc4ca-tmp\") on node \"crc\" DevicePath \"\"" Dec 12 15:25:46 crc kubenswrapper[5123]: I1212 15:25:46.674926 5123 
reconciler_common.go:299] "Volume detached for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/e97bf728-e15a-4bad-9889-9a19f5847ef9-tmp\") on node \"crc\" DevicePath \"\"" Dec 12 15:25:46 crc kubenswrapper[5123]: I1212 15:25:46.674939 5123 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/28ff21f0-0aaf-4e0c-ae45-e7fe22adc4ca-config\") on node \"crc\" DevicePath \"\"" Dec 12 15:25:46 crc kubenswrapper[5123]: I1212 15:25:46.674950 5123 reconciler_common.go:299] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/28ff21f0-0aaf-4e0c-ae45-e7fe22adc4ca-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Dec 12 15:25:46 crc kubenswrapper[5123]: I1212 15:25:46.676627 5123 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e97bf728-e15a-4bad-9889-9a19f5847ef9-kube-api-access-4hjn2" (OuterVolumeSpecName: "kube-api-access-4hjn2") pod "e97bf728-e15a-4bad-9889-9a19f5847ef9" (UID: "e97bf728-e15a-4bad-9889-9a19f5847ef9"). InnerVolumeSpecName "kube-api-access-4hjn2". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 12 15:25:46 crc kubenswrapper[5123]: I1212 15:25:46.676744 5123 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/a26ea036-950a-4aab-9164-16424bc02298-tmp\") pod \"route-controller-manager-55d8c8c8d8-vhwq8\" (UID: \"a26ea036-950a-4aab-9164-16424bc02298\") " pod="openshift-route-controller-manager/route-controller-manager-55d8c8c8d8-vhwq8" Dec 12 15:25:46 crc kubenswrapper[5123]: I1212 15:25:46.677716 5123 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a26ea036-950a-4aab-9164-16424bc02298-config\") pod \"route-controller-manager-55d8c8c8d8-vhwq8\" (UID: \"a26ea036-950a-4aab-9164-16424bc02298\") " pod="openshift-route-controller-manager/route-controller-manager-55d8c8c8d8-vhwq8" Dec 12 15:25:46 crc kubenswrapper[5123]: I1212 15:25:46.677797 5123 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e97bf728-e15a-4bad-9889-9a19f5847ef9-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "e97bf728-e15a-4bad-9889-9a19f5847ef9" (UID: "e97bf728-e15a-4bad-9889-9a19f5847ef9"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 12 15:25:46 crc kubenswrapper[5123]: I1212 15:25:46.678655 5123 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a26ea036-950a-4aab-9164-16424bc02298-client-ca\") pod \"route-controller-manager-55d8c8c8d8-vhwq8\" (UID: \"a26ea036-950a-4aab-9164-16424bc02298\") " pod="openshift-route-controller-manager/route-controller-manager-55d8c8c8d8-vhwq8" Dec 12 15:25:46 crc kubenswrapper[5123]: I1212 15:25:46.679206 5123 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/29f3156a-f22e-422c-96a4-1043a89c512a-serving-cert\") pod \"controller-manager-55b74dd4d7-pj7wc\" (UID: \"29f3156a-f22e-422c-96a4-1043a89c512a\") " pod="openshift-controller-manager/controller-manager-55b74dd4d7-pj7wc" Dec 12 15:25:46 crc kubenswrapper[5123]: I1212 15:25:46.679415 5123 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/28ff21f0-0aaf-4e0c-ae45-e7fe22adc4ca-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "28ff21f0-0aaf-4e0c-ae45-e7fe22adc4ca" (UID: "28ff21f0-0aaf-4e0c-ae45-e7fe22adc4ca"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 12 15:25:46 crc kubenswrapper[5123]: I1212 15:25:46.682519 5123 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a26ea036-950a-4aab-9164-16424bc02298-serving-cert\") pod \"route-controller-manager-55d8c8c8d8-vhwq8\" (UID: \"a26ea036-950a-4aab-9164-16424bc02298\") " pod="openshift-route-controller-manager/route-controller-manager-55d8c8c8d8-vhwq8" Dec 12 15:25:46 crc kubenswrapper[5123]: I1212 15:25:46.734656 5123 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-s9rj6\" (UniqueName: \"kubernetes.io/projected/29f3156a-f22e-422c-96a4-1043a89c512a-kube-api-access-s9rj6\") pod \"controller-manager-55b74dd4d7-pj7wc\" (UID: \"29f3156a-f22e-422c-96a4-1043a89c512a\") " pod="openshift-controller-manager/controller-manager-55b74dd4d7-pj7wc" Dec 12 15:25:46 crc kubenswrapper[5123]: I1212 15:25:46.739790 5123 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-wcj27\" (UniqueName: \"kubernetes.io/projected/a26ea036-950a-4aab-9164-16424bc02298-kube-api-access-wcj27\") pod \"route-controller-manager-55d8c8c8d8-vhwq8\" (UID: \"a26ea036-950a-4aab-9164-16424bc02298\") " pod="openshift-route-controller-manager/route-controller-manager-55d8c8c8d8-vhwq8" Dec 12 15:25:46 crc kubenswrapper[5123]: I1212 15:25:46.740648 5123 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-shpfq" Dec 12 15:25:46 crc kubenswrapper[5123]: I1212 15:25:46.782164 5123 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-4hjn2\" (UniqueName: \"kubernetes.io/projected/e97bf728-e15a-4bad-9889-9a19f5847ef9-kube-api-access-4hjn2\") on node \"crc\" DevicePath \"\"" Dec 12 15:25:46 crc kubenswrapper[5123]: I1212 15:25:46.782280 5123 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/28ff21f0-0aaf-4e0c-ae45-e7fe22adc4ca-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 12 15:25:46 crc kubenswrapper[5123]: I1212 15:25:46.782298 5123 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e97bf728-e15a-4bad-9889-9a19f5847ef9-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 12 15:25:46 crc kubenswrapper[5123]: I1212 15:25:46.823575 5123 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-55b74dd4d7-pj7wc" Dec 12 15:25:46 crc kubenswrapper[5123]: I1212 15:25:46.870661 5123 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-55d8c8c8d8-vhwq8" Dec 12 15:25:46 crc kubenswrapper[5123]: I1212 15:25:46.987324 5123 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-5b78d494cf-zngx4"] Dec 12 15:25:47 crc kubenswrapper[5123]: I1212 15:25:47.000119 5123 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-5b78d494cf-zngx4"] Dec 12 15:25:47 crc kubenswrapper[5123]: I1212 15:25:47.068356 5123 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5867759586-bgtt8"] Dec 12 15:25:47 crc kubenswrapper[5123]: I1212 15:25:47.071177 5123 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5867759586-bgtt8"] Dec 12 15:25:47 crc kubenswrapper[5123]: I1212 15:25:47.271144 5123 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-55d8c8c8d8-vhwq8"] Dec 12 15:25:47 crc kubenswrapper[5123]: W1212 15:25:47.271495 5123 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda26ea036_950a_4aab_9164_16424bc02298.slice/crio-02f510912d1b630024bae66ca7b2e5af2eb7ee3e051e361cc9e410cb20030b29 WatchSource:0}: Error finding container 02f510912d1b630024bae66ca7b2e5af2eb7ee3e051e361cc9e410cb20030b29: Status 404 returned error can't find the container with id 02f510912d1b630024bae66ca7b2e5af2eb7ee3e051e361cc9e410cb20030b29 Dec 12 15:25:47 crc kubenswrapper[5123]: I1212 15:25:47.309004 5123 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-55d8c8c8d8-vhwq8" event={"ID":"a26ea036-950a-4aab-9164-16424bc02298","Type":"ContainerStarted","Data":"02f510912d1b630024bae66ca7b2e5af2eb7ee3e051e361cc9e410cb20030b29"} Dec 12 15:25:47 crc kubenswrapper[5123]: I1212 15:25:47.404828 5123 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-55b74dd4d7-pj7wc"] Dec 12 15:25:47 crc kubenswrapper[5123]: W1212 15:25:47.410446 5123 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod29f3156a_f22e_422c_96a4_1043a89c512a.slice/crio-2f82dc8e88b3db3d647945146e645421debe9d71fd837a631081d74a611c11eb WatchSource:0}: Error finding container 2f82dc8e88b3db3d647945146e645421debe9d71fd837a631081d74a611c11eb: Status 404 returned error can't find the container with id 2f82dc8e88b3db3d647945146e645421debe9d71fd837a631081d74a611c11eb Dec 12 15:25:47 crc kubenswrapper[5123]: I1212 15:25:47.807590 5123 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="28ff21f0-0aaf-4e0c-ae45-e7fe22adc4ca" path="/var/lib/kubelet/pods/28ff21f0-0aaf-4e0c-ae45-e7fe22adc4ca/volumes" Dec 12 15:25:47 crc kubenswrapper[5123]: I1212 15:25:47.808453 5123 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e97bf728-e15a-4bad-9889-9a19f5847ef9" path="/var/lib/kubelet/pods/e97bf728-e15a-4bad-9889-9a19f5847ef9/volumes" Dec 12 
15:25:47 crc kubenswrapper[5123]: I1212 15:25:47.873632 5123 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-shpfq" Dec 12 15:25:48 crc kubenswrapper[5123]: I1212 15:25:48.320631 5123 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-55d8c8c8d8-vhwq8" event={"ID":"a26ea036-950a-4aab-9164-16424bc02298","Type":"ContainerStarted","Data":"88247a1dd0020850cd4b8326d029f24720fb3b15a42cb33d36ead9c2cfb93129"} Dec 12 15:25:48 crc kubenswrapper[5123]: I1212 15:25:48.321112 5123 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-route-controller-manager/route-controller-manager-55d8c8c8d8-vhwq8" Dec 12 15:25:48 crc kubenswrapper[5123]: I1212 15:25:48.325389 5123 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-55b74dd4d7-pj7wc" event={"ID":"29f3156a-f22e-422c-96a4-1043a89c512a","Type":"ContainerStarted","Data":"f0ccd14f48649d09a1b6480744c30348e5e1d4f3335cbe9713e1e1486673c295"} Dec 12 15:25:48 crc kubenswrapper[5123]: I1212 15:25:48.325498 5123 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-55b74dd4d7-pj7wc" event={"ID":"29f3156a-f22e-422c-96a4-1043a89c512a","Type":"ContainerStarted","Data":"2f82dc8e88b3db3d647945146e645421debe9d71fd837a631081d74a611c11eb"} Dec 12 15:25:48 crc kubenswrapper[5123]: I1212 15:25:48.325779 5123 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-controller-manager/controller-manager-55b74dd4d7-pj7wc" Dec 12 15:25:48 crc kubenswrapper[5123]: I1212 15:25:48.346766 5123 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-55d8c8c8d8-vhwq8" podStartSLOduration=5.346737701 podStartE2EDuration="5.346737701s" podCreationTimestamp="2025-12-12 15:25:43 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 15:25:48.343201369 +0000 UTC m=+377.153153890" watchObservedRunningTime="2025-12-12 15:25:48.346737701 +0000 UTC m=+377.156690212" Dec 12 15:25:48 crc kubenswrapper[5123]: I1212 15:25:48.703687 5123 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-55b74dd4d7-pj7wc" Dec 12 15:25:48 crc kubenswrapper[5123]: I1212 15:25:48.728588 5123 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-55b74dd4d7-pj7wc" podStartSLOduration=5.728563418 podStartE2EDuration="5.728563418s" podCreationTimestamp="2025-12-12 15:25:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 15:25:48.380811429 +0000 UTC m=+377.190763940" watchObservedRunningTime="2025-12-12 15:25:48.728563418 +0000 UTC m=+377.538515919" Dec 12 15:25:48 crc kubenswrapper[5123]: I1212 15:25:48.823506 5123 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-55d8c8c8d8-vhwq8" Dec 12 15:25:58 crc kubenswrapper[5123]: I1212 15:25:58.126865 5123 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-image-registry/image-registry-66587d64c8-ts2mt" podUID="40d826be-27f2-49a2-afa5-a6a9cc8f9bf6" containerName="registry" containerID="cri-o://6185e29ddf1a26fa821be58425f349b04d6f0bcd319ae8026df7f81fed1e5e3f" gracePeriod=30 Dec 12 15:25:58 crc kubenswrapper[5123]: I1212 15:25:58.941212 5123 generic.go:358] "Generic (PLEG): container finished" podID="40d826be-27f2-49a2-afa5-a6a9cc8f9bf6" containerID="6185e29ddf1a26fa821be58425f349b04d6f0bcd319ae8026df7f81fed1e5e3f" exitCode=0 Dec 12 15:25:58 crc kubenswrapper[5123]: I1212 15:25:58.941268 5123 
kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66587d64c8-ts2mt" event={"ID":"40d826be-27f2-49a2-afa5-a6a9cc8f9bf6","Type":"ContainerDied","Data":"6185e29ddf1a26fa821be58425f349b04d6f0bcd319ae8026df7f81fed1e5e3f"} Dec 12 15:25:59 crc kubenswrapper[5123]: I1212 15:25:59.241343 5123 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-66587d64c8-ts2mt" Dec 12 15:25:59 crc kubenswrapper[5123]: I1212 15:25:59.263273 5123 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/40d826be-27f2-49a2-afa5-a6a9cc8f9bf6-registry-certificates\") pod \"40d826be-27f2-49a2-afa5-a6a9cc8f9bf6\" (UID: \"40d826be-27f2-49a2-afa5-a6a9cc8f9bf6\") " Dec 12 15:25:59 crc kubenswrapper[5123]: I1212 15:25:59.263430 5123 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/40d826be-27f2-49a2-afa5-a6a9cc8f9bf6-registry-tls\") pod \"40d826be-27f2-49a2-afa5-a6a9cc8f9bf6\" (UID: \"40d826be-27f2-49a2-afa5-a6a9cc8f9bf6\") " Dec 12 15:25:59 crc kubenswrapper[5123]: I1212 15:25:59.263476 5123 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hdwrs\" (UniqueName: \"kubernetes.io/projected/40d826be-27f2-49a2-afa5-a6a9cc8f9bf6-kube-api-access-hdwrs\") pod \"40d826be-27f2-49a2-afa5-a6a9cc8f9bf6\" (UID: \"40d826be-27f2-49a2-afa5-a6a9cc8f9bf6\") " Dec 12 15:25:59 crc kubenswrapper[5123]: I1212 15:25:59.263561 5123 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/40d826be-27f2-49a2-afa5-a6a9cc8f9bf6-bound-sa-token\") pod \"40d826be-27f2-49a2-afa5-a6a9cc8f9bf6\" (UID: \"40d826be-27f2-49a2-afa5-a6a9cc8f9bf6\") " Dec 12 15:25:59 crc kubenswrapper[5123]: I1212 15:25:59.263606 5123 
reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/40d826be-27f2-49a2-afa5-a6a9cc8f9bf6-ca-trust-extracted\") pod \"40d826be-27f2-49a2-afa5-a6a9cc8f9bf6\" (UID: \"40d826be-27f2-49a2-afa5-a6a9cc8f9bf6\") " Dec 12 15:25:59 crc kubenswrapper[5123]: I1212 15:25:59.263866 5123 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"registry-storage\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"40d826be-27f2-49a2-afa5-a6a9cc8f9bf6\" (UID: \"40d826be-27f2-49a2-afa5-a6a9cc8f9bf6\") " Dec 12 15:25:59 crc kubenswrapper[5123]: I1212 15:25:59.263904 5123 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/40d826be-27f2-49a2-afa5-a6a9cc8f9bf6-trusted-ca\") pod \"40d826be-27f2-49a2-afa5-a6a9cc8f9bf6\" (UID: \"40d826be-27f2-49a2-afa5-a6a9cc8f9bf6\") " Dec 12 15:25:59 crc kubenswrapper[5123]: I1212 15:25:59.263949 5123 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/40d826be-27f2-49a2-afa5-a6a9cc8f9bf6-installation-pull-secrets\") pod \"40d826be-27f2-49a2-afa5-a6a9cc8f9bf6\" (UID: \"40d826be-27f2-49a2-afa5-a6a9cc8f9bf6\") " Dec 12 15:25:59 crc kubenswrapper[5123]: I1212 15:25:59.264760 5123 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/40d826be-27f2-49a2-afa5-a6a9cc8f9bf6-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "40d826be-27f2-49a2-afa5-a6a9cc8f9bf6" (UID: "40d826be-27f2-49a2-afa5-a6a9cc8f9bf6"). InnerVolumeSpecName "registry-certificates". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 12 15:25:59 crc kubenswrapper[5123]: I1212 15:25:59.265449 5123 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/40d826be-27f2-49a2-afa5-a6a9cc8f9bf6-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "40d826be-27f2-49a2-afa5-a6a9cc8f9bf6" (UID: "40d826be-27f2-49a2-afa5-a6a9cc8f9bf6"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 12 15:25:59 crc kubenswrapper[5123]: I1212 15:25:59.273515 5123 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/40d826be-27f2-49a2-afa5-a6a9cc8f9bf6-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "40d826be-27f2-49a2-afa5-a6a9cc8f9bf6" (UID: "40d826be-27f2-49a2-afa5-a6a9cc8f9bf6"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 12 15:25:59 crc kubenswrapper[5123]: I1212 15:25:59.280552 5123 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/40d826be-27f2-49a2-afa5-a6a9cc8f9bf6-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "40d826be-27f2-49a2-afa5-a6a9cc8f9bf6" (UID: "40d826be-27f2-49a2-afa5-a6a9cc8f9bf6"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 12 15:25:59 crc kubenswrapper[5123]: I1212 15:25:59.281195 5123 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/40d826be-27f2-49a2-afa5-a6a9cc8f9bf6-kube-api-access-hdwrs" (OuterVolumeSpecName: "kube-api-access-hdwrs") pod "40d826be-27f2-49a2-afa5-a6a9cc8f9bf6" (UID: "40d826be-27f2-49a2-afa5-a6a9cc8f9bf6"). InnerVolumeSpecName "kube-api-access-hdwrs". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 12 15:25:59 crc kubenswrapper[5123]: I1212 15:25:59.285916 5123 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/40d826be-27f2-49a2-afa5-a6a9cc8f9bf6-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "40d826be-27f2-49a2-afa5-a6a9cc8f9bf6" (UID: "40d826be-27f2-49a2-afa5-a6a9cc8f9bf6"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 12 15:25:59 crc kubenswrapper[5123]: I1212 15:25:59.309782 5123 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (OuterVolumeSpecName: "registry-storage") pod "40d826be-27f2-49a2-afa5-a6a9cc8f9bf6" (UID: "40d826be-27f2-49a2-afa5-a6a9cc8f9bf6"). InnerVolumeSpecName "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2". PluginName "kubernetes.io/csi", VolumeGIDValue "" Dec 12 15:25:59 crc kubenswrapper[5123]: I1212 15:25:59.311450 5123 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/40d826be-27f2-49a2-afa5-a6a9cc8f9bf6-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "40d826be-27f2-49a2-afa5-a6a9cc8f9bf6" (UID: "40d826be-27f2-49a2-afa5-a6a9cc8f9bf6"). InnerVolumeSpecName "ca-trust-extracted". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 12 15:25:59 crc kubenswrapper[5123]: I1212 15:25:59.365346 5123 reconciler_common.go:299] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/40d826be-27f2-49a2-afa5-a6a9cc8f9bf6-registry-tls\") on node \"crc\" DevicePath \"\"" Dec 12 15:25:59 crc kubenswrapper[5123]: I1212 15:25:59.365395 5123 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-hdwrs\" (UniqueName: \"kubernetes.io/projected/40d826be-27f2-49a2-afa5-a6a9cc8f9bf6-kube-api-access-hdwrs\") on node \"crc\" DevicePath \"\"" Dec 12 15:25:59 crc kubenswrapper[5123]: I1212 15:25:59.365407 5123 reconciler_common.go:299] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/40d826be-27f2-49a2-afa5-a6a9cc8f9bf6-bound-sa-token\") on node \"crc\" DevicePath \"\"" Dec 12 15:25:59 crc kubenswrapper[5123]: I1212 15:25:59.365418 5123 reconciler_common.go:299] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/40d826be-27f2-49a2-afa5-a6a9cc8f9bf6-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Dec 12 15:25:59 crc kubenswrapper[5123]: I1212 15:25:59.365426 5123 reconciler_common.go:299] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/40d826be-27f2-49a2-afa5-a6a9cc8f9bf6-trusted-ca\") on node \"crc\" DevicePath \"\"" Dec 12 15:25:59 crc kubenswrapper[5123]: I1212 15:25:59.365435 5123 reconciler_common.go:299] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/40d826be-27f2-49a2-afa5-a6a9cc8f9bf6-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Dec 12 15:25:59 crc kubenswrapper[5123]: I1212 15:25:59.365443 5123 reconciler_common.go:299] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/40d826be-27f2-49a2-afa5-a6a9cc8f9bf6-registry-certificates\") on node \"crc\" DevicePath \"\"" Dec 12 15:26:00 crc 
kubenswrapper[5123]: I1212 15:26:00.059257 5123 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-66587d64c8-ts2mt" Dec 12 15:26:00 crc kubenswrapper[5123]: I1212 15:26:00.059205 5123 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66587d64c8-ts2mt" event={"ID":"40d826be-27f2-49a2-afa5-a6a9cc8f9bf6","Type":"ContainerDied","Data":"b1af30fc90582061b29e0e25741473074ca9bdb2bf34ac2107c10748b3b3460c"} Dec 12 15:26:00 crc kubenswrapper[5123]: I1212 15:26:00.060100 5123 scope.go:117] "RemoveContainer" containerID="6185e29ddf1a26fa821be58425f349b04d6f0bcd319ae8026df7f81fed1e5e3f" Dec 12 15:26:00 crc kubenswrapper[5123]: I1212 15:26:00.090787 5123 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-66587d64c8-ts2mt"] Dec 12 15:26:00 crc kubenswrapper[5123]: I1212 15:26:00.100981 5123 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-image-registry/image-registry-66587d64c8-ts2mt"] Dec 12 15:26:01 crc kubenswrapper[5123]: I1212 15:26:01.669763 5123 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="40d826be-27f2-49a2-afa5-a6a9cc8f9bf6" path="/var/lib/kubelet/pods/40d826be-27f2-49a2-afa5-a6a9cc8f9bf6/volumes" Dec 12 15:27:00 crc kubenswrapper[5123]: I1212 15:27:00.902699 5123 patch_prober.go:28] interesting pod/machine-config-daemon-cs4j6 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 12 15:27:00 crc kubenswrapper[5123]: I1212 15:27:00.903823 5123 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-cs4j6" podUID="cc1dd55e-ab31-4be3-acf1-ef01a53c1bb4" containerName="machine-config-daemon" probeResult="failure" output="Get 
\"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 12 15:27:30 crc kubenswrapper[5123]: I1212 15:27:30.902407 5123 patch_prober.go:28] interesting pod/machine-config-daemon-cs4j6 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 12 15:27:30 crc kubenswrapper[5123]: I1212 15:27:30.903503 5123 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-cs4j6" podUID="cc1dd55e-ab31-4be3-acf1-ef01a53c1bb4" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 12 15:28:00 crc kubenswrapper[5123]: I1212 15:28:00.902399 5123 patch_prober.go:28] interesting pod/machine-config-daemon-cs4j6 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 12 15:28:00 crc kubenswrapper[5123]: I1212 15:28:00.903152 5123 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-cs4j6" podUID="cc1dd55e-ab31-4be3-acf1-ef01a53c1bb4" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 12 15:28:00 crc kubenswrapper[5123]: I1212 15:28:00.903294 5123 kubelet.go:2658] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-cs4j6" Dec 12 15:28:00 crc kubenswrapper[5123]: I1212 15:28:00.904150 5123 kuberuntime_manager.go:1107] "Message for Container of pod" containerName="machine-config-daemon" 
containerStatusID={"Type":"cri-o","ID":"9a4b170656df051882c89f0434d221bcac3b53456e6fd91756cfb74e868ebd7d"} pod="openshift-machine-config-operator/machine-config-daemon-cs4j6" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 12 15:28:00 crc kubenswrapper[5123]: I1212 15:28:00.904267 5123 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-cs4j6" podUID="cc1dd55e-ab31-4be3-acf1-ef01a53c1bb4" containerName="machine-config-daemon" containerID="cri-o://9a4b170656df051882c89f0434d221bcac3b53456e6fd91756cfb74e868ebd7d" gracePeriod=600 Dec 12 15:28:01 crc kubenswrapper[5123]: I1212 15:28:01.465708 5123 generic.go:358] "Generic (PLEG): container finished" podID="cc1dd55e-ab31-4be3-acf1-ef01a53c1bb4" containerID="9a4b170656df051882c89f0434d221bcac3b53456e6fd91756cfb74e868ebd7d" exitCode=0 Dec 12 15:28:01 crc kubenswrapper[5123]: I1212 15:28:01.465798 5123 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-cs4j6" event={"ID":"cc1dd55e-ab31-4be3-acf1-ef01a53c1bb4","Type":"ContainerDied","Data":"9a4b170656df051882c89f0434d221bcac3b53456e6fd91756cfb74e868ebd7d"} Dec 12 15:28:01 crc kubenswrapper[5123]: I1212 15:28:01.465879 5123 scope.go:117] "RemoveContainer" containerID="65dc049b4db90d3b590a91a0ba963ce193c4d376d4171d75ddda499d4ad620ff" Dec 12 15:28:02 crc kubenswrapper[5123]: I1212 15:28:02.476667 5123 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-cs4j6" event={"ID":"cc1dd55e-ab31-4be3-acf1-ef01a53c1bb4","Type":"ContainerStarted","Data":"3606974b214ad9834bbb1da3a0fabe6877d1e0ef7f439301b0bf2a0adb538ba5"} Dec 12 15:29:31 crc kubenswrapper[5123]: I1212 15:29:31.902044 5123 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-config-operator_openshift-config-operator-5777786469-9j9pt_2c1e4fb9-bde9-46df-8ac0-c0b457ca767f/openshift-config-operator/0.log" Dec 12 15:29:31 crc kubenswrapper[5123]: I1212 15:29:31.903076 5123 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-config-operator_openshift-config-operator-5777786469-9j9pt_2c1e4fb9-bde9-46df-8ac0-c0b457ca767f/openshift-config-operator/0.log" Dec 12 15:29:31 crc kubenswrapper[5123]: I1212 15:29:31.931022 5123 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_9f0bc7fcb0822a2c13eb2d22cd8c0641/kube-controller-manager/0.log" Dec 12 15:29:31 crc kubenswrapper[5123]: I1212 15:29:31.931602 5123 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_9f0bc7fcb0822a2c13eb2d22cd8c0641/kube-controller-manager/0.log" Dec 12 15:30:00 crc kubenswrapper[5123]: I1212 15:30:00.194250 5123 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29425890-p5phj"] Dec 12 15:30:00 crc kubenswrapper[5123]: I1212 15:30:00.195932 5123 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="40d826be-27f2-49a2-afa5-a6a9cc8f9bf6" containerName="registry" Dec 12 15:30:00 crc kubenswrapper[5123]: I1212 15:30:00.195969 5123 state_mem.go:107] "Deleted CPUSet assignment" podUID="40d826be-27f2-49a2-afa5-a6a9cc8f9bf6" containerName="registry" Dec 12 15:30:00 crc kubenswrapper[5123]: I1212 15:30:00.196127 5123 memory_manager.go:356] "RemoveStaleState removing state" podUID="40d826be-27f2-49a2-afa5-a6a9cc8f9bf6" containerName="registry" Dec 12 15:30:00 crc kubenswrapper[5123]: I1212 15:30:00.244650 5123 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29425890-p5phj"] Dec 12 15:30:00 crc kubenswrapper[5123]: I1212 15:30:00.244836 5123 util.go:30] "No 
sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29425890-p5phj" Dec 12 15:30:00 crc kubenswrapper[5123]: I1212 15:30:00.250010 5123 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"collect-profiles-dockercfg-vfqp6\"" Dec 12 15:30:00 crc kubenswrapper[5123]: I1212 15:30:00.250390 5123 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-operator-lifecycle-manager\"/\"collect-profiles-config\"" Dec 12 15:30:00 crc kubenswrapper[5123]: I1212 15:30:00.290775 5123 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/27ef5a53-e311-490f-8a24-67823241e6a5-secret-volume\") pod \"collect-profiles-29425890-p5phj\" (UID: \"27ef5a53-e311-490f-8a24-67823241e6a5\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29425890-p5phj" Dec 12 15:30:00 crc kubenswrapper[5123]: I1212 15:30:00.290881 5123 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4kt7q\" (UniqueName: \"kubernetes.io/projected/27ef5a53-e311-490f-8a24-67823241e6a5-kube-api-access-4kt7q\") pod \"collect-profiles-29425890-p5phj\" (UID: \"27ef5a53-e311-490f-8a24-67823241e6a5\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29425890-p5phj" Dec 12 15:30:00 crc kubenswrapper[5123]: I1212 15:30:00.290943 5123 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/27ef5a53-e311-490f-8a24-67823241e6a5-config-volume\") pod \"collect-profiles-29425890-p5phj\" (UID: \"27ef5a53-e311-490f-8a24-67823241e6a5\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29425890-p5phj" Dec 12 15:30:00 crc kubenswrapper[5123]: I1212 15:30:00.392177 5123 reconciler_common.go:224] 
"operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/27ef5a53-e311-490f-8a24-67823241e6a5-config-volume\") pod \"collect-profiles-29425890-p5phj\" (UID: \"27ef5a53-e311-490f-8a24-67823241e6a5\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29425890-p5phj" Dec 12 15:30:00 crc kubenswrapper[5123]: I1212 15:30:00.392303 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/27ef5a53-e311-490f-8a24-67823241e6a5-secret-volume\") pod \"collect-profiles-29425890-p5phj\" (UID: \"27ef5a53-e311-490f-8a24-67823241e6a5\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29425890-p5phj" Dec 12 15:30:00 crc kubenswrapper[5123]: I1212 15:30:00.392386 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-4kt7q\" (UniqueName: \"kubernetes.io/projected/27ef5a53-e311-490f-8a24-67823241e6a5-kube-api-access-4kt7q\") pod \"collect-profiles-29425890-p5phj\" (UID: \"27ef5a53-e311-490f-8a24-67823241e6a5\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29425890-p5phj" Dec 12 15:30:00 crc kubenswrapper[5123]: I1212 15:30:00.393916 5123 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/27ef5a53-e311-490f-8a24-67823241e6a5-config-volume\") pod \"collect-profiles-29425890-p5phj\" (UID: \"27ef5a53-e311-490f-8a24-67823241e6a5\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29425890-p5phj" Dec 12 15:30:00 crc kubenswrapper[5123]: I1212 15:30:00.401305 5123 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/27ef5a53-e311-490f-8a24-67823241e6a5-secret-volume\") pod \"collect-profiles-29425890-p5phj\" (UID: \"27ef5a53-e311-490f-8a24-67823241e6a5\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29425890-p5phj" 
Dec 12 15:30:00 crc kubenswrapper[5123]: I1212 15:30:00.414511 5123 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-4kt7q\" (UniqueName: \"kubernetes.io/projected/27ef5a53-e311-490f-8a24-67823241e6a5-kube-api-access-4kt7q\") pod \"collect-profiles-29425890-p5phj\" (UID: \"27ef5a53-e311-490f-8a24-67823241e6a5\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29425890-p5phj" Dec 12 15:30:00 crc kubenswrapper[5123]: I1212 15:30:00.574758 5123 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29425890-p5phj" Dec 12 15:30:00 crc kubenswrapper[5123]: I1212 15:30:00.971895 5123 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29425890-p5phj"] Dec 12 15:30:00 crc kubenswrapper[5123]: I1212 15:30:00.985499 5123 provider.go:93] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Dec 12 15:30:01 crc kubenswrapper[5123]: I1212 15:30:01.531007 5123 generic.go:358] "Generic (PLEG): container finished" podID="27ef5a53-e311-490f-8a24-67823241e6a5" containerID="2552e3f23c3b31712c33adb9aba05f566741d0235a34bc2efb75549587b7448f" exitCode=0 Dec 12 15:30:01 crc kubenswrapper[5123]: I1212 15:30:01.531140 5123 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29425890-p5phj" event={"ID":"27ef5a53-e311-490f-8a24-67823241e6a5","Type":"ContainerDied","Data":"2552e3f23c3b31712c33adb9aba05f566741d0235a34bc2efb75549587b7448f"} Dec 12 15:30:01 crc kubenswrapper[5123]: I1212 15:30:01.532648 5123 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29425890-p5phj" event={"ID":"27ef5a53-e311-490f-8a24-67823241e6a5","Type":"ContainerStarted","Data":"18be95ec67f13a50542590fa51009299a9d6487e08c6eac1956454d7850f83aa"} Dec 12 15:30:02 crc kubenswrapper[5123]: I1212 
15:30:02.873970 5123 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29425890-p5phj" Dec 12 15:30:02 crc kubenswrapper[5123]: I1212 15:30:02.957653 5123 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/27ef5a53-e311-490f-8a24-67823241e6a5-config-volume\") pod \"27ef5a53-e311-490f-8a24-67823241e6a5\" (UID: \"27ef5a53-e311-490f-8a24-67823241e6a5\") " Dec 12 15:30:02 crc kubenswrapper[5123]: I1212 15:30:02.957846 5123 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4kt7q\" (UniqueName: \"kubernetes.io/projected/27ef5a53-e311-490f-8a24-67823241e6a5-kube-api-access-4kt7q\") pod \"27ef5a53-e311-490f-8a24-67823241e6a5\" (UID: \"27ef5a53-e311-490f-8a24-67823241e6a5\") " Dec 12 15:30:02 crc kubenswrapper[5123]: I1212 15:30:02.957896 5123 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/27ef5a53-e311-490f-8a24-67823241e6a5-secret-volume\") pod \"27ef5a53-e311-490f-8a24-67823241e6a5\" (UID: \"27ef5a53-e311-490f-8a24-67823241e6a5\") " Dec 12 15:30:02 crc kubenswrapper[5123]: I1212 15:30:02.958606 5123 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/27ef5a53-e311-490f-8a24-67823241e6a5-config-volume" (OuterVolumeSpecName: "config-volume") pod "27ef5a53-e311-490f-8a24-67823241e6a5" (UID: "27ef5a53-e311-490f-8a24-67823241e6a5"). InnerVolumeSpecName "config-volume". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 12 15:30:02 crc kubenswrapper[5123]: I1212 15:30:02.965267 5123 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/27ef5a53-e311-490f-8a24-67823241e6a5-kube-api-access-4kt7q" (OuterVolumeSpecName: "kube-api-access-4kt7q") pod "27ef5a53-e311-490f-8a24-67823241e6a5" (UID: "27ef5a53-e311-490f-8a24-67823241e6a5"). InnerVolumeSpecName "kube-api-access-4kt7q". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 12 15:30:02 crc kubenswrapper[5123]: I1212 15:30:02.965434 5123 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/27ef5a53-e311-490f-8a24-67823241e6a5-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "27ef5a53-e311-490f-8a24-67823241e6a5" (UID: "27ef5a53-e311-490f-8a24-67823241e6a5"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 12 15:30:03 crc kubenswrapper[5123]: I1212 15:30:03.059975 5123 reconciler_common.go:299] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/27ef5a53-e311-490f-8a24-67823241e6a5-config-volume\") on node \"crc\" DevicePath \"\"" Dec 12 15:30:03 crc kubenswrapper[5123]: I1212 15:30:03.060057 5123 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-4kt7q\" (UniqueName: \"kubernetes.io/projected/27ef5a53-e311-490f-8a24-67823241e6a5-kube-api-access-4kt7q\") on node \"crc\" DevicePath \"\"" Dec 12 15:30:03 crc kubenswrapper[5123]: I1212 15:30:03.060074 5123 reconciler_common.go:299] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/27ef5a53-e311-490f-8a24-67823241e6a5-secret-volume\") on node \"crc\" DevicePath \"\"" Dec 12 15:30:03 crc kubenswrapper[5123]: I1212 15:30:03.548610 5123 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29425890-p5phj" 
event={"ID":"27ef5a53-e311-490f-8a24-67823241e6a5","Type":"ContainerDied","Data":"18be95ec67f13a50542590fa51009299a9d6487e08c6eac1956454d7850f83aa"} Dec 12 15:30:03 crc kubenswrapper[5123]: I1212 15:30:03.548683 5123 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="18be95ec67f13a50542590fa51009299a9d6487e08c6eac1956454d7850f83aa" Dec 12 15:30:03 crc kubenswrapper[5123]: I1212 15:30:03.548641 5123 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29425890-p5phj" Dec 12 15:30:03 crc kubenswrapper[5123]: I1212 15:30:03.759662 5123 ???:1] "http: TLS handshake error from 192.168.126.11:43228: no serving certificate available for the kubelet" Dec 12 15:30:30 crc kubenswrapper[5123]: I1212 15:30:30.902601 5123 patch_prober.go:28] interesting pod/machine-config-daemon-cs4j6 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 12 15:30:30 crc kubenswrapper[5123]: I1212 15:30:30.903270 5123 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-cs4j6" podUID="cc1dd55e-ab31-4be3-acf1-ef01a53c1bb4" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 12 15:31:00 crc kubenswrapper[5123]: I1212 15:31:00.901918 5123 patch_prober.go:28] interesting pod/machine-config-daemon-cs4j6 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 12 15:31:00 crc kubenswrapper[5123]: I1212 15:31:00.902652 5123 prober.go:120] "Probe failed" probeType="Liveness" 
pod="openshift-machine-config-operator/machine-config-daemon-cs4j6" podUID="cc1dd55e-ab31-4be3-acf1-ef01a53c1bb4" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 12 15:31:01 crc kubenswrapper[5123]: I1212 15:31:01.006574 5123 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-kbx8c"] Dec 12 15:31:01 crc kubenswrapper[5123]: I1212 15:31:01.007014 5123 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-kbx8c" podUID="2d82c231-80e9-4268-8ec7-1ae260abe06c" containerName="kube-rbac-proxy" containerID="cri-o://c3055f0312f1d22ef2805f6b63e2888982b845f12692ea357dd61f9da4ef1531" gracePeriod=30 Dec 12 15:31:01 crc kubenswrapper[5123]: I1212 15:31:01.007187 5123 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-kbx8c" podUID="2d82c231-80e9-4268-8ec7-1ae260abe06c" containerName="ovnkube-cluster-manager" containerID="cri-o://cffee43f21ef2a00dbb34e69ef56df2e8b06dbf3543446948f67e269ba46a700" gracePeriod=30 Dec 12 15:31:01 crc kubenswrapper[5123]: I1212 15:31:01.275165 5123 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-c7cpz"] Dec 12 15:31:01 crc kubenswrapper[5123]: I1212 15:31:01.276378 5123 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-c7cpz" podUID="4ba336c2-0d9e-485a-9785-761f97f2601a" containerName="ovn-controller" containerID="cri-o://9359cde708bbd01b68d54b173f27267dbc5df381ffb70ca8189f8f19b2fb3bbc" gracePeriod=30 Dec 12 15:31:01 crc kubenswrapper[5123]: I1212 15:31:01.276409 5123 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-c7cpz" 
podUID="4ba336c2-0d9e-485a-9785-761f97f2601a" containerName="northd" containerID="cri-o://142c04aec6ca17e608747ac86af8a88d24797e0d10c03531d9e48b83cfb55471" gracePeriod=30 Dec 12 15:31:01 crc kubenswrapper[5123]: I1212 15:31:01.276495 5123 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-c7cpz" podUID="4ba336c2-0d9e-485a-9785-761f97f2601a" containerName="kube-rbac-proxy-ovn-metrics" containerID="cri-o://d44176673eaef06ae636c84b82c9cab9190707d7e960c7579e3c7f42c8738910" gracePeriod=30 Dec 12 15:31:01 crc kubenswrapper[5123]: I1212 15:31:01.276594 5123 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-c7cpz" podUID="4ba336c2-0d9e-485a-9785-761f97f2601a" containerName="nbdb" containerID="cri-o://13ba6a096a2ba4b0b9afbd50d11eba0d8cdb25e23d1b4b26e18c3201ccf516db" gracePeriod=30 Dec 12 15:31:01 crc kubenswrapper[5123]: I1212 15:31:01.276723 5123 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-c7cpz" podUID="4ba336c2-0d9e-485a-9785-761f97f2601a" containerName="ovn-acl-logging" containerID="cri-o://a9267d2d3e119629cfe5f4eb756093064b1a946d674358269b43bce2e3e9c4bb" gracePeriod=30 Dec 12 15:31:01 crc kubenswrapper[5123]: I1212 15:31:01.276409 5123 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-c7cpz" podUID="4ba336c2-0d9e-485a-9785-761f97f2601a" containerName="kube-rbac-proxy-node" containerID="cri-o://002ec6cbd941ba0a26b390b7c87f1fcca86b58149647a279144bdf9a48aba978" gracePeriod=30 Dec 12 15:31:01 crc kubenswrapper[5123]: I1212 15:31:01.276619 5123 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-c7cpz" podUID="4ba336c2-0d9e-485a-9785-761f97f2601a" containerName="sbdb" containerID="cri-o://4df8f61665a45afd71d2b5f4b119db8cb83b99a47388b68baf7e27ed2c4f2c9f" 
gracePeriod=30 Dec 12 15:31:01 crc kubenswrapper[5123]: I1212 15:31:01.320785 5123 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-c7cpz" podUID="4ba336c2-0d9e-485a-9785-761f97f2601a" containerName="ovnkube-controller" containerID="cri-o://75a1894691d0a31baf40f0164ded851c4ee47384a27e045ba24aa78b7377848f" gracePeriod=30 Dec 12 15:31:01 crc kubenswrapper[5123]: I1212 15:31:01.340364 5123 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-kbx8c" Dec 12 15:31:01 crc kubenswrapper[5123]: I1212 15:31:01.390309 5123 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-ljrtg"] Dec 12 15:31:01 crc kubenswrapper[5123]: I1212 15:31:01.390975 5123 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="27ef5a53-e311-490f-8a24-67823241e6a5" containerName="collect-profiles" Dec 12 15:31:01 crc kubenswrapper[5123]: I1212 15:31:01.390990 5123 state_mem.go:107] "Deleted CPUSet assignment" podUID="27ef5a53-e311-490f-8a24-67823241e6a5" containerName="collect-profiles" Dec 12 15:31:01 crc kubenswrapper[5123]: I1212 15:31:01.390998 5123 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="2d82c231-80e9-4268-8ec7-1ae260abe06c" containerName="kube-rbac-proxy" Dec 12 15:31:01 crc kubenswrapper[5123]: I1212 15:31:01.391004 5123 state_mem.go:107] "Deleted CPUSet assignment" podUID="2d82c231-80e9-4268-8ec7-1ae260abe06c" containerName="kube-rbac-proxy" Dec 12 15:31:01 crc kubenswrapper[5123]: I1212 15:31:01.391013 5123 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="2d82c231-80e9-4268-8ec7-1ae260abe06c" containerName="ovnkube-cluster-manager" Dec 12 15:31:01 crc kubenswrapper[5123]: I1212 15:31:01.391019 5123 state_mem.go:107] "Deleted CPUSet assignment" podUID="2d82c231-80e9-4268-8ec7-1ae260abe06c" 
containerName="ovnkube-cluster-manager" Dec 12 15:31:01 crc kubenswrapper[5123]: I1212 15:31:01.391120 5123 memory_manager.go:356] "RemoveStaleState removing state" podUID="2d82c231-80e9-4268-8ec7-1ae260abe06c" containerName="kube-rbac-proxy" Dec 12 15:31:01 crc kubenswrapper[5123]: I1212 15:31:01.391129 5123 memory_manager.go:356] "RemoveStaleState removing state" podUID="2d82c231-80e9-4268-8ec7-1ae260abe06c" containerName="ovnkube-cluster-manager" Dec 12 15:31:01 crc kubenswrapper[5123]: I1212 15:31:01.391142 5123 memory_manager.go:356] "RemoveStaleState removing state" podUID="27ef5a53-e311-490f-8a24-67823241e6a5" containerName="collect-profiles" Dec 12 15:31:01 crc kubenswrapper[5123]: I1212 15:31:01.405617 5123 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-ljrtg" Dec 12 15:31:01 crc kubenswrapper[5123]: I1212 15:31:01.466273 5123 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/2d82c231-80e9-4268-8ec7-1ae260abe06c-env-overrides\") pod \"2d82c231-80e9-4268-8ec7-1ae260abe06c\" (UID: \"2d82c231-80e9-4268-8ec7-1ae260abe06c\") " Dec 12 15:31:01 crc kubenswrapper[5123]: I1212 15:31:01.466413 5123 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n476s\" (UniqueName: \"kubernetes.io/projected/2d82c231-80e9-4268-8ec7-1ae260abe06c-kube-api-access-n476s\") pod \"2d82c231-80e9-4268-8ec7-1ae260abe06c\" (UID: \"2d82c231-80e9-4268-8ec7-1ae260abe06c\") " Dec 12 15:31:01 crc kubenswrapper[5123]: I1212 15:31:01.466547 5123 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/2d82c231-80e9-4268-8ec7-1ae260abe06c-ovnkube-config\") pod \"2d82c231-80e9-4268-8ec7-1ae260abe06c\" (UID: \"2d82c231-80e9-4268-8ec7-1ae260abe06c\") " Dec 12 15:31:01 crc kubenswrapper[5123]: I1212 
15:31:01.466582 5123 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/2d82c231-80e9-4268-8ec7-1ae260abe06c-ovn-control-plane-metrics-cert\") pod \"2d82c231-80e9-4268-8ec7-1ae260abe06c\" (UID: \"2d82c231-80e9-4268-8ec7-1ae260abe06c\") " Dec 12 15:31:01 crc kubenswrapper[5123]: I1212 15:31:01.467097 5123 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2d82c231-80e9-4268-8ec7-1ae260abe06c-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "2d82c231-80e9-4268-8ec7-1ae260abe06c" (UID: "2d82c231-80e9-4268-8ec7-1ae260abe06c"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 12 15:31:01 crc kubenswrapper[5123]: I1212 15:31:01.467337 5123 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2d82c231-80e9-4268-8ec7-1ae260abe06c-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "2d82c231-80e9-4268-8ec7-1ae260abe06c" (UID: "2d82c231-80e9-4268-8ec7-1ae260abe06c"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 12 15:31:01 crc kubenswrapper[5123]: I1212 15:31:01.473780 5123 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2d82c231-80e9-4268-8ec7-1ae260abe06c-kube-api-access-n476s" (OuterVolumeSpecName: "kube-api-access-n476s") pod "2d82c231-80e9-4268-8ec7-1ae260abe06c" (UID: "2d82c231-80e9-4268-8ec7-1ae260abe06c"). InnerVolumeSpecName "kube-api-access-n476s". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 12 15:31:01 crc kubenswrapper[5123]: I1212 15:31:01.473856 5123 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2d82c231-80e9-4268-8ec7-1ae260abe06c-ovn-control-plane-metrics-cert" (OuterVolumeSpecName: "ovn-control-plane-metrics-cert") pod "2d82c231-80e9-4268-8ec7-1ae260abe06c" (UID: "2d82c231-80e9-4268-8ec7-1ae260abe06c"). InnerVolumeSpecName "ovn-control-plane-metrics-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 12 15:31:01 crc kubenswrapper[5123]: I1212 15:31:01.568674 5123 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/2dd48b16-f0e5-4dd8-ba58-d7b1f51ac78c-ovnkube-config\") pod \"ovnkube-control-plane-97c9b6c48-ljrtg\" (UID: \"2dd48b16-f0e5-4dd8-ba58-d7b1f51ac78c\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-ljrtg" Dec 12 15:31:01 crc kubenswrapper[5123]: I1212 15:31:01.568755 5123 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xfn2m\" (UniqueName: \"kubernetes.io/projected/2dd48b16-f0e5-4dd8-ba58-d7b1f51ac78c-kube-api-access-xfn2m\") pod \"ovnkube-control-plane-97c9b6c48-ljrtg\" (UID: \"2dd48b16-f0e5-4dd8-ba58-d7b1f51ac78c\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-ljrtg" Dec 12 15:31:01 crc kubenswrapper[5123]: I1212 15:31:01.568785 5123 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/2dd48b16-f0e5-4dd8-ba58-d7b1f51ac78c-env-overrides\") pod \"ovnkube-control-plane-97c9b6c48-ljrtg\" (UID: \"2dd48b16-f0e5-4dd8-ba58-d7b1f51ac78c\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-ljrtg" Dec 12 15:31:01 crc kubenswrapper[5123]: I1212 15:31:01.569288 5123 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/2dd48b16-f0e5-4dd8-ba58-d7b1f51ac78c-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-97c9b6c48-ljrtg\" (UID: \"2dd48b16-f0e5-4dd8-ba58-d7b1f51ac78c\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-ljrtg" Dec 12 15:31:01 crc kubenswrapper[5123]: I1212 15:31:01.569472 5123 reconciler_common.go:299] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/2d82c231-80e9-4268-8ec7-1ae260abe06c-env-overrides\") on node \"crc\" DevicePath \"\"" Dec 12 15:31:01 crc kubenswrapper[5123]: I1212 15:31:01.569513 5123 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-n476s\" (UniqueName: \"kubernetes.io/projected/2d82c231-80e9-4268-8ec7-1ae260abe06c-kube-api-access-n476s\") on node \"crc\" DevicePath \"\"" Dec 12 15:31:01 crc kubenswrapper[5123]: I1212 15:31:01.569531 5123 reconciler_common.go:299] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/2d82c231-80e9-4268-8ec7-1ae260abe06c-ovnkube-config\") on node \"crc\" DevicePath \"\"" Dec 12 15:31:01 crc kubenswrapper[5123]: I1212 15:31:01.569544 5123 reconciler_common.go:299] "Volume detached for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/2d82c231-80e9-4268-8ec7-1ae260abe06c-ovn-control-plane-metrics-cert\") on node \"crc\" DevicePath \"\"" Dec 12 15:31:01 crc kubenswrapper[5123]: I1212 15:31:01.671321 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-xfn2m\" (UniqueName: \"kubernetes.io/projected/2dd48b16-f0e5-4dd8-ba58-d7b1f51ac78c-kube-api-access-xfn2m\") pod \"ovnkube-control-plane-97c9b6c48-ljrtg\" (UID: \"2dd48b16-f0e5-4dd8-ba58-d7b1f51ac78c\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-ljrtg" Dec 12 15:31:01 crc kubenswrapper[5123]: I1212 15:31:01.672386 5123 
reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/2dd48b16-f0e5-4dd8-ba58-d7b1f51ac78c-env-overrides\") pod \"ovnkube-control-plane-97c9b6c48-ljrtg\" (UID: \"2dd48b16-f0e5-4dd8-ba58-d7b1f51ac78c\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-ljrtg" Dec 12 15:31:01 crc kubenswrapper[5123]: I1212 15:31:01.672472 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/2dd48b16-f0e5-4dd8-ba58-d7b1f51ac78c-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-97c9b6c48-ljrtg\" (UID: \"2dd48b16-f0e5-4dd8-ba58-d7b1f51ac78c\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-ljrtg" Dec 12 15:31:01 crc kubenswrapper[5123]: I1212 15:31:01.672556 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/2dd48b16-f0e5-4dd8-ba58-d7b1f51ac78c-ovnkube-config\") pod \"ovnkube-control-plane-97c9b6c48-ljrtg\" (UID: \"2dd48b16-f0e5-4dd8-ba58-d7b1f51ac78c\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-ljrtg" Dec 12 15:31:01 crc kubenswrapper[5123]: I1212 15:31:01.674863 5123 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/2dd48b16-f0e5-4dd8-ba58-d7b1f51ac78c-env-overrides\") pod \"ovnkube-control-plane-97c9b6c48-ljrtg\" (UID: \"2dd48b16-f0e5-4dd8-ba58-d7b1f51ac78c\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-ljrtg" Dec 12 15:31:01 crc kubenswrapper[5123]: I1212 15:31:01.675396 5123 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/2dd48b16-f0e5-4dd8-ba58-d7b1f51ac78c-ovnkube-config\") pod \"ovnkube-control-plane-97c9b6c48-ljrtg\" (UID: \"2dd48b16-f0e5-4dd8-ba58-d7b1f51ac78c\") " 
pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-ljrtg" Dec 12 15:31:01 crc kubenswrapper[5123]: I1212 15:31:01.678081 5123 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/2dd48b16-f0e5-4dd8-ba58-d7b1f51ac78c-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-97c9b6c48-ljrtg\" (UID: \"2dd48b16-f0e5-4dd8-ba58-d7b1f51ac78c\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-ljrtg" Dec 12 15:31:01 crc kubenswrapper[5123]: I1212 15:31:01.690431 5123 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-xfn2m\" (UniqueName: \"kubernetes.io/projected/2dd48b16-f0e5-4dd8-ba58-d7b1f51ac78c-kube-api-access-xfn2m\") pod \"ovnkube-control-plane-97c9b6c48-ljrtg\" (UID: \"2dd48b16-f0e5-4dd8-ba58-d7b1f51ac78c\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-ljrtg" Dec 12 15:31:01 crc kubenswrapper[5123]: I1212 15:31:01.746826 5123 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-ljrtg" Dec 12 15:31:02 crc kubenswrapper[5123]: I1212 15:31:02.022703 5123 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-ljrtg" event={"ID":"2dd48b16-f0e5-4dd8-ba58-d7b1f51ac78c","Type":"ContainerStarted","Data":"3f3bae1a9bfbef97311d1fefa63e0a59860da62ebe9e53d4d257137ddca092e4"} Dec 12 15:31:02 crc kubenswrapper[5123]: I1212 15:31:02.024945 5123 generic.go:358] "Generic (PLEG): container finished" podID="2d82c231-80e9-4268-8ec7-1ae260abe06c" containerID="cffee43f21ef2a00dbb34e69ef56df2e8b06dbf3543446948f67e269ba46a700" exitCode=0 Dec 12 15:31:02 crc kubenswrapper[5123]: I1212 15:31:02.024994 5123 generic.go:358] "Generic (PLEG): container finished" podID="2d82c231-80e9-4268-8ec7-1ae260abe06c" containerID="c3055f0312f1d22ef2805f6b63e2888982b845f12692ea357dd61f9da4ef1531" exitCode=0 Dec 12 15:31:02 crc kubenswrapper[5123]: I1212 15:31:02.025060 5123 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-kbx8c" Dec 12 15:31:02 crc kubenswrapper[5123]: I1212 15:31:02.025085 5123 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-kbx8c" event={"ID":"2d82c231-80e9-4268-8ec7-1ae260abe06c","Type":"ContainerDied","Data":"cffee43f21ef2a00dbb34e69ef56df2e8b06dbf3543446948f67e269ba46a700"} Dec 12 15:31:02 crc kubenswrapper[5123]: I1212 15:31:02.025124 5123 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-kbx8c" event={"ID":"2d82c231-80e9-4268-8ec7-1ae260abe06c","Type":"ContainerDied","Data":"c3055f0312f1d22ef2805f6b63e2888982b845f12692ea357dd61f9da4ef1531"} Dec 12 15:31:02 crc kubenswrapper[5123]: I1212 15:31:02.025136 5123 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-kbx8c" event={"ID":"2d82c231-80e9-4268-8ec7-1ae260abe06c","Type":"ContainerDied","Data":"150f9f76efd48d3e98dbe363eb13ee730ec2f286c53954bb5c7dfe4533c7ee72"} Dec 12 15:31:02 crc kubenswrapper[5123]: I1212 15:31:02.025210 5123 scope.go:117] "RemoveContainer" containerID="cffee43f21ef2a00dbb34e69ef56df2e8b06dbf3543446948f67e269ba46a700" Dec 12 15:31:02 crc kubenswrapper[5123]: I1212 15:31:02.028418 5123 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-27rm2_3ef15793-fa49-4c37-a355-d4573977e301/kube-multus/0.log" Dec 12 15:31:02 crc kubenswrapper[5123]: I1212 15:31:02.028484 5123 generic.go:358] "Generic (PLEG): container finished" podID="3ef15793-fa49-4c37-a355-d4573977e301" containerID="23d144a0239efa382b93533f38644c94c10ca4bc5ce0604670b37be72d669266" exitCode=2 Dec 12 15:31:02 crc kubenswrapper[5123]: I1212 15:31:02.028872 5123 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-27rm2" 
event={"ID":"3ef15793-fa49-4c37-a355-d4573977e301","Type":"ContainerDied","Data":"23d144a0239efa382b93533f38644c94c10ca4bc5ce0604670b37be72d669266"} Dec 12 15:31:02 crc kubenswrapper[5123]: I1212 15:31:02.029843 5123 scope.go:117] "RemoveContainer" containerID="23d144a0239efa382b93533f38644c94c10ca4bc5ce0604670b37be72d669266" Dec 12 15:31:02 crc kubenswrapper[5123]: I1212 15:31:02.038079 5123 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-c7cpz_4ba336c2-0d9e-485a-9785-761f97f2601a/ovn-acl-logging/0.log" Dec 12 15:31:02 crc kubenswrapper[5123]: I1212 15:31:02.038631 5123 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-c7cpz_4ba336c2-0d9e-485a-9785-761f97f2601a/ovn-controller/0.log" Dec 12 15:31:02 crc kubenswrapper[5123]: I1212 15:31:02.038993 5123 generic.go:358] "Generic (PLEG): container finished" podID="4ba336c2-0d9e-485a-9785-761f97f2601a" containerID="75a1894691d0a31baf40f0164ded851c4ee47384a27e045ba24aa78b7377848f" exitCode=0 Dec 12 15:31:02 crc kubenswrapper[5123]: I1212 15:31:02.039011 5123 generic.go:358] "Generic (PLEG): container finished" podID="4ba336c2-0d9e-485a-9785-761f97f2601a" containerID="4df8f61665a45afd71d2b5f4b119db8cb83b99a47388b68baf7e27ed2c4f2c9f" exitCode=0 Dec 12 15:31:02 crc kubenswrapper[5123]: I1212 15:31:02.039017 5123 generic.go:358] "Generic (PLEG): container finished" podID="4ba336c2-0d9e-485a-9785-761f97f2601a" containerID="13ba6a096a2ba4b0b9afbd50d11eba0d8cdb25e23d1b4b26e18c3201ccf516db" exitCode=0 Dec 12 15:31:02 crc kubenswrapper[5123]: I1212 15:31:02.039024 5123 generic.go:358] "Generic (PLEG): container finished" podID="4ba336c2-0d9e-485a-9785-761f97f2601a" containerID="142c04aec6ca17e608747ac86af8a88d24797e0d10c03531d9e48b83cfb55471" exitCode=0 Dec 12 15:31:02 crc kubenswrapper[5123]: I1212 15:31:02.039030 5123 generic.go:358] "Generic (PLEG): container finished" podID="4ba336c2-0d9e-485a-9785-761f97f2601a" 
containerID="d44176673eaef06ae636c84b82c9cab9190707d7e960c7579e3c7f42c8738910" exitCode=0 Dec 12 15:31:02 crc kubenswrapper[5123]: I1212 15:31:02.039037 5123 generic.go:358] "Generic (PLEG): container finished" podID="4ba336c2-0d9e-485a-9785-761f97f2601a" containerID="002ec6cbd941ba0a26b390b7c87f1fcca86b58149647a279144bdf9a48aba978" exitCode=0 Dec 12 15:31:02 crc kubenswrapper[5123]: I1212 15:31:02.039043 5123 generic.go:358] "Generic (PLEG): container finished" podID="4ba336c2-0d9e-485a-9785-761f97f2601a" containerID="a9267d2d3e119629cfe5f4eb756093064b1a946d674358269b43bce2e3e9c4bb" exitCode=143 Dec 12 15:31:02 crc kubenswrapper[5123]: I1212 15:31:02.039050 5123 generic.go:358] "Generic (PLEG): container finished" podID="4ba336c2-0d9e-485a-9785-761f97f2601a" containerID="9359cde708bbd01b68d54b173f27267dbc5df381ffb70ca8189f8f19b2fb3bbc" exitCode=143 Dec 12 15:31:02 crc kubenswrapper[5123]: I1212 15:31:02.039139 5123 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-c7cpz" event={"ID":"4ba336c2-0d9e-485a-9785-761f97f2601a","Type":"ContainerDied","Data":"75a1894691d0a31baf40f0164ded851c4ee47384a27e045ba24aa78b7377848f"} Dec 12 15:31:02 crc kubenswrapper[5123]: I1212 15:31:02.039171 5123 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-c7cpz" event={"ID":"4ba336c2-0d9e-485a-9785-761f97f2601a","Type":"ContainerDied","Data":"4df8f61665a45afd71d2b5f4b119db8cb83b99a47388b68baf7e27ed2c4f2c9f"} Dec 12 15:31:02 crc kubenswrapper[5123]: I1212 15:31:02.039183 5123 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-c7cpz" event={"ID":"4ba336c2-0d9e-485a-9785-761f97f2601a","Type":"ContainerDied","Data":"13ba6a096a2ba4b0b9afbd50d11eba0d8cdb25e23d1b4b26e18c3201ccf516db"} Dec 12 15:31:02 crc kubenswrapper[5123]: I1212 15:31:02.039194 5123 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-c7cpz" 
event={"ID":"4ba336c2-0d9e-485a-9785-761f97f2601a","Type":"ContainerDied","Data":"142c04aec6ca17e608747ac86af8a88d24797e0d10c03531d9e48b83cfb55471"} Dec 12 15:31:02 crc kubenswrapper[5123]: I1212 15:31:02.039204 5123 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-c7cpz" event={"ID":"4ba336c2-0d9e-485a-9785-761f97f2601a","Type":"ContainerDied","Data":"d44176673eaef06ae636c84b82c9cab9190707d7e960c7579e3c7f42c8738910"} Dec 12 15:31:02 crc kubenswrapper[5123]: I1212 15:31:02.039214 5123 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-c7cpz" event={"ID":"4ba336c2-0d9e-485a-9785-761f97f2601a","Type":"ContainerDied","Data":"002ec6cbd941ba0a26b390b7c87f1fcca86b58149647a279144bdf9a48aba978"} Dec 12 15:31:02 crc kubenswrapper[5123]: I1212 15:31:02.039241 5123 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-c7cpz" event={"ID":"4ba336c2-0d9e-485a-9785-761f97f2601a","Type":"ContainerDied","Data":"a9267d2d3e119629cfe5f4eb756093064b1a946d674358269b43bce2e3e9c4bb"} Dec 12 15:31:02 crc kubenswrapper[5123]: I1212 15:31:02.039293 5123 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-c7cpz" event={"ID":"4ba336c2-0d9e-485a-9785-761f97f2601a","Type":"ContainerDied","Data":"9359cde708bbd01b68d54b173f27267dbc5df381ffb70ca8189f8f19b2fb3bbc"} Dec 12 15:31:02 crc kubenswrapper[5123]: I1212 15:31:02.057176 5123 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-c7cpz_4ba336c2-0d9e-485a-9785-761f97f2601a/ovn-acl-logging/0.log" Dec 12 15:31:02 crc kubenswrapper[5123]: I1212 15:31:02.058742 5123 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-c7cpz_4ba336c2-0d9e-485a-9785-761f97f2601a/ovn-controller/0.log" Dec 12 15:31:02 crc kubenswrapper[5123]: I1212 15:31:02.060823 5123 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-c7cpz" Dec 12 15:31:02 crc kubenswrapper[5123]: I1212 15:31:02.077179 5123 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-kbx8c"] Dec 12 15:31:02 crc kubenswrapper[5123]: I1212 15:31:02.083530 5123 scope.go:117] "RemoveContainer" containerID="c3055f0312f1d22ef2805f6b63e2888982b845f12692ea357dd61f9da4ef1531" Dec 12 15:31:02 crc kubenswrapper[5123]: I1212 15:31:02.085972 5123 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-kbx8c"] Dec 12 15:31:02 crc kubenswrapper[5123]: I1212 15:31:02.104795 5123 scope.go:117] "RemoveContainer" containerID="cffee43f21ef2a00dbb34e69ef56df2e8b06dbf3543446948f67e269ba46a700" Dec 12 15:31:02 crc kubenswrapper[5123]: E1212 15:31:02.105512 5123 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cffee43f21ef2a00dbb34e69ef56df2e8b06dbf3543446948f67e269ba46a700\": container with ID starting with cffee43f21ef2a00dbb34e69ef56df2e8b06dbf3543446948f67e269ba46a700 not found: ID does not exist" containerID="cffee43f21ef2a00dbb34e69ef56df2e8b06dbf3543446948f67e269ba46a700" Dec 12 15:31:02 crc kubenswrapper[5123]: I1212 15:31:02.105546 5123 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cffee43f21ef2a00dbb34e69ef56df2e8b06dbf3543446948f67e269ba46a700"} err="failed to get container status \"cffee43f21ef2a00dbb34e69ef56df2e8b06dbf3543446948f67e269ba46a700\": rpc error: code = NotFound desc = could not find container \"cffee43f21ef2a00dbb34e69ef56df2e8b06dbf3543446948f67e269ba46a700\": container with ID starting with cffee43f21ef2a00dbb34e69ef56df2e8b06dbf3543446948f67e269ba46a700 not found: ID does not exist" Dec 12 15:31:02 crc kubenswrapper[5123]: I1212 15:31:02.105574 5123 scope.go:117] "RemoveContainer" 
containerID="c3055f0312f1d22ef2805f6b63e2888982b845f12692ea357dd61f9da4ef1531" Dec 12 15:31:02 crc kubenswrapper[5123]: E1212 15:31:02.105832 5123 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c3055f0312f1d22ef2805f6b63e2888982b845f12692ea357dd61f9da4ef1531\": container with ID starting with c3055f0312f1d22ef2805f6b63e2888982b845f12692ea357dd61f9da4ef1531 not found: ID does not exist" containerID="c3055f0312f1d22ef2805f6b63e2888982b845f12692ea357dd61f9da4ef1531" Dec 12 15:31:02 crc kubenswrapper[5123]: I1212 15:31:02.105873 5123 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c3055f0312f1d22ef2805f6b63e2888982b845f12692ea357dd61f9da4ef1531"} err="failed to get container status \"c3055f0312f1d22ef2805f6b63e2888982b845f12692ea357dd61f9da4ef1531\": rpc error: code = NotFound desc = could not find container \"c3055f0312f1d22ef2805f6b63e2888982b845f12692ea357dd61f9da4ef1531\": container with ID starting with c3055f0312f1d22ef2805f6b63e2888982b845f12692ea357dd61f9da4ef1531 not found: ID does not exist" Dec 12 15:31:02 crc kubenswrapper[5123]: I1212 15:31:02.105894 5123 scope.go:117] "RemoveContainer" containerID="cffee43f21ef2a00dbb34e69ef56df2e8b06dbf3543446948f67e269ba46a700" Dec 12 15:31:02 crc kubenswrapper[5123]: I1212 15:31:02.106124 5123 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cffee43f21ef2a00dbb34e69ef56df2e8b06dbf3543446948f67e269ba46a700"} err="failed to get container status \"cffee43f21ef2a00dbb34e69ef56df2e8b06dbf3543446948f67e269ba46a700\": rpc error: code = NotFound desc = could not find container \"cffee43f21ef2a00dbb34e69ef56df2e8b06dbf3543446948f67e269ba46a700\": container with ID starting with cffee43f21ef2a00dbb34e69ef56df2e8b06dbf3543446948f67e269ba46a700 not found: ID does not exist" Dec 12 15:31:02 crc kubenswrapper[5123]: I1212 15:31:02.106139 5123 scope.go:117] 
"RemoveContainer" containerID="c3055f0312f1d22ef2805f6b63e2888982b845f12692ea357dd61f9da4ef1531" Dec 12 15:31:02 crc kubenswrapper[5123]: I1212 15:31:02.106518 5123 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c3055f0312f1d22ef2805f6b63e2888982b845f12692ea357dd61f9da4ef1531"} err="failed to get container status \"c3055f0312f1d22ef2805f6b63e2888982b845f12692ea357dd61f9da4ef1531\": rpc error: code = NotFound desc = could not find container \"c3055f0312f1d22ef2805f6b63e2888982b845f12692ea357dd61f9da4ef1531\": container with ID starting with c3055f0312f1d22ef2805f6b63e2888982b845f12692ea357dd61f9da4ef1531 not found: ID does not exist" Dec 12 15:31:02 crc kubenswrapper[5123]: I1212 15:31:02.132083 5123 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-l6v7x"] Dec 12 15:31:02 crc kubenswrapper[5123]: I1212 15:31:02.132925 5123 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="4ba336c2-0d9e-485a-9785-761f97f2601a" containerName="ovnkube-controller" Dec 12 15:31:02 crc kubenswrapper[5123]: I1212 15:31:02.132957 5123 state_mem.go:107] "Deleted CPUSet assignment" podUID="4ba336c2-0d9e-485a-9785-761f97f2601a" containerName="ovnkube-controller" Dec 12 15:31:02 crc kubenswrapper[5123]: I1212 15:31:02.132976 5123 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="4ba336c2-0d9e-485a-9785-761f97f2601a" containerName="sbdb" Dec 12 15:31:02 crc kubenswrapper[5123]: I1212 15:31:02.132983 5123 state_mem.go:107] "Deleted CPUSet assignment" podUID="4ba336c2-0d9e-485a-9785-761f97f2601a" containerName="sbdb" Dec 12 15:31:02 crc kubenswrapper[5123]: I1212 15:31:02.133004 5123 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="4ba336c2-0d9e-485a-9785-761f97f2601a" containerName="kube-rbac-proxy-ovn-metrics" Dec 12 15:31:02 crc kubenswrapper[5123]: I1212 15:31:02.133013 5123 state_mem.go:107] "Deleted CPUSet 
assignment" podUID="4ba336c2-0d9e-485a-9785-761f97f2601a" containerName="kube-rbac-proxy-ovn-metrics" Dec 12 15:31:02 crc kubenswrapper[5123]: I1212 15:31:02.133027 5123 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="4ba336c2-0d9e-485a-9785-761f97f2601a" containerName="northd" Dec 12 15:31:02 crc kubenswrapper[5123]: I1212 15:31:02.133033 5123 state_mem.go:107] "Deleted CPUSet assignment" podUID="4ba336c2-0d9e-485a-9785-761f97f2601a" containerName="northd" Dec 12 15:31:02 crc kubenswrapper[5123]: I1212 15:31:02.133047 5123 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="4ba336c2-0d9e-485a-9785-761f97f2601a" containerName="ovn-controller" Dec 12 15:31:02 crc kubenswrapper[5123]: I1212 15:31:02.133054 5123 state_mem.go:107] "Deleted CPUSet assignment" podUID="4ba336c2-0d9e-485a-9785-761f97f2601a" containerName="ovn-controller" Dec 12 15:31:02 crc kubenswrapper[5123]: I1212 15:31:02.133064 5123 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="4ba336c2-0d9e-485a-9785-761f97f2601a" containerName="ovn-acl-logging" Dec 12 15:31:02 crc kubenswrapper[5123]: I1212 15:31:02.133071 5123 state_mem.go:107] "Deleted CPUSet assignment" podUID="4ba336c2-0d9e-485a-9785-761f97f2601a" containerName="ovn-acl-logging" Dec 12 15:31:02 crc kubenswrapper[5123]: I1212 15:31:02.133079 5123 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="4ba336c2-0d9e-485a-9785-761f97f2601a" containerName="kube-rbac-proxy-node" Dec 12 15:31:02 crc kubenswrapper[5123]: I1212 15:31:02.133146 5123 state_mem.go:107] "Deleted CPUSet assignment" podUID="4ba336c2-0d9e-485a-9785-761f97f2601a" containerName="kube-rbac-proxy-node" Dec 12 15:31:02 crc kubenswrapper[5123]: I1212 15:31:02.133160 5123 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="4ba336c2-0d9e-485a-9785-761f97f2601a" containerName="nbdb" Dec 12 15:31:02 crc kubenswrapper[5123]: I1212 
15:31:02.133168 5123 state_mem.go:107] "Deleted CPUSet assignment" podUID="4ba336c2-0d9e-485a-9785-761f97f2601a" containerName="nbdb" Dec 12 15:31:02 crc kubenswrapper[5123]: I1212 15:31:02.133185 5123 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="4ba336c2-0d9e-485a-9785-761f97f2601a" containerName="kubecfg-setup" Dec 12 15:31:02 crc kubenswrapper[5123]: I1212 15:31:02.133192 5123 state_mem.go:107] "Deleted CPUSet assignment" podUID="4ba336c2-0d9e-485a-9785-761f97f2601a" containerName="kubecfg-setup" Dec 12 15:31:02 crc kubenswrapper[5123]: I1212 15:31:02.133379 5123 memory_manager.go:356] "RemoveStaleState removing state" podUID="4ba336c2-0d9e-485a-9785-761f97f2601a" containerName="ovn-controller" Dec 12 15:31:02 crc kubenswrapper[5123]: I1212 15:31:02.133400 5123 memory_manager.go:356] "RemoveStaleState removing state" podUID="4ba336c2-0d9e-485a-9785-761f97f2601a" containerName="ovn-acl-logging" Dec 12 15:31:02 crc kubenswrapper[5123]: I1212 15:31:02.133412 5123 memory_manager.go:356] "RemoveStaleState removing state" podUID="4ba336c2-0d9e-485a-9785-761f97f2601a" containerName="kube-rbac-proxy-ovn-metrics" Dec 12 15:31:02 crc kubenswrapper[5123]: I1212 15:31:02.133422 5123 memory_manager.go:356] "RemoveStaleState removing state" podUID="4ba336c2-0d9e-485a-9785-761f97f2601a" containerName="nbdb" Dec 12 15:31:02 crc kubenswrapper[5123]: I1212 15:31:02.133432 5123 memory_manager.go:356] "RemoveStaleState removing state" podUID="4ba336c2-0d9e-485a-9785-761f97f2601a" containerName="kube-rbac-proxy-node" Dec 12 15:31:02 crc kubenswrapper[5123]: I1212 15:31:02.133441 5123 memory_manager.go:356] "RemoveStaleState removing state" podUID="4ba336c2-0d9e-485a-9785-761f97f2601a" containerName="sbdb" Dec 12 15:31:02 crc kubenswrapper[5123]: I1212 15:31:02.133451 5123 memory_manager.go:356] "RemoveStaleState removing state" podUID="4ba336c2-0d9e-485a-9785-761f97f2601a" containerName="northd" Dec 12 15:31:02 crc kubenswrapper[5123]: I1212 
15:31:02.133464 5123 memory_manager.go:356] "RemoveStaleState removing state" podUID="4ba336c2-0d9e-485a-9785-761f97f2601a" containerName="ovnkube-controller" Dec 12 15:31:02 crc kubenswrapper[5123]: I1212 15:31:02.141647 5123 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-l6v7x" Dec 12 15:31:02 crc kubenswrapper[5123]: I1212 15:31:02.177632 5123 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/4ba336c2-0d9e-485a-9785-761f97f2601a-ovnkube-config\") pod \"4ba336c2-0d9e-485a-9785-761f97f2601a\" (UID: \"4ba336c2-0d9e-485a-9785-761f97f2601a\") " Dec 12 15:31:02 crc kubenswrapper[5123]: I1212 15:31:02.177715 5123 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/4ba336c2-0d9e-485a-9785-761f97f2601a-run-systemd\") pod \"4ba336c2-0d9e-485a-9785-761f97f2601a\" (UID: \"4ba336c2-0d9e-485a-9785-761f97f2601a\") " Dec 12 15:31:02 crc kubenswrapper[5123]: I1212 15:31:02.177750 5123 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/4ba336c2-0d9e-485a-9785-761f97f2601a-var-lib-openvswitch\") pod \"4ba336c2-0d9e-485a-9785-761f97f2601a\" (UID: \"4ba336c2-0d9e-485a-9785-761f97f2601a\") " Dec 12 15:31:02 crc kubenswrapper[5123]: I1212 15:31:02.177784 5123 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/4ba336c2-0d9e-485a-9785-761f97f2601a-etc-openvswitch\") pod \"4ba336c2-0d9e-485a-9785-761f97f2601a\" (UID: \"4ba336c2-0d9e-485a-9785-761f97f2601a\") " Dec 12 15:31:02 crc kubenswrapper[5123]: I1212 15:31:02.177828 5123 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-cni-netd\" (UniqueName: 
\"kubernetes.io/host-path/4ba336c2-0d9e-485a-9785-761f97f2601a-host-cni-netd\") pod \"4ba336c2-0d9e-485a-9785-761f97f2601a\" (UID: \"4ba336c2-0d9e-485a-9785-761f97f2601a\") " Dec 12 15:31:02 crc kubenswrapper[5123]: I1212 15:31:02.177861 5123 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/4ba336c2-0d9e-485a-9785-761f97f2601a-run-openvswitch\") pod \"4ba336c2-0d9e-485a-9785-761f97f2601a\" (UID: \"4ba336c2-0d9e-485a-9785-761f97f2601a\") " Dec 12 15:31:02 crc kubenswrapper[5123]: I1212 15:31:02.177899 5123 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/4ba336c2-0d9e-485a-9785-761f97f2601a-ovn-node-metrics-cert\") pod \"4ba336c2-0d9e-485a-9785-761f97f2601a\" (UID: \"4ba336c2-0d9e-485a-9785-761f97f2601a\") " Dec 12 15:31:02 crc kubenswrapper[5123]: I1212 15:31:02.177955 5123 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dfltr\" (UniqueName: \"kubernetes.io/projected/4ba336c2-0d9e-485a-9785-761f97f2601a-kube-api-access-dfltr\") pod \"4ba336c2-0d9e-485a-9785-761f97f2601a\" (UID: \"4ba336c2-0d9e-485a-9785-761f97f2601a\") " Dec 12 15:31:02 crc kubenswrapper[5123]: I1212 15:31:02.177986 5123 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/4ba336c2-0d9e-485a-9785-761f97f2601a-host-var-lib-cni-networks-ovn-kubernetes\") pod \"4ba336c2-0d9e-485a-9785-761f97f2601a\" (UID: \"4ba336c2-0d9e-485a-9785-761f97f2601a\") " Dec 12 15:31:02 crc kubenswrapper[5123]: I1212 15:31:02.178022 5123 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/4ba336c2-0d9e-485a-9785-761f97f2601a-log-socket\") pod \"4ba336c2-0d9e-485a-9785-761f97f2601a\" (UID: 
\"4ba336c2-0d9e-485a-9785-761f97f2601a\") " Dec 12 15:31:02 crc kubenswrapper[5123]: I1212 15:31:02.178072 5123 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/4ba336c2-0d9e-485a-9785-761f97f2601a-host-run-netns\") pod \"4ba336c2-0d9e-485a-9785-761f97f2601a\" (UID: \"4ba336c2-0d9e-485a-9785-761f97f2601a\") " Dec 12 15:31:02 crc kubenswrapper[5123]: I1212 15:31:02.178114 5123 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/4ba336c2-0d9e-485a-9785-761f97f2601a-systemd-units\") pod \"4ba336c2-0d9e-485a-9785-761f97f2601a\" (UID: \"4ba336c2-0d9e-485a-9785-761f97f2601a\") " Dec 12 15:31:02 crc kubenswrapper[5123]: I1212 15:31:02.178144 5123 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/4ba336c2-0d9e-485a-9785-761f97f2601a-run-ovn\") pod \"4ba336c2-0d9e-485a-9785-761f97f2601a\" (UID: \"4ba336c2-0d9e-485a-9785-761f97f2601a\") " Dec 12 15:31:02 crc kubenswrapper[5123]: I1212 15:31:02.178177 5123 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/4ba336c2-0d9e-485a-9785-761f97f2601a-host-kubelet\") pod \"4ba336c2-0d9e-485a-9785-761f97f2601a\" (UID: \"4ba336c2-0d9e-485a-9785-761f97f2601a\") " Dec 12 15:31:02 crc kubenswrapper[5123]: I1212 15:31:02.178302 5123 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/4ba336c2-0d9e-485a-9785-761f97f2601a-host-cni-bin\") pod \"4ba336c2-0d9e-485a-9785-761f97f2601a\" (UID: \"4ba336c2-0d9e-485a-9785-761f97f2601a\") " Dec 12 15:31:02 crc kubenswrapper[5123]: I1212 15:31:02.178320 5123 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"node-log\" (UniqueName: 
\"kubernetes.io/host-path/4ba336c2-0d9e-485a-9785-761f97f2601a-node-log\") pod \"4ba336c2-0d9e-485a-9785-761f97f2601a\" (UID: \"4ba336c2-0d9e-485a-9785-761f97f2601a\") " Dec 12 15:31:02 crc kubenswrapper[5123]: I1212 15:31:02.178299 5123 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4ba336c2-0d9e-485a-9785-761f97f2601a-run-openvswitch" (OuterVolumeSpecName: "run-openvswitch") pod "4ba336c2-0d9e-485a-9785-761f97f2601a" (UID: "4ba336c2-0d9e-485a-9785-761f97f2601a"). InnerVolumeSpecName "run-openvswitch". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Dec 12 15:31:02 crc kubenswrapper[5123]: I1212 15:31:02.178379 5123 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/4ba336c2-0d9e-485a-9785-761f97f2601a-host-slash\") pod \"4ba336c2-0d9e-485a-9785-761f97f2601a\" (UID: \"4ba336c2-0d9e-485a-9785-761f97f2601a\") " Dec 12 15:31:02 crc kubenswrapper[5123]: I1212 15:31:02.178443 5123 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4ba336c2-0d9e-485a-9785-761f97f2601a-host-slash" (OuterVolumeSpecName: "host-slash") pod "4ba336c2-0d9e-485a-9785-761f97f2601a" (UID: "4ba336c2-0d9e-485a-9785-761f97f2601a"). InnerVolumeSpecName "host-slash". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Dec 12 15:31:02 crc kubenswrapper[5123]: I1212 15:31:02.178520 5123 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/4ba336c2-0d9e-485a-9785-761f97f2601a-env-overrides\") pod \"4ba336c2-0d9e-485a-9785-761f97f2601a\" (UID: \"4ba336c2-0d9e-485a-9785-761f97f2601a\") " Dec 12 15:31:02 crc kubenswrapper[5123]: I1212 15:31:02.178712 5123 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/4ba336c2-0d9e-485a-9785-761f97f2601a-ovnkube-script-lib\") pod \"4ba336c2-0d9e-485a-9785-761f97f2601a\" (UID: \"4ba336c2-0d9e-485a-9785-761f97f2601a\") " Dec 12 15:31:02 crc kubenswrapper[5123]: I1212 15:31:02.178771 5123 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/4ba336c2-0d9e-485a-9785-761f97f2601a-host-run-ovn-kubernetes\") pod \"4ba336c2-0d9e-485a-9785-761f97f2601a\" (UID: \"4ba336c2-0d9e-485a-9785-761f97f2601a\") " Dec 12 15:31:02 crc kubenswrapper[5123]: I1212 15:31:02.180142 5123 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4ba336c2-0d9e-485a-9785-761f97f2601a-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "4ba336c2-0d9e-485a-9785-761f97f2601a" (UID: "4ba336c2-0d9e-485a-9785-761f97f2601a"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 12 15:31:02 crc kubenswrapper[5123]: I1212 15:31:02.180721 5123 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4ba336c2-0d9e-485a-9785-761f97f2601a-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "4ba336c2-0d9e-485a-9785-761f97f2601a" (UID: "4ba336c2-0d9e-485a-9785-761f97f2601a"). InnerVolumeSpecName "env-overrides". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 12 15:31:02 crc kubenswrapper[5123]: I1212 15:31:02.180202 5123 reconciler_common.go:299] "Volume detached for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/4ba336c2-0d9e-485a-9785-761f97f2601a-host-slash\") on node \"crc\" DevicePath \"\"" Dec 12 15:31:02 crc kubenswrapper[5123]: I1212 15:31:02.181988 5123 reconciler_common.go:299] "Volume detached for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/4ba336c2-0d9e-485a-9785-761f97f2601a-run-openvswitch\") on node \"crc\" DevicePath \"\"" Dec 12 15:31:02 crc kubenswrapper[5123]: I1212 15:31:02.182017 5123 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4ba336c2-0d9e-485a-9785-761f97f2601a-etc-openvswitch" (OuterVolumeSpecName: "etc-openvswitch") pod "4ba336c2-0d9e-485a-9785-761f97f2601a" (UID: "4ba336c2-0d9e-485a-9785-761f97f2601a"). InnerVolumeSpecName "etc-openvswitch". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Dec 12 15:31:02 crc kubenswrapper[5123]: I1212 15:31:02.182037 5123 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4ba336c2-0d9e-485a-9785-761f97f2601a-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "4ba336c2-0d9e-485a-9785-761f97f2601a" (UID: "4ba336c2-0d9e-485a-9785-761f97f2601a"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 12 15:31:02 crc kubenswrapper[5123]: I1212 15:31:02.182075 5123 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4ba336c2-0d9e-485a-9785-761f97f2601a-host-run-ovn-kubernetes" (OuterVolumeSpecName: "host-run-ovn-kubernetes") pod "4ba336c2-0d9e-485a-9785-761f97f2601a" (UID: "4ba336c2-0d9e-485a-9785-761f97f2601a"). InnerVolumeSpecName "host-run-ovn-kubernetes". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Dec 12 15:31:02 crc kubenswrapper[5123]: I1212 15:31:02.182343 5123 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4ba336c2-0d9e-485a-9785-761f97f2601a-host-var-lib-cni-networks-ovn-kubernetes" (OuterVolumeSpecName: "host-var-lib-cni-networks-ovn-kubernetes") pod "4ba336c2-0d9e-485a-9785-761f97f2601a" (UID: "4ba336c2-0d9e-485a-9785-761f97f2601a"). InnerVolumeSpecName "host-var-lib-cni-networks-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Dec 12 15:31:02 crc kubenswrapper[5123]: I1212 15:31:02.182098 5123 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4ba336c2-0d9e-485a-9785-761f97f2601a-var-lib-openvswitch" (OuterVolumeSpecName: "var-lib-openvswitch") pod "4ba336c2-0d9e-485a-9785-761f97f2601a" (UID: "4ba336c2-0d9e-485a-9785-761f97f2601a"). InnerVolumeSpecName "var-lib-openvswitch". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Dec 12 15:31:02 crc kubenswrapper[5123]: I1212 15:31:02.182429 5123 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4ba336c2-0d9e-485a-9785-761f97f2601a-run-ovn" (OuterVolumeSpecName: "run-ovn") pod "4ba336c2-0d9e-485a-9785-761f97f2601a" (UID: "4ba336c2-0d9e-485a-9785-761f97f2601a"). InnerVolumeSpecName "run-ovn". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Dec 12 15:31:02 crc kubenswrapper[5123]: I1212 15:31:02.182437 5123 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4ba336c2-0d9e-485a-9785-761f97f2601a-host-kubelet" (OuterVolumeSpecName: "host-kubelet") pod "4ba336c2-0d9e-485a-9785-761f97f2601a" (UID: "4ba336c2-0d9e-485a-9785-761f97f2601a"). InnerVolumeSpecName "host-kubelet". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Dec 12 15:31:02 crc kubenswrapper[5123]: I1212 15:31:02.182462 5123 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4ba336c2-0d9e-485a-9785-761f97f2601a-host-cni-bin" (OuterVolumeSpecName: "host-cni-bin") pod "4ba336c2-0d9e-485a-9785-761f97f2601a" (UID: "4ba336c2-0d9e-485a-9785-761f97f2601a"). InnerVolumeSpecName "host-cni-bin". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Dec 12 15:31:02 crc kubenswrapper[5123]: I1212 15:31:02.182485 5123 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4ba336c2-0d9e-485a-9785-761f97f2601a-log-socket" (OuterVolumeSpecName: "log-socket") pod "4ba336c2-0d9e-485a-9785-761f97f2601a" (UID: "4ba336c2-0d9e-485a-9785-761f97f2601a"). InnerVolumeSpecName "log-socket". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Dec 12 15:31:02 crc kubenswrapper[5123]: I1212 15:31:02.182495 5123 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4ba336c2-0d9e-485a-9785-761f97f2601a-node-log" (OuterVolumeSpecName: "node-log") pod "4ba336c2-0d9e-485a-9785-761f97f2601a" (UID: "4ba336c2-0d9e-485a-9785-761f97f2601a"). InnerVolumeSpecName "node-log". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Dec 12 15:31:02 crc kubenswrapper[5123]: I1212 15:31:02.182515 5123 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4ba336c2-0d9e-485a-9785-761f97f2601a-host-run-netns" (OuterVolumeSpecName: "host-run-netns") pod "4ba336c2-0d9e-485a-9785-761f97f2601a" (UID: "4ba336c2-0d9e-485a-9785-761f97f2601a"). InnerVolumeSpecName "host-run-netns". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Dec 12 15:31:02 crc kubenswrapper[5123]: I1212 15:31:02.182532 5123 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4ba336c2-0d9e-485a-9785-761f97f2601a-systemd-units" (OuterVolumeSpecName: "systemd-units") pod "4ba336c2-0d9e-485a-9785-761f97f2601a" (UID: "4ba336c2-0d9e-485a-9785-761f97f2601a"). InnerVolumeSpecName "systemd-units". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Dec 12 15:31:02 crc kubenswrapper[5123]: I1212 15:31:02.182828 5123 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4ba336c2-0d9e-485a-9785-761f97f2601a-host-cni-netd" (OuterVolumeSpecName: "host-cni-netd") pod "4ba336c2-0d9e-485a-9785-761f97f2601a" (UID: "4ba336c2-0d9e-485a-9785-761f97f2601a"). InnerVolumeSpecName "host-cni-netd". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Dec 12 15:31:02 crc kubenswrapper[5123]: I1212 15:31:02.183952 5123 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4ba336c2-0d9e-485a-9785-761f97f2601a-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "4ba336c2-0d9e-485a-9785-761f97f2601a" (UID: "4ba336c2-0d9e-485a-9785-761f97f2601a"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 12 15:31:02 crc kubenswrapper[5123]: I1212 15:31:02.185658 5123 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4ba336c2-0d9e-485a-9785-761f97f2601a-kube-api-access-dfltr" (OuterVolumeSpecName: "kube-api-access-dfltr") pod "4ba336c2-0d9e-485a-9785-761f97f2601a" (UID: "4ba336c2-0d9e-485a-9785-761f97f2601a"). InnerVolumeSpecName "kube-api-access-dfltr". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 12 15:31:02 crc kubenswrapper[5123]: I1212 15:31:02.199395 5123 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4ba336c2-0d9e-485a-9785-761f97f2601a-run-systemd" (OuterVolumeSpecName: "run-systemd") pod "4ba336c2-0d9e-485a-9785-761f97f2601a" (UID: "4ba336c2-0d9e-485a-9785-761f97f2601a"). InnerVolumeSpecName "run-systemd". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Dec 12 15:31:02 crc kubenswrapper[5123]: I1212 15:31:02.284188 5123 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/20f91104-df44-449c-bdfb-6cbe2b5b757b-run-systemd\") pod \"ovnkube-node-l6v7x\" (UID: \"20f91104-df44-449c-bdfb-6cbe2b5b757b\") " pod="openshift-ovn-kubernetes/ovnkube-node-l6v7x" Dec 12 15:31:02 crc kubenswrapper[5123]: I1212 15:31:02.284422 5123 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/20f91104-df44-449c-bdfb-6cbe2b5b757b-etc-openvswitch\") pod \"ovnkube-node-l6v7x\" (UID: \"20f91104-df44-449c-bdfb-6cbe2b5b757b\") " pod="openshift-ovn-kubernetes/ovnkube-node-l6v7x" Dec 12 15:31:02 crc kubenswrapper[5123]: I1212 15:31:02.284453 5123 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/20f91104-df44-449c-bdfb-6cbe2b5b757b-ovnkube-config\") pod \"ovnkube-node-l6v7x\" (UID: \"20f91104-df44-449c-bdfb-6cbe2b5b757b\") " pod="openshift-ovn-kubernetes/ovnkube-node-l6v7x" Dec 12 15:31:02 crc kubenswrapper[5123]: I1212 15:31:02.284498 5123 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/20f91104-df44-449c-bdfb-6cbe2b5b757b-systemd-units\") pod \"ovnkube-node-l6v7x\" 
(UID: \"20f91104-df44-449c-bdfb-6cbe2b5b757b\") " pod="openshift-ovn-kubernetes/ovnkube-node-l6v7x" Dec 12 15:31:02 crc kubenswrapper[5123]: I1212 15:31:02.284533 5123 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/20f91104-df44-449c-bdfb-6cbe2b5b757b-host-slash\") pod \"ovnkube-node-l6v7x\" (UID: \"20f91104-df44-449c-bdfb-6cbe2b5b757b\") " pod="openshift-ovn-kubernetes/ovnkube-node-l6v7x" Dec 12 15:31:02 crc kubenswrapper[5123]: I1212 15:31:02.284556 5123 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/20f91104-df44-449c-bdfb-6cbe2b5b757b-run-openvswitch\") pod \"ovnkube-node-l6v7x\" (UID: \"20f91104-df44-449c-bdfb-6cbe2b5b757b\") " pod="openshift-ovn-kubernetes/ovnkube-node-l6v7x" Dec 12 15:31:02 crc kubenswrapper[5123]: I1212 15:31:02.284596 5123 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/20f91104-df44-449c-bdfb-6cbe2b5b757b-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-l6v7x\" (UID: \"20f91104-df44-449c-bdfb-6cbe2b5b757b\") " pod="openshift-ovn-kubernetes/ovnkube-node-l6v7x" Dec 12 15:31:02 crc kubenswrapper[5123]: I1212 15:31:02.284635 5123 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/20f91104-df44-449c-bdfb-6cbe2b5b757b-run-ovn\") pod \"ovnkube-node-l6v7x\" (UID: \"20f91104-df44-449c-bdfb-6cbe2b5b757b\") " pod="openshift-ovn-kubernetes/ovnkube-node-l6v7x" Dec 12 15:31:02 crc kubenswrapper[5123]: I1212 15:31:02.284667 5123 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mb5r9\" (UniqueName: 
\"kubernetes.io/projected/20f91104-df44-449c-bdfb-6cbe2b5b757b-kube-api-access-mb5r9\") pod \"ovnkube-node-l6v7x\" (UID: \"20f91104-df44-449c-bdfb-6cbe2b5b757b\") " pod="openshift-ovn-kubernetes/ovnkube-node-l6v7x" Dec 12 15:31:02 crc kubenswrapper[5123]: I1212 15:31:02.284703 5123 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/20f91104-df44-449c-bdfb-6cbe2b5b757b-node-log\") pod \"ovnkube-node-l6v7x\" (UID: \"20f91104-df44-449c-bdfb-6cbe2b5b757b\") " pod="openshift-ovn-kubernetes/ovnkube-node-l6v7x" Dec 12 15:31:02 crc kubenswrapper[5123]: I1212 15:31:02.284744 5123 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/20f91104-df44-449c-bdfb-6cbe2b5b757b-host-cni-netd\") pod \"ovnkube-node-l6v7x\" (UID: \"20f91104-df44-449c-bdfb-6cbe2b5b757b\") " pod="openshift-ovn-kubernetes/ovnkube-node-l6v7x" Dec 12 15:31:02 crc kubenswrapper[5123]: I1212 15:31:02.284887 5123 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/20f91104-df44-449c-bdfb-6cbe2b5b757b-host-kubelet\") pod \"ovnkube-node-l6v7x\" (UID: \"20f91104-df44-449c-bdfb-6cbe2b5b757b\") " pod="openshift-ovn-kubernetes/ovnkube-node-l6v7x" Dec 12 15:31:02 crc kubenswrapper[5123]: I1212 15:31:02.285095 5123 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/20f91104-df44-449c-bdfb-6cbe2b5b757b-host-cni-bin\") pod \"ovnkube-node-l6v7x\" (UID: \"20f91104-df44-449c-bdfb-6cbe2b5b757b\") " pod="openshift-ovn-kubernetes/ovnkube-node-l6v7x" Dec 12 15:31:02 crc kubenswrapper[5123]: I1212 15:31:02.285173 5123 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: 
\"kubernetes.io/configmap/20f91104-df44-449c-bdfb-6cbe2b5b757b-env-overrides\") pod \"ovnkube-node-l6v7x\" (UID: \"20f91104-df44-449c-bdfb-6cbe2b5b757b\") " pod="openshift-ovn-kubernetes/ovnkube-node-l6v7x" Dec 12 15:31:02 crc kubenswrapper[5123]: I1212 15:31:02.285290 5123 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/20f91104-df44-449c-bdfb-6cbe2b5b757b-host-run-netns\") pod \"ovnkube-node-l6v7x\" (UID: \"20f91104-df44-449c-bdfb-6cbe2b5b757b\") " pod="openshift-ovn-kubernetes/ovnkube-node-l6v7x" Dec 12 15:31:02 crc kubenswrapper[5123]: I1212 15:31:02.285364 5123 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/20f91104-df44-449c-bdfb-6cbe2b5b757b-ovn-node-metrics-cert\") pod \"ovnkube-node-l6v7x\" (UID: \"20f91104-df44-449c-bdfb-6cbe2b5b757b\") " pod="openshift-ovn-kubernetes/ovnkube-node-l6v7x" Dec 12 15:31:02 crc kubenswrapper[5123]: I1212 15:31:02.285383 5123 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/20f91104-df44-449c-bdfb-6cbe2b5b757b-ovnkube-script-lib\") pod \"ovnkube-node-l6v7x\" (UID: \"20f91104-df44-449c-bdfb-6cbe2b5b757b\") " pod="openshift-ovn-kubernetes/ovnkube-node-l6v7x" Dec 12 15:31:02 crc kubenswrapper[5123]: I1212 15:31:02.285505 5123 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/20f91104-df44-449c-bdfb-6cbe2b5b757b-host-run-ovn-kubernetes\") pod \"ovnkube-node-l6v7x\" (UID: \"20f91104-df44-449c-bdfb-6cbe2b5b757b\") " pod="openshift-ovn-kubernetes/ovnkube-node-l6v7x" Dec 12 15:31:02 crc kubenswrapper[5123]: I1212 15:31:02.285675 5123 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/20f91104-df44-449c-bdfb-6cbe2b5b757b-log-socket\") pod \"ovnkube-node-l6v7x\" (UID: \"20f91104-df44-449c-bdfb-6cbe2b5b757b\") " pod="openshift-ovn-kubernetes/ovnkube-node-l6v7x" Dec 12 15:31:02 crc kubenswrapper[5123]: I1212 15:31:02.285727 5123 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/20f91104-df44-449c-bdfb-6cbe2b5b757b-var-lib-openvswitch\") pod \"ovnkube-node-l6v7x\" (UID: \"20f91104-df44-449c-bdfb-6cbe2b5b757b\") " pod="openshift-ovn-kubernetes/ovnkube-node-l6v7x" Dec 12 15:31:02 crc kubenswrapper[5123]: I1212 15:31:02.285942 5123 reconciler_common.go:299] "Volume detached for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/4ba336c2-0d9e-485a-9785-761f97f2601a-host-cni-bin\") on node \"crc\" DevicePath \"\"" Dec 12 15:31:02 crc kubenswrapper[5123]: I1212 15:31:02.285958 5123 reconciler_common.go:299] "Volume detached for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/4ba336c2-0d9e-485a-9785-761f97f2601a-node-log\") on node \"crc\" DevicePath \"\"" Dec 12 15:31:02 crc kubenswrapper[5123]: I1212 15:31:02.285968 5123 reconciler_common.go:299] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/4ba336c2-0d9e-485a-9785-761f97f2601a-env-overrides\") on node \"crc\" DevicePath \"\"" Dec 12 15:31:02 crc kubenswrapper[5123]: I1212 15:31:02.285980 5123 reconciler_common.go:299] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/4ba336c2-0d9e-485a-9785-761f97f2601a-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Dec 12 15:31:02 crc kubenswrapper[5123]: I1212 15:31:02.286005 5123 reconciler_common.go:299] "Volume detached for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/4ba336c2-0d9e-485a-9785-761f97f2601a-host-run-ovn-kubernetes\") on node \"crc\" 
DevicePath \"\"" Dec 12 15:31:02 crc kubenswrapper[5123]: I1212 15:31:02.286029 5123 reconciler_common.go:299] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/4ba336c2-0d9e-485a-9785-761f97f2601a-ovnkube-config\") on node \"crc\" DevicePath \"\"" Dec 12 15:31:02 crc kubenswrapper[5123]: I1212 15:31:02.286037 5123 reconciler_common.go:299] "Volume detached for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/4ba336c2-0d9e-485a-9785-761f97f2601a-run-systemd\") on node \"crc\" DevicePath \"\"" Dec 12 15:31:02 crc kubenswrapper[5123]: I1212 15:31:02.286047 5123 reconciler_common.go:299] "Volume detached for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/4ba336c2-0d9e-485a-9785-761f97f2601a-var-lib-openvswitch\") on node \"crc\" DevicePath \"\"" Dec 12 15:31:02 crc kubenswrapper[5123]: I1212 15:31:02.286056 5123 reconciler_common.go:299] "Volume detached for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/4ba336c2-0d9e-485a-9785-761f97f2601a-etc-openvswitch\") on node \"crc\" DevicePath \"\"" Dec 12 15:31:02 crc kubenswrapper[5123]: I1212 15:31:02.286064 5123 reconciler_common.go:299] "Volume detached for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/4ba336c2-0d9e-485a-9785-761f97f2601a-host-cni-netd\") on node \"crc\" DevicePath \"\"" Dec 12 15:31:02 crc kubenswrapper[5123]: I1212 15:31:02.286089 5123 reconciler_common.go:299] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/4ba336c2-0d9e-485a-9785-761f97f2601a-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Dec 12 15:31:02 crc kubenswrapper[5123]: I1212 15:31:02.286098 5123 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-dfltr\" (UniqueName: \"kubernetes.io/projected/4ba336c2-0d9e-485a-9785-761f97f2601a-kube-api-access-dfltr\") on node \"crc\" DevicePath \"\"" Dec 12 15:31:02 crc kubenswrapper[5123]: I1212 15:31:02.286109 5123 
reconciler_common.go:299] "Volume detached for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/4ba336c2-0d9e-485a-9785-761f97f2601a-host-var-lib-cni-networks-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Dec 12 15:31:02 crc kubenswrapper[5123]: I1212 15:31:02.286128 5123 reconciler_common.go:299] "Volume detached for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/4ba336c2-0d9e-485a-9785-761f97f2601a-log-socket\") on node \"crc\" DevicePath \"\"" Dec 12 15:31:02 crc kubenswrapper[5123]: I1212 15:31:02.286158 5123 reconciler_common.go:299] "Volume detached for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/4ba336c2-0d9e-485a-9785-761f97f2601a-host-run-netns\") on node \"crc\" DevicePath \"\"" Dec 12 15:31:02 crc kubenswrapper[5123]: I1212 15:31:02.286168 5123 reconciler_common.go:299] "Volume detached for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/4ba336c2-0d9e-485a-9785-761f97f2601a-systemd-units\") on node \"crc\" DevicePath \"\"" Dec 12 15:31:02 crc kubenswrapper[5123]: I1212 15:31:02.286177 5123 reconciler_common.go:299] "Volume detached for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/4ba336c2-0d9e-485a-9785-761f97f2601a-run-ovn\") on node \"crc\" DevicePath \"\"" Dec 12 15:31:02 crc kubenswrapper[5123]: I1212 15:31:02.286185 5123 reconciler_common.go:299] "Volume detached for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/4ba336c2-0d9e-485a-9785-761f97f2601a-host-kubelet\") on node \"crc\" DevicePath \"\"" Dec 12 15:31:02 crc kubenswrapper[5123]: I1212 15:31:02.387974 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/20f91104-df44-449c-bdfb-6cbe2b5b757b-run-openvswitch\") pod \"ovnkube-node-l6v7x\" (UID: \"20f91104-df44-449c-bdfb-6cbe2b5b757b\") " pod="openshift-ovn-kubernetes/ovnkube-node-l6v7x" Dec 12 15:31:02 crc kubenswrapper[5123]: I1212 
15:31:02.388062 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/20f91104-df44-449c-bdfb-6cbe2b5b757b-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-l6v7x\" (UID: \"20f91104-df44-449c-bdfb-6cbe2b5b757b\") " pod="openshift-ovn-kubernetes/ovnkube-node-l6v7x" Dec 12 15:31:02 crc kubenswrapper[5123]: I1212 15:31:02.388096 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/20f91104-df44-449c-bdfb-6cbe2b5b757b-run-ovn\") pod \"ovnkube-node-l6v7x\" (UID: \"20f91104-df44-449c-bdfb-6cbe2b5b757b\") " pod="openshift-ovn-kubernetes/ovnkube-node-l6v7x" Dec 12 15:31:02 crc kubenswrapper[5123]: I1212 15:31:02.388118 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-mb5r9\" (UniqueName: \"kubernetes.io/projected/20f91104-df44-449c-bdfb-6cbe2b5b757b-kube-api-access-mb5r9\") pod \"ovnkube-node-l6v7x\" (UID: \"20f91104-df44-449c-bdfb-6cbe2b5b757b\") " pod="openshift-ovn-kubernetes/ovnkube-node-l6v7x" Dec 12 15:31:02 crc kubenswrapper[5123]: I1212 15:31:02.388143 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/20f91104-df44-449c-bdfb-6cbe2b5b757b-node-log\") pod \"ovnkube-node-l6v7x\" (UID: \"20f91104-df44-449c-bdfb-6cbe2b5b757b\") " pod="openshift-ovn-kubernetes/ovnkube-node-l6v7x" Dec 12 15:31:02 crc kubenswrapper[5123]: I1212 15:31:02.388328 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/20f91104-df44-449c-bdfb-6cbe2b5b757b-host-cni-netd\") pod \"ovnkube-node-l6v7x\" (UID: \"20f91104-df44-449c-bdfb-6cbe2b5b757b\") " pod="openshift-ovn-kubernetes/ovnkube-node-l6v7x" Dec 12 15:31:02 crc kubenswrapper[5123]: I1212 15:31:02.388391 5123 
operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/20f91104-df44-449c-bdfb-6cbe2b5b757b-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-l6v7x\" (UID: \"20f91104-df44-449c-bdfb-6cbe2b5b757b\") " pod="openshift-ovn-kubernetes/ovnkube-node-l6v7x" Dec 12 15:31:02 crc kubenswrapper[5123]: I1212 15:31:02.388473 5123 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/20f91104-df44-449c-bdfb-6cbe2b5b757b-host-kubelet\") pod \"ovnkube-node-l6v7x\" (UID: \"20f91104-df44-449c-bdfb-6cbe2b5b757b\") " pod="openshift-ovn-kubernetes/ovnkube-node-l6v7x" Dec 12 15:31:02 crc kubenswrapper[5123]: I1212 15:31:02.388421 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/20f91104-df44-449c-bdfb-6cbe2b5b757b-host-kubelet\") pod \"ovnkube-node-l6v7x\" (UID: \"20f91104-df44-449c-bdfb-6cbe2b5b757b\") " pod="openshift-ovn-kubernetes/ovnkube-node-l6v7x" Dec 12 15:31:02 crc kubenswrapper[5123]: I1212 15:31:02.388523 5123 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/20f91104-df44-449c-bdfb-6cbe2b5b757b-node-log\") pod \"ovnkube-node-l6v7x\" (UID: \"20f91104-df44-449c-bdfb-6cbe2b5b757b\") " pod="openshift-ovn-kubernetes/ovnkube-node-l6v7x" Dec 12 15:31:02 crc kubenswrapper[5123]: I1212 15:31:02.388547 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/20f91104-df44-449c-bdfb-6cbe2b5b757b-host-cni-bin\") pod \"ovnkube-node-l6v7x\" (UID: \"20f91104-df44-449c-bdfb-6cbe2b5b757b\") " pod="openshift-ovn-kubernetes/ovnkube-node-l6v7x" Dec 12 15:31:02 crc kubenswrapper[5123]: I1212 15:31:02.388574 5123 operation_generator.go:615] "MountVolume.SetUp succeeded for volume 
\"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/20f91104-df44-449c-bdfb-6cbe2b5b757b-host-cni-netd\") pod \"ovnkube-node-l6v7x\" (UID: \"20f91104-df44-449c-bdfb-6cbe2b5b757b\") " pod="openshift-ovn-kubernetes/ovnkube-node-l6v7x" Dec 12 15:31:02 crc kubenswrapper[5123]: I1212 15:31:02.388553 5123 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/20f91104-df44-449c-bdfb-6cbe2b5b757b-run-ovn\") pod \"ovnkube-node-l6v7x\" (UID: \"20f91104-df44-449c-bdfb-6cbe2b5b757b\") " pod="openshift-ovn-kubernetes/ovnkube-node-l6v7x" Dec 12 15:31:02 crc kubenswrapper[5123]: I1212 15:31:02.388613 5123 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/20f91104-df44-449c-bdfb-6cbe2b5b757b-host-cni-bin\") pod \"ovnkube-node-l6v7x\" (UID: \"20f91104-df44-449c-bdfb-6cbe2b5b757b\") " pod="openshift-ovn-kubernetes/ovnkube-node-l6v7x" Dec 12 15:31:02 crc kubenswrapper[5123]: I1212 15:31:02.388652 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/20f91104-df44-449c-bdfb-6cbe2b5b757b-env-overrides\") pod \"ovnkube-node-l6v7x\" (UID: \"20f91104-df44-449c-bdfb-6cbe2b5b757b\") " pod="openshift-ovn-kubernetes/ovnkube-node-l6v7x" Dec 12 15:31:02 crc kubenswrapper[5123]: I1212 15:31:02.388706 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/20f91104-df44-449c-bdfb-6cbe2b5b757b-host-run-netns\") pod \"ovnkube-node-l6v7x\" (UID: \"20f91104-df44-449c-bdfb-6cbe2b5b757b\") " pod="openshift-ovn-kubernetes/ovnkube-node-l6v7x" Dec 12 15:31:02 crc kubenswrapper[5123]: I1212 15:31:02.388764 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: 
\"kubernetes.io/secret/20f91104-df44-449c-bdfb-6cbe2b5b757b-ovn-node-metrics-cert\") pod \"ovnkube-node-l6v7x\" (UID: \"20f91104-df44-449c-bdfb-6cbe2b5b757b\") " pod="openshift-ovn-kubernetes/ovnkube-node-l6v7x" Dec 12 15:31:02 crc kubenswrapper[5123]: I1212 15:31:02.388935 5123 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/20f91104-df44-449c-bdfb-6cbe2b5b757b-run-openvswitch\") pod \"ovnkube-node-l6v7x\" (UID: \"20f91104-df44-449c-bdfb-6cbe2b5b757b\") " pod="openshift-ovn-kubernetes/ovnkube-node-l6v7x" Dec 12 15:31:02 crc kubenswrapper[5123]: I1212 15:31:02.389051 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/20f91104-df44-449c-bdfb-6cbe2b5b757b-ovnkube-script-lib\") pod \"ovnkube-node-l6v7x\" (UID: \"20f91104-df44-449c-bdfb-6cbe2b5b757b\") " pod="openshift-ovn-kubernetes/ovnkube-node-l6v7x" Dec 12 15:31:02 crc kubenswrapper[5123]: I1212 15:31:02.389137 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/20f91104-df44-449c-bdfb-6cbe2b5b757b-host-run-ovn-kubernetes\") pod \"ovnkube-node-l6v7x\" (UID: \"20f91104-df44-449c-bdfb-6cbe2b5b757b\") " pod="openshift-ovn-kubernetes/ovnkube-node-l6v7x" Dec 12 15:31:02 crc kubenswrapper[5123]: I1212 15:31:02.389241 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/20f91104-df44-449c-bdfb-6cbe2b5b757b-log-socket\") pod \"ovnkube-node-l6v7x\" (UID: \"20f91104-df44-449c-bdfb-6cbe2b5b757b\") " pod="openshift-ovn-kubernetes/ovnkube-node-l6v7x" Dec 12 15:31:02 crc kubenswrapper[5123]: I1212 15:31:02.389318 5123 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: 
\"kubernetes.io/host-path/20f91104-df44-449c-bdfb-6cbe2b5b757b-host-run-ovn-kubernetes\") pod \"ovnkube-node-l6v7x\" (UID: \"20f91104-df44-449c-bdfb-6cbe2b5b757b\") " pod="openshift-ovn-kubernetes/ovnkube-node-l6v7x" Dec 12 15:31:02 crc kubenswrapper[5123]: I1212 15:31:02.389335 5123 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/20f91104-df44-449c-bdfb-6cbe2b5b757b-log-socket\") pod \"ovnkube-node-l6v7x\" (UID: \"20f91104-df44-449c-bdfb-6cbe2b5b757b\") " pod="openshift-ovn-kubernetes/ovnkube-node-l6v7x" Dec 12 15:31:02 crc kubenswrapper[5123]: I1212 15:31:02.389360 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/20f91104-df44-449c-bdfb-6cbe2b5b757b-var-lib-openvswitch\") pod \"ovnkube-node-l6v7x\" (UID: \"20f91104-df44-449c-bdfb-6cbe2b5b757b\") " pod="openshift-ovn-kubernetes/ovnkube-node-l6v7x" Dec 12 15:31:02 crc kubenswrapper[5123]: I1212 15:31:02.389433 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/20f91104-df44-449c-bdfb-6cbe2b5b757b-run-systemd\") pod \"ovnkube-node-l6v7x\" (UID: \"20f91104-df44-449c-bdfb-6cbe2b5b757b\") " pod="openshift-ovn-kubernetes/ovnkube-node-l6v7x" Dec 12 15:31:02 crc kubenswrapper[5123]: I1212 15:31:02.389438 5123 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/20f91104-df44-449c-bdfb-6cbe2b5b757b-var-lib-openvswitch\") pod \"ovnkube-node-l6v7x\" (UID: \"20f91104-df44-449c-bdfb-6cbe2b5b757b\") " pod="openshift-ovn-kubernetes/ovnkube-node-l6v7x" Dec 12 15:31:02 crc kubenswrapper[5123]: I1212 15:31:02.389458 5123 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/20f91104-df44-449c-bdfb-6cbe2b5b757b-host-run-netns\") pod 
\"ovnkube-node-l6v7x\" (UID: \"20f91104-df44-449c-bdfb-6cbe2b5b757b\") " pod="openshift-ovn-kubernetes/ovnkube-node-l6v7x" Dec 12 15:31:02 crc kubenswrapper[5123]: I1212 15:31:02.389466 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/20f91104-df44-449c-bdfb-6cbe2b5b757b-etc-openvswitch\") pod \"ovnkube-node-l6v7x\" (UID: \"20f91104-df44-449c-bdfb-6cbe2b5b757b\") " pod="openshift-ovn-kubernetes/ovnkube-node-l6v7x" Dec 12 15:31:02 crc kubenswrapper[5123]: I1212 15:31:02.389500 5123 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/20f91104-df44-449c-bdfb-6cbe2b5b757b-etc-openvswitch\") pod \"ovnkube-node-l6v7x\" (UID: \"20f91104-df44-449c-bdfb-6cbe2b5b757b\") " pod="openshift-ovn-kubernetes/ovnkube-node-l6v7x" Dec 12 15:31:02 crc kubenswrapper[5123]: I1212 15:31:02.389500 5123 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/20f91104-df44-449c-bdfb-6cbe2b5b757b-run-systemd\") pod \"ovnkube-node-l6v7x\" (UID: \"20f91104-df44-449c-bdfb-6cbe2b5b757b\") " pod="openshift-ovn-kubernetes/ovnkube-node-l6v7x" Dec 12 15:31:02 crc kubenswrapper[5123]: I1212 15:31:02.389523 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/20f91104-df44-449c-bdfb-6cbe2b5b757b-ovnkube-config\") pod \"ovnkube-node-l6v7x\" (UID: \"20f91104-df44-449c-bdfb-6cbe2b5b757b\") " pod="openshift-ovn-kubernetes/ovnkube-node-l6v7x" Dec 12 15:31:02 crc kubenswrapper[5123]: I1212 15:31:02.389595 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/20f91104-df44-449c-bdfb-6cbe2b5b757b-systemd-units\") pod \"ovnkube-node-l6v7x\" (UID: \"20f91104-df44-449c-bdfb-6cbe2b5b757b\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-l6v7x" Dec 12 15:31:02 crc kubenswrapper[5123]: I1212 15:31:02.389682 5123 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/20f91104-df44-449c-bdfb-6cbe2b5b757b-systemd-units\") pod \"ovnkube-node-l6v7x\" (UID: \"20f91104-df44-449c-bdfb-6cbe2b5b757b\") " pod="openshift-ovn-kubernetes/ovnkube-node-l6v7x" Dec 12 15:31:02 crc kubenswrapper[5123]: I1212 15:31:02.389750 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/20f91104-df44-449c-bdfb-6cbe2b5b757b-host-slash\") pod \"ovnkube-node-l6v7x\" (UID: \"20f91104-df44-449c-bdfb-6cbe2b5b757b\") " pod="openshift-ovn-kubernetes/ovnkube-node-l6v7x" Dec 12 15:31:02 crc kubenswrapper[5123]: I1212 15:31:02.389913 5123 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/20f91104-df44-449c-bdfb-6cbe2b5b757b-host-slash\") pod \"ovnkube-node-l6v7x\" (UID: \"20f91104-df44-449c-bdfb-6cbe2b5b757b\") " pod="openshift-ovn-kubernetes/ovnkube-node-l6v7x" Dec 12 15:31:02 crc kubenswrapper[5123]: I1212 15:31:02.390016 5123 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/20f91104-df44-449c-bdfb-6cbe2b5b757b-env-overrides\") pod \"ovnkube-node-l6v7x\" (UID: \"20f91104-df44-449c-bdfb-6cbe2b5b757b\") " pod="openshift-ovn-kubernetes/ovnkube-node-l6v7x" Dec 12 15:31:02 crc kubenswrapper[5123]: I1212 15:31:02.390450 5123 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/20f91104-df44-449c-bdfb-6cbe2b5b757b-ovnkube-config\") pod \"ovnkube-node-l6v7x\" (UID: \"20f91104-df44-449c-bdfb-6cbe2b5b757b\") " pod="openshift-ovn-kubernetes/ovnkube-node-l6v7x" Dec 12 15:31:02 crc kubenswrapper[5123]: I1212 15:31:02.390978 5123 
operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/20f91104-df44-449c-bdfb-6cbe2b5b757b-ovnkube-script-lib\") pod \"ovnkube-node-l6v7x\" (UID: \"20f91104-df44-449c-bdfb-6cbe2b5b757b\") " pod="openshift-ovn-kubernetes/ovnkube-node-l6v7x" Dec 12 15:31:02 crc kubenswrapper[5123]: I1212 15:31:02.395323 5123 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/20f91104-df44-449c-bdfb-6cbe2b5b757b-ovn-node-metrics-cert\") pod \"ovnkube-node-l6v7x\" (UID: \"20f91104-df44-449c-bdfb-6cbe2b5b757b\") " pod="openshift-ovn-kubernetes/ovnkube-node-l6v7x" Dec 12 15:31:02 crc kubenswrapper[5123]: I1212 15:31:02.409332 5123 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-mb5r9\" (UniqueName: \"kubernetes.io/projected/20f91104-df44-449c-bdfb-6cbe2b5b757b-kube-api-access-mb5r9\") pod \"ovnkube-node-l6v7x\" (UID: \"20f91104-df44-449c-bdfb-6cbe2b5b757b\") " pod="openshift-ovn-kubernetes/ovnkube-node-l6v7x" Dec 12 15:31:02 crc kubenswrapper[5123]: I1212 15:31:02.516952 5123 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-l6v7x" Dec 12 15:31:02 crc kubenswrapper[5123]: W1212 15:31:02.536431 5123 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod20f91104_df44_449c_bdfb_6cbe2b5b757b.slice/crio-a7bd88d32e805ff63aaf84b36f982a9ecb30ed6b1c027e8b00986734ba050c37 WatchSource:0}: Error finding container a7bd88d32e805ff63aaf84b36f982a9ecb30ed6b1c027e8b00986734ba050c37: Status 404 returned error can't find the container with id a7bd88d32e805ff63aaf84b36f982a9ecb30ed6b1c027e8b00986734ba050c37 Dec 12 15:31:03 crc kubenswrapper[5123]: I1212 15:31:03.141709 5123 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-27rm2_3ef15793-fa49-4c37-a355-d4573977e301/kube-multus/0.log" Dec 12 15:31:03 crc kubenswrapper[5123]: I1212 15:31:03.141939 5123 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-27rm2" event={"ID":"3ef15793-fa49-4c37-a355-d4573977e301","Type":"ContainerStarted","Data":"e528317ac76742fd05a2d5712168bb675ee554565dc8e2b2b6a43236b102b576"} Dec 12 15:31:03 crc kubenswrapper[5123]: I1212 15:31:03.146276 5123 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-c7cpz_4ba336c2-0d9e-485a-9785-761f97f2601a/ovn-acl-logging/0.log" Dec 12 15:31:03 crc kubenswrapper[5123]: I1212 15:31:03.147436 5123 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-c7cpz_4ba336c2-0d9e-485a-9785-761f97f2601a/ovn-controller/0.log" Dec 12 15:31:03 crc kubenswrapper[5123]: I1212 15:31:03.148095 5123 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-c7cpz" event={"ID":"4ba336c2-0d9e-485a-9785-761f97f2601a","Type":"ContainerDied","Data":"82ac0974ca189f76f1e155d7fbd7c6a6bf806727b3551f8a8457694ea6b14f51"} Dec 12 15:31:03 crc kubenswrapper[5123]: I1212 15:31:03.148154 5123 scope.go:117] 
"RemoveContainer" containerID="75a1894691d0a31baf40f0164ded851c4ee47384a27e045ba24aa78b7377848f" Dec 12 15:31:03 crc kubenswrapper[5123]: I1212 15:31:03.148409 5123 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-c7cpz" Dec 12 15:31:03 crc kubenswrapper[5123]: I1212 15:31:03.164042 5123 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-ljrtg" event={"ID":"2dd48b16-f0e5-4dd8-ba58-d7b1f51ac78c","Type":"ContainerStarted","Data":"da8a0408078f28ffb958ab6f343b330cdd58216ee1d3b22d9a72aaa6097364c0"} Dec 12 15:31:03 crc kubenswrapper[5123]: I1212 15:31:03.164100 5123 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-ljrtg" event={"ID":"2dd48b16-f0e5-4dd8-ba58-d7b1f51ac78c","Type":"ContainerStarted","Data":"06d3013975c81b3ef5b55de7d0c2a606e7a8bf2bcc326d110b377e3913b09501"} Dec 12 15:31:03 crc kubenswrapper[5123]: I1212 15:31:03.168604 5123 generic.go:358] "Generic (PLEG): container finished" podID="20f91104-df44-449c-bdfb-6cbe2b5b757b" containerID="43e53db436db5dac084653a7ff227ae488d33b40509c2568567a2a838c1e783c" exitCode=0 Dec 12 15:31:03 crc kubenswrapper[5123]: I1212 15:31:03.168740 5123 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-l6v7x" event={"ID":"20f91104-df44-449c-bdfb-6cbe2b5b757b","Type":"ContainerDied","Data":"43e53db436db5dac084653a7ff227ae488d33b40509c2568567a2a838c1e783c"} Dec 12 15:31:03 crc kubenswrapper[5123]: I1212 15:31:03.168767 5123 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-l6v7x" event={"ID":"20f91104-df44-449c-bdfb-6cbe2b5b757b","Type":"ContainerStarted","Data":"a7bd88d32e805ff63aaf84b36f982a9ecb30ed6b1c027e8b00986734ba050c37"} Dec 12 15:31:03 crc kubenswrapper[5123]: I1212 15:31:03.202055 5123 scope.go:117] "RemoveContainer" 
containerID="4df8f61665a45afd71d2b5f4b119db8cb83b99a47388b68baf7e27ed2c4f2c9f" Dec 12 15:31:03 crc kubenswrapper[5123]: I1212 15:31:03.230248 5123 scope.go:117] "RemoveContainer" containerID="13ba6a096a2ba4b0b9afbd50d11eba0d8cdb25e23d1b4b26e18c3201ccf516db" Dec 12 15:31:03 crc kubenswrapper[5123]: I1212 15:31:03.243292 5123 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-ljrtg" podStartSLOduration=2.243207742 podStartE2EDuration="2.243207742s" podCreationTimestamp="2025-12-12 15:31:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 15:31:03.237495499 +0000 UTC m=+692.047448040" watchObservedRunningTime="2025-12-12 15:31:03.243207742 +0000 UTC m=+692.053160243" Dec 12 15:31:03 crc kubenswrapper[5123]: I1212 15:31:03.269968 5123 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-c7cpz"] Dec 12 15:31:03 crc kubenswrapper[5123]: I1212 15:31:03.279413 5123 scope.go:117] "RemoveContainer" containerID="142c04aec6ca17e608747ac86af8a88d24797e0d10c03531d9e48b83cfb55471" Dec 12 15:31:03 crc kubenswrapper[5123]: I1212 15:31:03.285388 5123 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-c7cpz"] Dec 12 15:31:03 crc kubenswrapper[5123]: I1212 15:31:03.308414 5123 scope.go:117] "RemoveContainer" containerID="d44176673eaef06ae636c84b82c9cab9190707d7e960c7579e3c7f42c8738910" Dec 12 15:31:03 crc kubenswrapper[5123]: I1212 15:31:03.324558 5123 scope.go:117] "RemoveContainer" containerID="002ec6cbd941ba0a26b390b7c87f1fcca86b58149647a279144bdf9a48aba978" Dec 12 15:31:03 crc kubenswrapper[5123]: I1212 15:31:03.344425 5123 scope.go:117] "RemoveContainer" containerID="a9267d2d3e119629cfe5f4eb756093064b1a946d674358269b43bce2e3e9c4bb" Dec 12 15:31:03 crc kubenswrapper[5123]: I1212 15:31:03.362429 5123 scope.go:117] 
"RemoveContainer" containerID="9359cde708bbd01b68d54b173f27267dbc5df381ffb70ca8189f8f19b2fb3bbc" Dec 12 15:31:03 crc kubenswrapper[5123]: I1212 15:31:03.385447 5123 scope.go:117] "RemoveContainer" containerID="55fa7b3e014bc9c796e0cba7b0e5a3ec4c3cf5650a0149ba77bf1970705c94a6" Dec 12 15:31:03 crc kubenswrapper[5123]: I1212 15:31:03.649659 5123 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2d82c231-80e9-4268-8ec7-1ae260abe06c" path="/var/lib/kubelet/pods/2d82c231-80e9-4268-8ec7-1ae260abe06c/volumes" Dec 12 15:31:03 crc kubenswrapper[5123]: I1212 15:31:03.651320 5123 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4ba336c2-0d9e-485a-9785-761f97f2601a" path="/var/lib/kubelet/pods/4ba336c2-0d9e-485a-9785-761f97f2601a/volumes" Dec 12 15:31:04 crc kubenswrapper[5123]: I1212 15:31:04.182995 5123 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-l6v7x" event={"ID":"20f91104-df44-449c-bdfb-6cbe2b5b757b","Type":"ContainerStarted","Data":"8b173e13be9b57a1573fdab9e88f5a4da4ba3d0cfde5e989472fa0d5503feb99"} Dec 12 15:31:04 crc kubenswrapper[5123]: I1212 15:31:04.183067 5123 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-l6v7x" event={"ID":"20f91104-df44-449c-bdfb-6cbe2b5b757b","Type":"ContainerStarted","Data":"f18418083eaa151f20b65e948e7f888e5b2d668f016e34089f5155ac9ed78ae2"} Dec 12 15:31:04 crc kubenswrapper[5123]: I1212 15:31:04.183088 5123 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-l6v7x" event={"ID":"20f91104-df44-449c-bdfb-6cbe2b5b757b","Type":"ContainerStarted","Data":"3f9626d9858be1798b962882ecce9d7f09438c38c041426b3d483d5e47752724"} Dec 12 15:31:04 crc kubenswrapper[5123]: I1212 15:31:04.183101 5123 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-l6v7x" 
event={"ID":"20f91104-df44-449c-bdfb-6cbe2b5b757b","Type":"ContainerStarted","Data":"34ea3d862bee6f7c30cf1e2706b8873cfbd708bceb418cbec757b2cb28d33358"} Dec 12 15:31:04 crc kubenswrapper[5123]: I1212 15:31:04.183112 5123 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-l6v7x" event={"ID":"20f91104-df44-449c-bdfb-6cbe2b5b757b","Type":"ContainerStarted","Data":"47739f1b962266398daded4eb804fe56485b822b673213bcc9838c110d1614d0"} Dec 12 15:31:05 crc kubenswrapper[5123]: I1212 15:31:05.194513 5123 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-l6v7x" event={"ID":"20f91104-df44-449c-bdfb-6cbe2b5b757b","Type":"ContainerStarted","Data":"683307e087bdd62d088b79f01575ef083fcb7c4242e56432ff90ba2736a00507"} Dec 12 15:31:07 crc kubenswrapper[5123]: I1212 15:31:07.215694 5123 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-l6v7x" event={"ID":"20f91104-df44-449c-bdfb-6cbe2b5b757b","Type":"ContainerStarted","Data":"f0b8a449601eff405ce350f73f5233a378818765d0c5643a999ed1aa959a1e1c"} Dec 12 15:31:10 crc kubenswrapper[5123]: I1212 15:31:10.266981 5123 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-l6v7x" event={"ID":"20f91104-df44-449c-bdfb-6cbe2b5b757b","Type":"ContainerStarted","Data":"be42b669b7f1b49646b89b428e9c5ad3714d17ec9011301b81e044dd6d05c41f"} Dec 12 15:31:10 crc kubenswrapper[5123]: I1212 15:31:10.267807 5123 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-ovn-kubernetes/ovnkube-node-l6v7x" Dec 12 15:31:10 crc kubenswrapper[5123]: I1212 15:31:10.267830 5123 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-ovn-kubernetes/ovnkube-node-l6v7x" Dec 12 15:31:10 crc kubenswrapper[5123]: I1212 15:31:10.267840 5123 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" 
pod="openshift-ovn-kubernetes/ovnkube-node-l6v7x" Dec 12 15:31:10 crc kubenswrapper[5123]: I1212 15:31:10.302290 5123 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-l6v7x" Dec 12 15:31:10 crc kubenswrapper[5123]: I1212 15:31:10.306041 5123 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-l6v7x" Dec 12 15:31:10 crc kubenswrapper[5123]: I1212 15:31:10.311149 5123 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-l6v7x" podStartSLOduration=8.311114518 podStartE2EDuration="8.311114518s" podCreationTimestamp="2025-12-12 15:31:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 15:31:10.30556602 +0000 UTC m=+699.115518551" watchObservedRunningTime="2025-12-12 15:31:10.311114518 +0000 UTC m=+699.121067029" Dec 12 15:31:30 crc kubenswrapper[5123]: I1212 15:31:30.902952 5123 patch_prober.go:28] interesting pod/machine-config-daemon-cs4j6 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 12 15:31:30 crc kubenswrapper[5123]: I1212 15:31:30.903733 5123 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-cs4j6" podUID="cc1dd55e-ab31-4be3-acf1-ef01a53c1bb4" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 12 15:31:30 crc kubenswrapper[5123]: I1212 15:31:30.903818 5123 kubelet.go:2658] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-cs4j6" Dec 12 15:31:30 crc kubenswrapper[5123]: I1212 
15:31:30.904806 5123 kuberuntime_manager.go:1107] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"3606974b214ad9834bbb1da3a0fabe6877d1e0ef7f439301b0bf2a0adb538ba5"} pod="openshift-machine-config-operator/machine-config-daemon-cs4j6" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 12 15:31:30 crc kubenswrapper[5123]: I1212 15:31:30.904885 5123 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-cs4j6" podUID="cc1dd55e-ab31-4be3-acf1-ef01a53c1bb4" containerName="machine-config-daemon" containerID="cri-o://3606974b214ad9834bbb1da3a0fabe6877d1e0ef7f439301b0bf2a0adb538ba5" gracePeriod=600 Dec 12 15:31:31 crc kubenswrapper[5123]: I1212 15:31:31.764136 5123 generic.go:358] "Generic (PLEG): container finished" podID="cc1dd55e-ab31-4be3-acf1-ef01a53c1bb4" containerID="3606974b214ad9834bbb1da3a0fabe6877d1e0ef7f439301b0bf2a0adb538ba5" exitCode=0 Dec 12 15:31:31 crc kubenswrapper[5123]: I1212 15:31:31.764210 5123 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-cs4j6" event={"ID":"cc1dd55e-ab31-4be3-acf1-ef01a53c1bb4","Type":"ContainerDied","Data":"3606974b214ad9834bbb1da3a0fabe6877d1e0ef7f439301b0bf2a0adb538ba5"} Dec 12 15:31:31 crc kubenswrapper[5123]: I1212 15:31:31.764636 5123 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-cs4j6" event={"ID":"cc1dd55e-ab31-4be3-acf1-ef01a53c1bb4","Type":"ContainerStarted","Data":"b8b31bee9a490187d699071ec78132456a8a603d815d3195aabc642b4b346b89"} Dec 12 15:31:31 crc kubenswrapper[5123]: I1212 15:31:31.764669 5123 scope.go:117] "RemoveContainer" containerID="9a4b170656df051882c89f0434d221bcac3b53456e6fd91756cfb74e868ebd7d" Dec 12 15:31:42 crc kubenswrapper[5123]: I1212 15:31:42.617945 5123 kubelet.go:2658] "SyncLoop (probe)" 
probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-l6v7x" Dec 12 15:32:29 crc kubenswrapper[5123]: I1212 15:32:29.808823 5123 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-d46lq"] Dec 12 15:32:29 crc kubenswrapper[5123]: I1212 15:32:29.810001 5123 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-d46lq" podUID="d9dbf7b6-6aed-452d-8398-d8d688899061" containerName="registry-server" containerID="cri-o://70441921a44f09ac242b6e473c4812ad71fdbf81bb2efabf28915ce534fd3f11" gracePeriod=30 Dec 12 15:32:30 crc kubenswrapper[5123]: I1212 15:32:30.207705 5123 generic.go:358] "Generic (PLEG): container finished" podID="d9dbf7b6-6aed-452d-8398-d8d688899061" containerID="70441921a44f09ac242b6e473c4812ad71fdbf81bb2efabf28915ce534fd3f11" exitCode=0 Dec 12 15:32:30 crc kubenswrapper[5123]: I1212 15:32:30.207790 5123 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-d46lq" event={"ID":"d9dbf7b6-6aed-452d-8398-d8d688899061","Type":"ContainerDied","Data":"70441921a44f09ac242b6e473c4812ad71fdbf81bb2efabf28915ce534fd3f11"} Dec 12 15:32:31 crc kubenswrapper[5123]: I1212 15:32:31.068561 5123 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-d46lq" Dec 12 15:32:31 crc kubenswrapper[5123]: I1212 15:32:31.185757 5123 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d9dbf7b6-6aed-452d-8398-d8d688899061-catalog-content\") pod \"d9dbf7b6-6aed-452d-8398-d8d688899061\" (UID: \"d9dbf7b6-6aed-452d-8398-d8d688899061\") " Dec 12 15:32:31 crc kubenswrapper[5123]: I1212 15:32:31.186830 5123 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d9dbf7b6-6aed-452d-8398-d8d688899061-utilities\") pod \"d9dbf7b6-6aed-452d-8398-d8d688899061\" (UID: \"d9dbf7b6-6aed-452d-8398-d8d688899061\") " Dec 12 15:32:31 crc kubenswrapper[5123]: I1212 15:32:31.189764 5123 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9qdnj\" (UniqueName: \"kubernetes.io/projected/d9dbf7b6-6aed-452d-8398-d8d688899061-kube-api-access-9qdnj\") pod \"d9dbf7b6-6aed-452d-8398-d8d688899061\" (UID: \"d9dbf7b6-6aed-452d-8398-d8d688899061\") " Dec 12 15:32:31 crc kubenswrapper[5123]: I1212 15:32:31.189512 5123 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d9dbf7b6-6aed-452d-8398-d8d688899061-utilities" (OuterVolumeSpecName: "utilities") pod "d9dbf7b6-6aed-452d-8398-d8d688899061" (UID: "d9dbf7b6-6aed-452d-8398-d8d688899061"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 12 15:32:31 crc kubenswrapper[5123]: I1212 15:32:31.235492 5123 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d9dbf7b6-6aed-452d-8398-d8d688899061-kube-api-access-9qdnj" (OuterVolumeSpecName: "kube-api-access-9qdnj") pod "d9dbf7b6-6aed-452d-8398-d8d688899061" (UID: "d9dbf7b6-6aed-452d-8398-d8d688899061"). InnerVolumeSpecName "kube-api-access-9qdnj". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 12 15:32:31 crc kubenswrapper[5123]: I1212 15:32:31.239255 5123 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-d46lq" event={"ID":"d9dbf7b6-6aed-452d-8398-d8d688899061","Type":"ContainerDied","Data":"d58a3763047181139236a63b33b15fc824ef91238df55642d1f0faae3d69de62"} Dec 12 15:32:31 crc kubenswrapper[5123]: I1212 15:32:31.239341 5123 scope.go:117] "RemoveContainer" containerID="70441921a44f09ac242b6e473c4812ad71fdbf81bb2efabf28915ce534fd3f11" Dec 12 15:32:31 crc kubenswrapper[5123]: I1212 15:32:31.239457 5123 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d9dbf7b6-6aed-452d-8398-d8d688899061-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "d9dbf7b6-6aed-452d-8398-d8d688899061" (UID: "d9dbf7b6-6aed-452d-8398-d8d688899061"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 12 15:32:31 crc kubenswrapper[5123]: I1212 15:32:31.239575 5123 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-d46lq" Dec 12 15:32:31 crc kubenswrapper[5123]: I1212 15:32:31.270303 5123 scope.go:117] "RemoveContainer" containerID="d1eb7d4829dcd23a5b94205eda19c51d228781c0d07f4c87bb66d7f705570e8b" Dec 12 15:32:31 crc kubenswrapper[5123]: I1212 15:32:31.289556 5123 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-d46lq"] Dec 12 15:32:31 crc kubenswrapper[5123]: I1212 15:32:31.292303 5123 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d9dbf7b6-6aed-452d-8398-d8d688899061-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 12 15:32:31 crc kubenswrapper[5123]: I1212 15:32:31.292335 5123 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d9dbf7b6-6aed-452d-8398-d8d688899061-utilities\") on node \"crc\" DevicePath \"\"" Dec 12 15:32:31 crc kubenswrapper[5123]: I1212 15:32:31.292347 5123 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-9qdnj\" (UniqueName: \"kubernetes.io/projected/d9dbf7b6-6aed-452d-8398-d8d688899061-kube-api-access-9qdnj\") on node \"crc\" DevicePath \"\"" Dec 12 15:32:31 crc kubenswrapper[5123]: I1212 15:32:31.294337 5123 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-d46lq"] Dec 12 15:32:31 crc kubenswrapper[5123]: I1212 15:32:31.314112 5123 scope.go:117] "RemoveContainer" containerID="51b5a76b8dbaa3a88c351ea90f6f470a4bc68c7a2e27487ebb99ff51270ecb14" Dec 12 15:32:31 crc kubenswrapper[5123]: I1212 15:32:31.744026 5123 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d9dbf7b6-6aed-452d-8398-d8d688899061" path="/var/lib/kubelet/pods/d9dbf7b6-6aed-452d-8398-d8d688899061/volumes" Dec 12 15:32:33 crc kubenswrapper[5123]: I1212 15:32:33.694177 5123 kubelet.go:2537] "SyncLoop ADD" source="api" 
pods=["openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210xgk2w"] Dec 12 15:32:33 crc kubenswrapper[5123]: I1212 15:32:33.696648 5123 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="d9dbf7b6-6aed-452d-8398-d8d688899061" containerName="registry-server" Dec 12 15:32:33 crc kubenswrapper[5123]: I1212 15:32:33.696820 5123 state_mem.go:107] "Deleted CPUSet assignment" podUID="d9dbf7b6-6aed-452d-8398-d8d688899061" containerName="registry-server" Dec 12 15:32:33 crc kubenswrapper[5123]: I1212 15:32:33.696937 5123 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="d9dbf7b6-6aed-452d-8398-d8d688899061" containerName="extract-utilities" Dec 12 15:32:33 crc kubenswrapper[5123]: I1212 15:32:33.697025 5123 state_mem.go:107] "Deleted CPUSet assignment" podUID="d9dbf7b6-6aed-452d-8398-d8d688899061" containerName="extract-utilities" Dec 12 15:32:33 crc kubenswrapper[5123]: I1212 15:32:33.697147 5123 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="d9dbf7b6-6aed-452d-8398-d8d688899061" containerName="extract-content" Dec 12 15:32:33 crc kubenswrapper[5123]: I1212 15:32:33.697257 5123 state_mem.go:107] "Deleted CPUSet assignment" podUID="d9dbf7b6-6aed-452d-8398-d8d688899061" containerName="extract-content" Dec 12 15:32:33 crc kubenswrapper[5123]: I1212 15:32:33.697500 5123 memory_manager.go:356] "RemoveStaleState removing state" podUID="d9dbf7b6-6aed-452d-8398-d8d688899061" containerName="registry-server" Dec 12 15:32:33 crc kubenswrapper[5123]: I1212 15:32:33.724213 5123 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210xgk2w"] Dec 12 15:32:33 crc kubenswrapper[5123]: I1212 15:32:33.725267 5123 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210xgk2w" Dec 12 15:32:33 crc kubenswrapper[5123]: I1212 15:32:33.727918 5123 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"default-dockercfg-b2ccr\"" Dec 12 15:32:33 crc kubenswrapper[5123]: I1212 15:32:33.808093 5123 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/fade4659-af9d-481d-a3c6-e9b7c0909308-bundle\") pod \"6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210xgk2w\" (UID: \"fade4659-af9d-481d-a3c6-e9b7c0909308\") " pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210xgk2w" Dec 12 15:32:33 crc kubenswrapper[5123]: I1212 15:32:33.808550 5123 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bx8bc\" (UniqueName: \"kubernetes.io/projected/fade4659-af9d-481d-a3c6-e9b7c0909308-kube-api-access-bx8bc\") pod \"6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210xgk2w\" (UID: \"fade4659-af9d-481d-a3c6-e9b7c0909308\") " pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210xgk2w" Dec 12 15:32:33 crc kubenswrapper[5123]: I1212 15:32:33.808745 5123 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/fade4659-af9d-481d-a3c6-e9b7c0909308-util\") pod \"6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210xgk2w\" (UID: \"fade4659-af9d-481d-a3c6-e9b7c0909308\") " pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210xgk2w" Dec 12 15:32:33 crc kubenswrapper[5123]: I1212 15:32:33.910030 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: 
\"kubernetes.io/empty-dir/fade4659-af9d-481d-a3c6-e9b7c0909308-bundle\") pod \"6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210xgk2w\" (UID: \"fade4659-af9d-481d-a3c6-e9b7c0909308\") " pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210xgk2w" Dec 12 15:32:33 crc kubenswrapper[5123]: I1212 15:32:33.910102 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-bx8bc\" (UniqueName: \"kubernetes.io/projected/fade4659-af9d-481d-a3c6-e9b7c0909308-kube-api-access-bx8bc\") pod \"6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210xgk2w\" (UID: \"fade4659-af9d-481d-a3c6-e9b7c0909308\") " pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210xgk2w" Dec 12 15:32:33 crc kubenswrapper[5123]: I1212 15:32:33.910188 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/fade4659-af9d-481d-a3c6-e9b7c0909308-util\") pod \"6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210xgk2w\" (UID: \"fade4659-af9d-481d-a3c6-e9b7c0909308\") " pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210xgk2w" Dec 12 15:32:33 crc kubenswrapper[5123]: I1212 15:32:33.910972 5123 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/fade4659-af9d-481d-a3c6-e9b7c0909308-util\") pod \"6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210xgk2w\" (UID: \"fade4659-af9d-481d-a3c6-e9b7c0909308\") " pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210xgk2w" Dec 12 15:32:33 crc kubenswrapper[5123]: I1212 15:32:33.911308 5123 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/fade4659-af9d-481d-a3c6-e9b7c0909308-bundle\") pod \"6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210xgk2w\" (UID: 
\"fade4659-af9d-481d-a3c6-e9b7c0909308\") " pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210xgk2w" Dec 12 15:32:33 crc kubenswrapper[5123]: I1212 15:32:33.934530 5123 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-bx8bc\" (UniqueName: \"kubernetes.io/projected/fade4659-af9d-481d-a3c6-e9b7c0909308-kube-api-access-bx8bc\") pod \"6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210xgk2w\" (UID: \"fade4659-af9d-481d-a3c6-e9b7c0909308\") " pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210xgk2w" Dec 12 15:32:34 crc kubenswrapper[5123]: I1212 15:32:34.045917 5123 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210xgk2w" Dec 12 15:32:34 crc kubenswrapper[5123]: I1212 15:32:34.338344 5123 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210xgk2w"] Dec 12 15:32:35 crc kubenswrapper[5123]: I1212 15:32:35.276071 5123 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210xgk2w" event={"ID":"fade4659-af9d-481d-a3c6-e9b7c0909308","Type":"ContainerStarted","Data":"a18053770da36fecfdef14ef1e573b6f612846e235e634719da1feec85b6cb83"} Dec 12 15:32:35 crc kubenswrapper[5123]: I1212 15:32:35.276475 5123 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210xgk2w" event={"ID":"fade4659-af9d-481d-a3c6-e9b7c0909308","Type":"ContainerStarted","Data":"334d712b8946d55f805b70ab134bdfda89740fb6535e07c43e3c95a9363cb035"} Dec 12 15:32:36 crc kubenswrapper[5123]: I1212 15:32:36.283752 5123 generic.go:358] "Generic (PLEG): container finished" podID="fade4659-af9d-481d-a3c6-e9b7c0909308" 
containerID="a18053770da36fecfdef14ef1e573b6f612846e235e634719da1feec85b6cb83" exitCode=0 Dec 12 15:32:36 crc kubenswrapper[5123]: I1212 15:32:36.283826 5123 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210xgk2w" event={"ID":"fade4659-af9d-481d-a3c6-e9b7c0909308","Type":"ContainerDied","Data":"a18053770da36fecfdef14ef1e573b6f612846e235e634719da1feec85b6cb83"} Dec 12 15:32:36 crc kubenswrapper[5123]: I1212 15:32:36.688671 5123 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-gzcpb"] Dec 12 15:32:36 crc kubenswrapper[5123]: I1212 15:32:36.913569 5123 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-gzcpb"] Dec 12 15:32:36 crc kubenswrapper[5123]: I1212 15:32:36.913868 5123 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-gzcpb" Dec 12 15:32:36 crc kubenswrapper[5123]: I1212 15:32:36.974748 5123 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/22184a26-4a56-48c5-9e60-51dcd636efcf-utilities\") pod \"redhat-operators-gzcpb\" (UID: \"22184a26-4a56-48c5-9e60-51dcd636efcf\") " pod="openshift-marketplace/redhat-operators-gzcpb" Dec 12 15:32:36 crc kubenswrapper[5123]: I1212 15:32:36.974811 5123 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/22184a26-4a56-48c5-9e60-51dcd636efcf-catalog-content\") pod \"redhat-operators-gzcpb\" (UID: \"22184a26-4a56-48c5-9e60-51dcd636efcf\") " pod="openshift-marketplace/redhat-operators-gzcpb" Dec 12 15:32:36 crc kubenswrapper[5123]: I1212 15:32:36.974841 5123 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n7vhh\" (UniqueName: 
\"kubernetes.io/projected/22184a26-4a56-48c5-9e60-51dcd636efcf-kube-api-access-n7vhh\") pod \"redhat-operators-gzcpb\" (UID: \"22184a26-4a56-48c5-9e60-51dcd636efcf\") " pod="openshift-marketplace/redhat-operators-gzcpb" Dec 12 15:32:37 crc kubenswrapper[5123]: I1212 15:32:37.076792 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/22184a26-4a56-48c5-9e60-51dcd636efcf-utilities\") pod \"redhat-operators-gzcpb\" (UID: \"22184a26-4a56-48c5-9e60-51dcd636efcf\") " pod="openshift-marketplace/redhat-operators-gzcpb" Dec 12 15:32:37 crc kubenswrapper[5123]: I1212 15:32:37.076874 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/22184a26-4a56-48c5-9e60-51dcd636efcf-catalog-content\") pod \"redhat-operators-gzcpb\" (UID: \"22184a26-4a56-48c5-9e60-51dcd636efcf\") " pod="openshift-marketplace/redhat-operators-gzcpb" Dec 12 15:32:37 crc kubenswrapper[5123]: I1212 15:32:37.076911 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-n7vhh\" (UniqueName: \"kubernetes.io/projected/22184a26-4a56-48c5-9e60-51dcd636efcf-kube-api-access-n7vhh\") pod \"redhat-operators-gzcpb\" (UID: \"22184a26-4a56-48c5-9e60-51dcd636efcf\") " pod="openshift-marketplace/redhat-operators-gzcpb" Dec 12 15:32:37 crc kubenswrapper[5123]: I1212 15:32:37.077401 5123 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/22184a26-4a56-48c5-9e60-51dcd636efcf-utilities\") pod \"redhat-operators-gzcpb\" (UID: \"22184a26-4a56-48c5-9e60-51dcd636efcf\") " pod="openshift-marketplace/redhat-operators-gzcpb" Dec 12 15:32:37 crc kubenswrapper[5123]: I1212 15:32:37.077462 5123 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/22184a26-4a56-48c5-9e60-51dcd636efcf-catalog-content\") pod \"redhat-operators-gzcpb\" (UID: \"22184a26-4a56-48c5-9e60-51dcd636efcf\") " pod="openshift-marketplace/redhat-operators-gzcpb" Dec 12 15:32:37 crc kubenswrapper[5123]: I1212 15:32:37.096442 5123 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-n7vhh\" (UniqueName: \"kubernetes.io/projected/22184a26-4a56-48c5-9e60-51dcd636efcf-kube-api-access-n7vhh\") pod \"redhat-operators-gzcpb\" (UID: \"22184a26-4a56-48c5-9e60-51dcd636efcf\") " pod="openshift-marketplace/redhat-operators-gzcpb" Dec 12 15:32:37 crc kubenswrapper[5123]: I1212 15:32:37.230833 5123 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-gzcpb" Dec 12 15:32:37 crc kubenswrapper[5123]: I1212 15:32:37.558192 5123 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-gzcpb"] Dec 12 15:32:38 crc kubenswrapper[5123]: I1212 15:32:38.298105 5123 generic.go:358] "Generic (PLEG): container finished" podID="22184a26-4a56-48c5-9e60-51dcd636efcf" containerID="f9acb03e22b71572347648ced6a8c0245cde8084ca0f7d6c561e5db8abd63b15" exitCode=0 Dec 12 15:32:38 crc kubenswrapper[5123]: I1212 15:32:38.298232 5123 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-gzcpb" event={"ID":"22184a26-4a56-48c5-9e60-51dcd636efcf","Type":"ContainerDied","Data":"f9acb03e22b71572347648ced6a8c0245cde8084ca0f7d6c561e5db8abd63b15"} Dec 12 15:32:38 crc kubenswrapper[5123]: I1212 15:32:38.298613 5123 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-gzcpb" event={"ID":"22184a26-4a56-48c5-9e60-51dcd636efcf","Type":"ContainerStarted","Data":"19a162bf9d0f04615c8996d463f82a51d6ad2f8420b7e23250acdbcbcb76004c"} Dec 12 15:32:39 crc kubenswrapper[5123]: I1212 15:32:39.314979 5123 generic.go:358] "Generic (PLEG): container finished" 
podID="fade4659-af9d-481d-a3c6-e9b7c0909308" containerID="bb03fd4867349e47d18d48f167520618287f14a243916886e725bbc1a20efc81" exitCode=0 Dec 12 15:32:39 crc kubenswrapper[5123]: I1212 15:32:39.315112 5123 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210xgk2w" event={"ID":"fade4659-af9d-481d-a3c6-e9b7c0909308","Type":"ContainerDied","Data":"bb03fd4867349e47d18d48f167520618287f14a243916886e725bbc1a20efc81"} Dec 12 15:32:40 crc kubenswrapper[5123]: I1212 15:32:40.387110 5123 generic.go:358] "Generic (PLEG): container finished" podID="fade4659-af9d-481d-a3c6-e9b7c0909308" containerID="10696ec28760d7f41aa276259b638881bd9f0b328835b08356990328d4c6ca60" exitCode=0 Dec 12 15:32:40 crc kubenswrapper[5123]: I1212 15:32:40.387690 5123 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210xgk2w" event={"ID":"fade4659-af9d-481d-a3c6-e9b7c0909308","Type":"ContainerDied","Data":"10696ec28760d7f41aa276259b638881bd9f0b328835b08356990328d4c6ca60"} Dec 12 15:32:40 crc kubenswrapper[5123]: I1212 15:32:40.392524 5123 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-gzcpb" event={"ID":"22184a26-4a56-48c5-9e60-51dcd636efcf","Type":"ContainerStarted","Data":"4502810cf242ad7fa2c02c7fad2b4de8e92bdbd6d585143f7a1453d672ba2269"} Dec 12 15:32:40 crc kubenswrapper[5123]: I1212 15:32:40.475159 5123 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ef9lpd"] Dec 12 15:32:40 crc kubenswrapper[5123]: I1212 15:32:40.661743 5123 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ef9lpd"] Dec 12 15:32:40 crc kubenswrapper[5123]: I1212 15:32:40.661996 5123 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ef9lpd" Dec 12 15:32:40 crc kubenswrapper[5123]: I1212 15:32:40.681606 5123 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-whzr8\" (UniqueName: \"kubernetes.io/projected/760e8827-777b-4859-b6e6-7a76e7b91284-kube-api-access-whzr8\") pod \"8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ef9lpd\" (UID: \"760e8827-777b-4859-b6e6-7a76e7b91284\") " pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ef9lpd" Dec 12 15:32:40 crc kubenswrapper[5123]: I1212 15:32:40.693062 5123 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/760e8827-777b-4859-b6e6-7a76e7b91284-bundle\") pod \"8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ef9lpd\" (UID: \"760e8827-777b-4859-b6e6-7a76e7b91284\") " pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ef9lpd" Dec 12 15:32:40 crc kubenswrapper[5123]: I1212 15:32:40.693180 5123 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/760e8827-777b-4859-b6e6-7a76e7b91284-util\") pod \"8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ef9lpd\" (UID: \"760e8827-777b-4859-b6e6-7a76e7b91284\") " pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ef9lpd" Dec 12 15:32:40 crc kubenswrapper[5123]: I1212 15:32:40.794556 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/760e8827-777b-4859-b6e6-7a76e7b91284-bundle\") pod \"8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ef9lpd\" (UID: \"760e8827-777b-4859-b6e6-7a76e7b91284\") " 
pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ef9lpd" Dec 12 15:32:40 crc kubenswrapper[5123]: I1212 15:32:40.794631 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/760e8827-777b-4859-b6e6-7a76e7b91284-util\") pod \"8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ef9lpd\" (UID: \"760e8827-777b-4859-b6e6-7a76e7b91284\") " pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ef9lpd" Dec 12 15:32:40 crc kubenswrapper[5123]: I1212 15:32:40.794729 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-whzr8\" (UniqueName: \"kubernetes.io/projected/760e8827-777b-4859-b6e6-7a76e7b91284-kube-api-access-whzr8\") pod \"8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ef9lpd\" (UID: \"760e8827-777b-4859-b6e6-7a76e7b91284\") " pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ef9lpd" Dec 12 15:32:40 crc kubenswrapper[5123]: I1212 15:32:40.796346 5123 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/760e8827-777b-4859-b6e6-7a76e7b91284-bundle\") pod \"8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ef9lpd\" (UID: \"760e8827-777b-4859-b6e6-7a76e7b91284\") " pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ef9lpd" Dec 12 15:32:40 crc kubenswrapper[5123]: I1212 15:32:40.796411 5123 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/760e8827-777b-4859-b6e6-7a76e7b91284-util\") pod \"8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ef9lpd\" (UID: \"760e8827-777b-4859-b6e6-7a76e7b91284\") " pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ef9lpd" Dec 12 15:32:40 crc kubenswrapper[5123]: I1212 15:32:40.819351 5123 
operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-whzr8\" (UniqueName: \"kubernetes.io/projected/760e8827-777b-4859-b6e6-7a76e7b91284-kube-api-access-whzr8\") pod \"8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ef9lpd\" (UID: \"760e8827-777b-4859-b6e6-7a76e7b91284\") " pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ef9lpd" Dec 12 15:32:41 crc kubenswrapper[5123]: I1212 15:32:41.114999 5123 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ef9lpd" Dec 12 15:32:41 crc kubenswrapper[5123]: I1212 15:32:41.695534 5123 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ef9lpd"] Dec 12 15:32:41 crc kubenswrapper[5123]: I1212 15:32:41.823209 5123 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210xgk2w" Dec 12 15:32:41 crc kubenswrapper[5123]: I1212 15:32:41.853004 5123 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bx8bc\" (UniqueName: \"kubernetes.io/projected/fade4659-af9d-481d-a3c6-e9b7c0909308-kube-api-access-bx8bc\") pod \"fade4659-af9d-481d-a3c6-e9b7c0909308\" (UID: \"fade4659-af9d-481d-a3c6-e9b7c0909308\") " Dec 12 15:32:41 crc kubenswrapper[5123]: I1212 15:32:41.853617 5123 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/fade4659-af9d-481d-a3c6-e9b7c0909308-bundle\") pod \"fade4659-af9d-481d-a3c6-e9b7c0909308\" (UID: \"fade4659-af9d-481d-a3c6-e9b7c0909308\") " Dec 12 15:32:41 crc kubenswrapper[5123]: I1212 15:32:41.853757 5123 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: 
\"kubernetes.io/empty-dir/fade4659-af9d-481d-a3c6-e9b7c0909308-util\") pod \"fade4659-af9d-481d-a3c6-e9b7c0909308\" (UID: \"fade4659-af9d-481d-a3c6-e9b7c0909308\") " Dec 12 15:32:41 crc kubenswrapper[5123]: I1212 15:32:41.933912 5123 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fade4659-af9d-481d-a3c6-e9b7c0909308-bundle" (OuterVolumeSpecName: "bundle") pod "fade4659-af9d-481d-a3c6-e9b7c0909308" (UID: "fade4659-af9d-481d-a3c6-e9b7c0909308"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 12 15:32:41 crc kubenswrapper[5123]: I1212 15:32:41.940940 5123 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fade4659-af9d-481d-a3c6-e9b7c0909308-util" (OuterVolumeSpecName: "util") pod "fade4659-af9d-481d-a3c6-e9b7c0909308" (UID: "fade4659-af9d-481d-a3c6-e9b7c0909308"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 12 15:32:41 crc kubenswrapper[5123]: I1212 15:32:41.941465 5123 reconciler_common.go:299] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/fade4659-af9d-481d-a3c6-e9b7c0909308-bundle\") on node \"crc\" DevicePath \"\"" Dec 12 15:32:41 crc kubenswrapper[5123]: I1212 15:32:41.941494 5123 reconciler_common.go:299] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/fade4659-af9d-481d-a3c6-e9b7c0909308-util\") on node \"crc\" DevicePath \"\"" Dec 12 15:32:41 crc kubenswrapper[5123]: I1212 15:32:41.954562 5123 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fade4659-af9d-481d-a3c6-e9b7c0909308-kube-api-access-bx8bc" (OuterVolumeSpecName: "kube-api-access-bx8bc") pod "fade4659-af9d-481d-a3c6-e9b7c0909308" (UID: "fade4659-af9d-481d-a3c6-e9b7c0909308"). InnerVolumeSpecName "kube-api-access-bx8bc". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 12 15:32:42 crc kubenswrapper[5123]: I1212 15:32:42.043211 5123 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-bx8bc\" (UniqueName: \"kubernetes.io/projected/fade4659-af9d-481d-a3c6-e9b7c0909308-kube-api-access-bx8bc\") on node \"crc\" DevicePath \"\"" Dec 12 15:32:42 crc kubenswrapper[5123]: I1212 15:32:42.475135 5123 generic.go:358] "Generic (PLEG): container finished" podID="760e8827-777b-4859-b6e6-7a76e7b91284" containerID="228dc3f21a33102524e09ced93dc2ab497dbb8b1460c67c02d206ae793c7e653" exitCode=0 Dec 12 15:32:42 crc kubenswrapper[5123]: I1212 15:32:42.475577 5123 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ef9lpd" event={"ID":"760e8827-777b-4859-b6e6-7a76e7b91284","Type":"ContainerDied","Data":"228dc3f21a33102524e09ced93dc2ab497dbb8b1460c67c02d206ae793c7e653"} Dec 12 15:32:42 crc kubenswrapper[5123]: I1212 15:32:42.475702 5123 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ef9lpd" event={"ID":"760e8827-777b-4859-b6e6-7a76e7b91284","Type":"ContainerStarted","Data":"04d1c8a40b43110352f8f8b1dc33805e48799092a98ef895cffdd7cdf39f8d93"} Dec 12 15:32:42 crc kubenswrapper[5123]: I1212 15:32:42.479956 5123 generic.go:358] "Generic (PLEG): container finished" podID="22184a26-4a56-48c5-9e60-51dcd636efcf" containerID="4502810cf242ad7fa2c02c7fad2b4de8e92bdbd6d585143f7a1453d672ba2269" exitCode=0 Dec 12 15:32:42 crc kubenswrapper[5123]: I1212 15:32:42.480148 5123 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-gzcpb" event={"ID":"22184a26-4a56-48c5-9e60-51dcd636efcf","Type":"ContainerDied","Data":"4502810cf242ad7fa2c02c7fad2b4de8e92bdbd6d585143f7a1453d672ba2269"} Dec 12 15:32:42 crc kubenswrapper[5123]: I1212 15:32:42.484564 5123 kubelet.go:2569] "SyncLoop (PLEG): event 
for pod" pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210xgk2w" event={"ID":"fade4659-af9d-481d-a3c6-e9b7c0909308","Type":"ContainerDied","Data":"334d712b8946d55f805b70ab134bdfda89740fb6535e07c43e3c95a9363cb035"} Dec 12 15:32:42 crc kubenswrapper[5123]: I1212 15:32:42.484596 5123 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210xgk2w" Dec 12 15:32:42 crc kubenswrapper[5123]: I1212 15:32:42.484647 5123 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="334d712b8946d55f805b70ab134bdfda89740fb6535e07c43e3c95a9363cb035" Dec 12 15:32:43 crc kubenswrapper[5123]: I1212 15:32:43.494124 5123 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-gzcpb" event={"ID":"22184a26-4a56-48c5-9e60-51dcd636efcf","Type":"ContainerStarted","Data":"c5184eacb3b1a67348273a07851eb8d97255588d4906101766e912356d126cb2"} Dec 12 15:32:43 crc kubenswrapper[5123]: I1212 15:32:43.518672 5123 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-gzcpb" podStartSLOduration=6.575802966 podStartE2EDuration="7.518639915s" podCreationTimestamp="2025-12-12 15:32:36 +0000 UTC" firstStartedPulling="2025-12-12 15:32:38.299856461 +0000 UTC m=+787.109808982" lastFinishedPulling="2025-12-12 15:32:39.24269342 +0000 UTC m=+788.052645931" observedRunningTime="2025-12-12 15:32:43.515693682 +0000 UTC m=+792.325646203" watchObservedRunningTime="2025-12-12 15:32:43.518639915 +0000 UTC m=+792.328592426" Dec 12 15:32:44 crc kubenswrapper[5123]: I1212 15:32:44.501841 5123 generic.go:358] "Generic (PLEG): container finished" podID="760e8827-777b-4859-b6e6-7a76e7b91284" containerID="9758da41d6d3dc97f6ddcc4aa816777d12017045b19116bf8013e7b21423c0d6" exitCode=0 Dec 12 15:32:44 crc kubenswrapper[5123]: I1212 15:32:44.501907 5123 kubelet.go:2569] 
"SyncLoop (PLEG): event for pod" pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ef9lpd" event={"ID":"760e8827-777b-4859-b6e6-7a76e7b91284","Type":"ContainerDied","Data":"9758da41d6d3dc97f6ddcc4aa816777d12017045b19116bf8013e7b21423c0d6"} Dec 12 15:32:45 crc kubenswrapper[5123]: I1212 15:32:45.083913 5123 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a52zfd"] Dec 12 15:32:45 crc kubenswrapper[5123]: I1212 15:32:45.085302 5123 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="fade4659-af9d-481d-a3c6-e9b7c0909308" containerName="util" Dec 12 15:32:45 crc kubenswrapper[5123]: I1212 15:32:45.085330 5123 state_mem.go:107] "Deleted CPUSet assignment" podUID="fade4659-af9d-481d-a3c6-e9b7c0909308" containerName="util" Dec 12 15:32:45 crc kubenswrapper[5123]: I1212 15:32:45.085342 5123 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="fade4659-af9d-481d-a3c6-e9b7c0909308" containerName="extract" Dec 12 15:32:45 crc kubenswrapper[5123]: I1212 15:32:45.085350 5123 state_mem.go:107] "Deleted CPUSet assignment" podUID="fade4659-af9d-481d-a3c6-e9b7c0909308" containerName="extract" Dec 12 15:32:45 crc kubenswrapper[5123]: I1212 15:32:45.085395 5123 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="fade4659-af9d-481d-a3c6-e9b7c0909308" containerName="pull" Dec 12 15:32:45 crc kubenswrapper[5123]: I1212 15:32:45.085405 5123 state_mem.go:107] "Deleted CPUSet assignment" podUID="fade4659-af9d-481d-a3c6-e9b7c0909308" containerName="pull" Dec 12 15:32:45 crc kubenswrapper[5123]: I1212 15:32:45.085550 5123 memory_manager.go:356] "RemoveStaleState removing state" podUID="fade4659-af9d-481d-a3c6-e9b7c0909308" containerName="extract" Dec 12 15:32:45 crc kubenswrapper[5123]: I1212 15:32:45.336137 5123 kubelet.go:2544] "SyncLoop UPDATE" source="api" 
pods=["openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a52zfd"] Dec 12 15:32:45 crc kubenswrapper[5123]: I1212 15:32:45.336189 5123 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-rtzqw"] Dec 12 15:32:45 crc kubenswrapper[5123]: I1212 15:32:45.336432 5123 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a52zfd" Dec 12 15:32:45 crc kubenswrapper[5123]: I1212 15:32:45.351952 5123 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/bd877609-a269-4a6f-a64d-d671332d8496-bundle\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a52zfd\" (UID: \"bd877609-a269-4a6f-a64d-d671332d8496\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a52zfd" Dec 12 15:32:45 crc kubenswrapper[5123]: I1212 15:32:45.352042 5123 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z7bkb\" (UniqueName: \"kubernetes.io/projected/bd877609-a269-4a6f-a64d-d671332d8496-kube-api-access-z7bkb\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a52zfd\" (UID: \"bd877609-a269-4a6f-a64d-d671332d8496\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a52zfd" Dec 12 15:32:45 crc kubenswrapper[5123]: I1212 15:32:45.352076 5123 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/bd877609-a269-4a6f-a64d-d671332d8496-util\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a52zfd\" (UID: \"bd877609-a269-4a6f-a64d-d671332d8496\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a52zfd" Dec 12 15:32:45 crc kubenswrapper[5123]: I1212 
15:32:45.455465 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/bd877609-a269-4a6f-a64d-d671332d8496-bundle\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a52zfd\" (UID: \"bd877609-a269-4a6f-a64d-d671332d8496\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a52zfd" Dec 12 15:32:45 crc kubenswrapper[5123]: I1212 15:32:45.455695 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-z7bkb\" (UniqueName: \"kubernetes.io/projected/bd877609-a269-4a6f-a64d-d671332d8496-kube-api-access-z7bkb\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a52zfd\" (UID: \"bd877609-a269-4a6f-a64d-d671332d8496\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a52zfd" Dec 12 15:32:45 crc kubenswrapper[5123]: I1212 15:32:45.455780 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/bd877609-a269-4a6f-a64d-d671332d8496-util\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a52zfd\" (UID: \"bd877609-a269-4a6f-a64d-d671332d8496\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a52zfd" Dec 12 15:32:45 crc kubenswrapper[5123]: I1212 15:32:45.456490 5123 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/bd877609-a269-4a6f-a64d-d671332d8496-util\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a52zfd\" (UID: \"bd877609-a269-4a6f-a64d-d671332d8496\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a52zfd" Dec 12 15:32:45 crc kubenswrapper[5123]: I1212 15:32:45.456882 5123 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: 
\"kubernetes.io/empty-dir/bd877609-a269-4a6f-a64d-d671332d8496-bundle\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a52zfd\" (UID: \"bd877609-a269-4a6f-a64d-d671332d8496\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a52zfd" Dec 12 15:32:45 crc kubenswrapper[5123]: I1212 15:32:45.630520 5123 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-z7bkb\" (UniqueName: \"kubernetes.io/projected/bd877609-a269-4a6f-a64d-d671332d8496-kube-api-access-z7bkb\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a52zfd\" (UID: \"bd877609-a269-4a6f-a64d-d671332d8496\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a52zfd" Dec 12 15:32:45 crc kubenswrapper[5123]: I1212 15:32:45.656618 5123 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a52zfd" Dec 12 15:32:45 crc kubenswrapper[5123]: I1212 15:32:45.714413 5123 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ef9lpd" event={"ID":"760e8827-777b-4859-b6e6-7a76e7b91284","Type":"ContainerStarted","Data":"ad00854be5e9a240ed98edb958c77e1861821d209476eeb752a19accb7701b5f"} Dec 12 15:32:45 crc kubenswrapper[5123]: I1212 15:32:45.714827 5123 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-rtzqw" Dec 12 15:32:45 crc kubenswrapper[5123]: I1212 15:32:45.725028 5123 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-rtzqw"] Dec 12 15:32:45 crc kubenswrapper[5123]: I1212 15:32:45.753570 5123 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ef9lpd" podStartSLOduration=4.731018005 podStartE2EDuration="5.753550097s" podCreationTimestamp="2025-12-12 15:32:40 +0000 UTC" firstStartedPulling="2025-12-12 15:32:42.478320546 +0000 UTC m=+791.288273067" lastFinishedPulling="2025-12-12 15:32:43.500852648 +0000 UTC m=+792.310805159" observedRunningTime="2025-12-12 15:32:45.752562926 +0000 UTC m=+794.562515447" watchObservedRunningTime="2025-12-12 15:32:45.753550097 +0000 UTC m=+794.563502608" Dec 12 15:32:45 crc kubenswrapper[5123]: I1212 15:32:45.915107 5123 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/980b7482-44df-44b6-933e-085997e6ac3d-catalog-content\") pod \"certified-operators-rtzqw\" (UID: \"980b7482-44df-44b6-933e-085997e6ac3d\") " pod="openshift-marketplace/certified-operators-rtzqw" Dec 12 15:32:45 crc kubenswrapper[5123]: I1212 15:32:45.915549 5123 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/980b7482-44df-44b6-933e-085997e6ac3d-utilities\") pod \"certified-operators-rtzqw\" (UID: \"980b7482-44df-44b6-933e-085997e6ac3d\") " pod="openshift-marketplace/certified-operators-rtzqw" Dec 12 15:32:45 crc kubenswrapper[5123]: I1212 15:32:45.915605 5123 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bxprx\" (UniqueName: 
\"kubernetes.io/projected/980b7482-44df-44b6-933e-085997e6ac3d-kube-api-access-bxprx\") pod \"certified-operators-rtzqw\" (UID: \"980b7482-44df-44b6-933e-085997e6ac3d\") " pod="openshift-marketplace/certified-operators-rtzqw" Dec 12 15:32:46 crc kubenswrapper[5123]: I1212 15:32:46.017014 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/980b7482-44df-44b6-933e-085997e6ac3d-utilities\") pod \"certified-operators-rtzqw\" (UID: \"980b7482-44df-44b6-933e-085997e6ac3d\") " pod="openshift-marketplace/certified-operators-rtzqw" Dec 12 15:32:46 crc kubenswrapper[5123]: I1212 15:32:46.017111 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-bxprx\" (UniqueName: \"kubernetes.io/projected/980b7482-44df-44b6-933e-085997e6ac3d-kube-api-access-bxprx\") pod \"certified-operators-rtzqw\" (UID: \"980b7482-44df-44b6-933e-085997e6ac3d\") " pod="openshift-marketplace/certified-operators-rtzqw" Dec 12 15:32:46 crc kubenswrapper[5123]: I1212 15:32:46.017172 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/980b7482-44df-44b6-933e-085997e6ac3d-catalog-content\") pod \"certified-operators-rtzqw\" (UID: \"980b7482-44df-44b6-933e-085997e6ac3d\") " pod="openshift-marketplace/certified-operators-rtzqw" Dec 12 15:32:46 crc kubenswrapper[5123]: I1212 15:32:46.017816 5123 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/980b7482-44df-44b6-933e-085997e6ac3d-utilities\") pod \"certified-operators-rtzqw\" (UID: \"980b7482-44df-44b6-933e-085997e6ac3d\") " pod="openshift-marketplace/certified-operators-rtzqw" Dec 12 15:32:46 crc kubenswrapper[5123]: I1212 15:32:46.017837 5123 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/980b7482-44df-44b6-933e-085997e6ac3d-catalog-content\") pod \"certified-operators-rtzqw\" (UID: \"980b7482-44df-44b6-933e-085997e6ac3d\") " pod="openshift-marketplace/certified-operators-rtzqw" Dec 12 15:32:46 crc kubenswrapper[5123]: I1212 15:32:46.039107 5123 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a52zfd"] Dec 12 15:32:46 crc kubenswrapper[5123]: I1212 15:32:46.071749 5123 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-bxprx\" (UniqueName: \"kubernetes.io/projected/980b7482-44df-44b6-933e-085997e6ac3d-kube-api-access-bxprx\") pod \"certified-operators-rtzqw\" (UID: \"980b7482-44df-44b6-933e-085997e6ac3d\") " pod="openshift-marketplace/certified-operators-rtzqw" Dec 12 15:32:46 crc kubenswrapper[5123]: I1212 15:32:46.335460 5123 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-rtzqw" Dec 12 15:32:46 crc kubenswrapper[5123]: I1212 15:32:46.643986 5123 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a52zfd" event={"ID":"bd877609-a269-4a6f-a64d-d671332d8496","Type":"ContainerStarted","Data":"5479ad2a90467f118d74d9736420bb6016306530c14aed506d3697be7226266a"} Dec 12 15:32:46 crc kubenswrapper[5123]: I1212 15:32:46.684248 5123 generic.go:358] "Generic (PLEG): container finished" podID="760e8827-777b-4859-b6e6-7a76e7b91284" containerID="ad00854be5e9a240ed98edb958c77e1861821d209476eeb752a19accb7701b5f" exitCode=0 Dec 12 15:32:46 crc kubenswrapper[5123]: I1212 15:32:46.684371 5123 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ef9lpd" 
event={"ID":"760e8827-777b-4859-b6e6-7a76e7b91284","Type":"ContainerDied","Data":"ad00854be5e9a240ed98edb958c77e1861821d209476eeb752a19accb7701b5f"} Dec 12 15:32:46 crc kubenswrapper[5123]: I1212 15:32:46.769484 5123 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-rtzqw"] Dec 12 15:32:47 crc kubenswrapper[5123]: I1212 15:32:47.232074 5123 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-gzcpb" Dec 12 15:32:47 crc kubenswrapper[5123]: I1212 15:32:47.232183 5123 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/redhat-operators-gzcpb" Dec 12 15:32:47 crc kubenswrapper[5123]: I1212 15:32:47.691562 5123 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rtzqw" event={"ID":"980b7482-44df-44b6-933e-085997e6ac3d","Type":"ContainerStarted","Data":"23b411d703f3eaaff4a72b858c8fa204241cb7952deb85e49ba6aa8fa591751b"} Dec 12 15:32:48 crc kubenswrapper[5123]: I1212 15:32:48.277789 5123 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ef9lpd" Dec 12 15:32:48 crc kubenswrapper[5123]: I1212 15:32:48.321537 5123 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-gzcpb" podUID="22184a26-4a56-48c5-9e60-51dcd636efcf" containerName="registry-server" probeResult="failure" output=< Dec 12 15:32:48 crc kubenswrapper[5123]: timeout: failed to connect service ":50051" within 1s Dec 12 15:32:48 crc kubenswrapper[5123]: > Dec 12 15:32:48 crc kubenswrapper[5123]: I1212 15:32:48.464144 5123 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/760e8827-777b-4859-b6e6-7a76e7b91284-util\") pod \"760e8827-777b-4859-b6e6-7a76e7b91284\" (UID: \"760e8827-777b-4859-b6e6-7a76e7b91284\") " Dec 12 15:32:48 crc kubenswrapper[5123]: I1212 15:32:48.464269 5123 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-whzr8\" (UniqueName: \"kubernetes.io/projected/760e8827-777b-4859-b6e6-7a76e7b91284-kube-api-access-whzr8\") pod \"760e8827-777b-4859-b6e6-7a76e7b91284\" (UID: \"760e8827-777b-4859-b6e6-7a76e7b91284\") " Dec 12 15:32:48 crc kubenswrapper[5123]: I1212 15:32:48.464389 5123 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/760e8827-777b-4859-b6e6-7a76e7b91284-bundle\") pod \"760e8827-777b-4859-b6e6-7a76e7b91284\" (UID: \"760e8827-777b-4859-b6e6-7a76e7b91284\") " Dec 12 15:32:48 crc kubenswrapper[5123]: I1212 15:32:48.465409 5123 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/760e8827-777b-4859-b6e6-7a76e7b91284-bundle" (OuterVolumeSpecName: "bundle") pod "760e8827-777b-4859-b6e6-7a76e7b91284" (UID: "760e8827-777b-4859-b6e6-7a76e7b91284"). InnerVolumeSpecName "bundle". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 12 15:32:48 crc kubenswrapper[5123]: I1212 15:32:48.483455 5123 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/760e8827-777b-4859-b6e6-7a76e7b91284-kube-api-access-whzr8" (OuterVolumeSpecName: "kube-api-access-whzr8") pod "760e8827-777b-4859-b6e6-7a76e7b91284" (UID: "760e8827-777b-4859-b6e6-7a76e7b91284"). InnerVolumeSpecName "kube-api-access-whzr8". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 12 15:32:48 crc kubenswrapper[5123]: I1212 15:32:48.489377 5123 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/760e8827-777b-4859-b6e6-7a76e7b91284-util" (OuterVolumeSpecName: "util") pod "760e8827-777b-4859-b6e6-7a76e7b91284" (UID: "760e8827-777b-4859-b6e6-7a76e7b91284"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 12 15:32:48 crc kubenswrapper[5123]: I1212 15:32:48.569384 5123 reconciler_common.go:299] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/760e8827-777b-4859-b6e6-7a76e7b91284-util\") on node \"crc\" DevicePath \"\"" Dec 12 15:32:48 crc kubenswrapper[5123]: I1212 15:32:48.569437 5123 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-whzr8\" (UniqueName: \"kubernetes.io/projected/760e8827-777b-4859-b6e6-7a76e7b91284-kube-api-access-whzr8\") on node \"crc\" DevicePath \"\"" Dec 12 15:32:48 crc kubenswrapper[5123]: I1212 15:32:48.569452 5123 reconciler_common.go:299] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/760e8827-777b-4859-b6e6-7a76e7b91284-bundle\") on node \"crc\" DevicePath \"\"" Dec 12 15:32:48 crc kubenswrapper[5123]: I1212 15:32:48.716188 5123 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ef9lpd" 
event={"ID":"760e8827-777b-4859-b6e6-7a76e7b91284","Type":"ContainerDied","Data":"04d1c8a40b43110352f8f8b1dc33805e48799092a98ef895cffdd7cdf39f8d93"} Dec 12 15:32:48 crc kubenswrapper[5123]: I1212 15:32:48.716269 5123 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="04d1c8a40b43110352f8f8b1dc33805e48799092a98ef895cffdd7cdf39f8d93" Dec 12 15:32:48 crc kubenswrapper[5123]: I1212 15:32:48.717445 5123 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ef9lpd" Dec 12 15:32:48 crc kubenswrapper[5123]: I1212 15:32:48.720242 5123 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rtzqw" event={"ID":"980b7482-44df-44b6-933e-085997e6ac3d","Type":"ContainerStarted","Data":"2af96c167d6fedf0eb40a42dda74a11bd6e5231a31202e941e10a4704d84d22f"} Dec 12 15:32:48 crc kubenswrapper[5123]: I1212 15:32:48.723023 5123 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a52zfd" event={"ID":"bd877609-a269-4a6f-a64d-d671332d8496","Type":"ContainerStarted","Data":"3f1c850dced860636b92ebc1d57c5dd62ba0880deca84ac2de5fc89d76e3c8e5"} Dec 12 15:32:49 crc kubenswrapper[5123]: I1212 15:32:49.735683 5123 generic.go:358] "Generic (PLEG): container finished" podID="bd877609-a269-4a6f-a64d-d671332d8496" containerID="3f1c850dced860636b92ebc1d57c5dd62ba0880deca84ac2de5fc89d76e3c8e5" exitCode=0 Dec 12 15:32:49 crc kubenswrapper[5123]: I1212 15:32:49.735808 5123 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a52zfd" event={"ID":"bd877609-a269-4a6f-a64d-d671332d8496","Type":"ContainerDied","Data":"3f1c850dced860636b92ebc1d57c5dd62ba0880deca84ac2de5fc89d76e3c8e5"} Dec 12 15:32:49 crc kubenswrapper[5123]: I1212 15:32:49.738126 5123 generic.go:358] "Generic 
(PLEG): container finished" podID="980b7482-44df-44b6-933e-085997e6ac3d" containerID="2af96c167d6fedf0eb40a42dda74a11bd6e5231a31202e941e10a4704d84d22f" exitCode=0 Dec 12 15:32:49 crc kubenswrapper[5123]: I1212 15:32:49.738246 5123 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rtzqw" event={"ID":"980b7482-44df-44b6-933e-085997e6ac3d","Type":"ContainerDied","Data":"2af96c167d6fedf0eb40a42dda74a11bd6e5231a31202e941e10a4704d84d22f"} Dec 12 15:32:53 crc kubenswrapper[5123]: I1212 15:32:53.532594 5123 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-86648f486b-lldf4"] Dec 12 15:32:53 crc kubenswrapper[5123]: I1212 15:32:53.539054 5123 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="760e8827-777b-4859-b6e6-7a76e7b91284" containerName="util" Dec 12 15:32:53 crc kubenswrapper[5123]: I1212 15:32:53.539085 5123 state_mem.go:107] "Deleted CPUSet assignment" podUID="760e8827-777b-4859-b6e6-7a76e7b91284" containerName="util" Dec 12 15:32:53 crc kubenswrapper[5123]: I1212 15:32:53.539107 5123 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="760e8827-777b-4859-b6e6-7a76e7b91284" containerName="pull" Dec 12 15:32:53 crc kubenswrapper[5123]: I1212 15:32:53.539113 5123 state_mem.go:107] "Deleted CPUSet assignment" podUID="760e8827-777b-4859-b6e6-7a76e7b91284" containerName="pull" Dec 12 15:32:53 crc kubenswrapper[5123]: I1212 15:32:53.539140 5123 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="760e8827-777b-4859-b6e6-7a76e7b91284" containerName="extract" Dec 12 15:32:53 crc kubenswrapper[5123]: I1212 15:32:53.539149 5123 state_mem.go:107] "Deleted CPUSet assignment" podUID="760e8827-777b-4859-b6e6-7a76e7b91284" containerName="extract" Dec 12 15:32:53 crc kubenswrapper[5123]: I1212 15:32:53.539298 5123 memory_manager.go:356] "RemoveStaleState removing state" 
podUID="760e8827-777b-4859-b6e6-7a76e7b91284" containerName="extract" Dec 12 15:32:53 crc kubenswrapper[5123]: I1212 15:32:53.611983 5123 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-86648f486b-lldf4"] Dec 12 15:32:53 crc kubenswrapper[5123]: I1212 15:32:53.612031 5123 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-fbf4cc9d8-7zmgd"] Dec 12 15:32:53 crc kubenswrapper[5123]: I1212 15:32:53.612234 5123 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-86648f486b-lldf4" Dec 12 15:32:53 crc kubenswrapper[5123]: I1212 15:32:53.617628 5123 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operators\"/\"obo-prometheus-operator-dockercfg-2r6tc\"" Dec 12 15:32:53 crc kubenswrapper[5123]: I1212 15:32:53.617828 5123 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-operators\"/\"kube-root-ca.crt\"" Dec 12 15:32:53 crc kubenswrapper[5123]: I1212 15:32:53.617903 5123 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-operators\"/\"openshift-service-ca.crt\"" Dec 12 15:32:53 crc kubenswrapper[5123]: I1212 15:32:53.625557 5123 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-fbf4cc9d8-pk7w7"] Dec 12 15:32:53 crc kubenswrapper[5123]: I1212 15:32:53.630535 5123 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-fbf4cc9d8-7zmgd"] Dec 12 15:32:53 crc kubenswrapper[5123]: I1212 15:32:53.630683 5123 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-fbf4cc9d8-7zmgd" Dec 12 15:32:53 crc kubenswrapper[5123]: I1212 15:32:53.630767 5123 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-fbf4cc9d8-pk7w7" Dec 12 15:32:53 crc kubenswrapper[5123]: I1212 15:32:53.633816 5123 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operators\"/\"obo-prometheus-operator-admission-webhook-dockercfg-hlqgt\"" Dec 12 15:32:53 crc kubenswrapper[5123]: I1212 15:32:53.635257 5123 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-fbf4cc9d8-pk7w7"] Dec 12 15:32:53 crc kubenswrapper[5123]: I1212 15:32:53.644861 5123 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operators\"/\"obo-prometheus-operator-admission-webhook-service-cert\"" Dec 12 15:32:53 crc kubenswrapper[5123]: I1212 15:32:53.705387 5123 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/28c13967-6f73-4e0c-885b-686531415517-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-fbf4cc9d8-pk7w7\" (UID: \"28c13967-6f73-4e0c-885b-686531415517\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-fbf4cc9d8-pk7w7" Dec 12 15:32:53 crc kubenswrapper[5123]: I1212 15:32:53.705453 5123 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/4bf98e51-d2f0-43b3-ba71-e34be352c480-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-fbf4cc9d8-7zmgd\" (UID: \"4bf98e51-d2f0-43b3-ba71-e34be352c480\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-fbf4cc9d8-7zmgd" Dec 12 15:32:53 crc kubenswrapper[5123]: I1212 15:32:53.705492 5123 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/28c13967-6f73-4e0c-885b-686531415517-apiservice-cert\") pod 
\"obo-prometheus-operator-admission-webhook-fbf4cc9d8-pk7w7\" (UID: \"28c13967-6f73-4e0c-885b-686531415517\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-fbf4cc9d8-pk7w7" Dec 12 15:32:53 crc kubenswrapper[5123]: I1212 15:32:53.705522 5123 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k4wz7\" (UniqueName: \"kubernetes.io/projected/8348759e-72f9-43ee-b572-437af5053bf6-kube-api-access-k4wz7\") pod \"obo-prometheus-operator-86648f486b-lldf4\" (UID: \"8348759e-72f9-43ee-b572-437af5053bf6\") " pod="openshift-operators/obo-prometheus-operator-86648f486b-lldf4" Dec 12 15:32:53 crc kubenswrapper[5123]: I1212 15:32:53.705560 5123 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/4bf98e51-d2f0-43b3-ba71-e34be352c480-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-fbf4cc9d8-7zmgd\" (UID: \"4bf98e51-d2f0-43b3-ba71-e34be352c480\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-fbf4cc9d8-7zmgd" Dec 12 15:32:53 crc kubenswrapper[5123]: I1212 15:32:53.805299 5123 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-operators/observability-operator-78c97476f4-6g959"] Dec 12 15:32:53 crc kubenswrapper[5123]: I1212 15:32:53.806515 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/28c13967-6f73-4e0c-885b-686531415517-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-fbf4cc9d8-pk7w7\" (UID: \"28c13967-6f73-4e0c-885b-686531415517\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-fbf4cc9d8-pk7w7" Dec 12 15:32:53 crc kubenswrapper[5123]: I1212 15:32:53.806694 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: 
\"kubernetes.io/secret/4bf98e51-d2f0-43b3-ba71-e34be352c480-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-fbf4cc9d8-7zmgd\" (UID: \"4bf98e51-d2f0-43b3-ba71-e34be352c480\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-fbf4cc9d8-7zmgd" Dec 12 15:32:53 crc kubenswrapper[5123]: I1212 15:32:53.806736 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/28c13967-6f73-4e0c-885b-686531415517-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-fbf4cc9d8-pk7w7\" (UID: \"28c13967-6f73-4e0c-885b-686531415517\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-fbf4cc9d8-pk7w7" Dec 12 15:32:53 crc kubenswrapper[5123]: I1212 15:32:53.806770 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-k4wz7\" (UniqueName: \"kubernetes.io/projected/8348759e-72f9-43ee-b572-437af5053bf6-kube-api-access-k4wz7\") pod \"obo-prometheus-operator-86648f486b-lldf4\" (UID: \"8348759e-72f9-43ee-b572-437af5053bf6\") " pod="openshift-operators/obo-prometheus-operator-86648f486b-lldf4" Dec 12 15:32:53 crc kubenswrapper[5123]: I1212 15:32:53.807229 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/4bf98e51-d2f0-43b3-ba71-e34be352c480-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-fbf4cc9d8-7zmgd\" (UID: \"4bf98e51-d2f0-43b3-ba71-e34be352c480\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-fbf4cc9d8-7zmgd" Dec 12 15:32:53 crc kubenswrapper[5123]: I1212 15:32:53.819086 5123 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/28c13967-6f73-4e0c-885b-686531415517-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-fbf4cc9d8-pk7w7\" (UID: \"28c13967-6f73-4e0c-885b-686531415517\") " 
pod="openshift-operators/obo-prometheus-operator-admission-webhook-fbf4cc9d8-pk7w7" Dec 12 15:32:53 crc kubenswrapper[5123]: I1212 15:32:53.824170 5123 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/4bf98e51-d2f0-43b3-ba71-e34be352c480-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-fbf4cc9d8-7zmgd\" (UID: \"4bf98e51-d2f0-43b3-ba71-e34be352c480\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-fbf4cc9d8-7zmgd" Dec 12 15:32:53 crc kubenswrapper[5123]: I1212 15:32:53.826816 5123 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/28c13967-6f73-4e0c-885b-686531415517-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-fbf4cc9d8-pk7w7\" (UID: \"28c13967-6f73-4e0c-885b-686531415517\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-fbf4cc9d8-pk7w7" Dec 12 15:32:53 crc kubenswrapper[5123]: I1212 15:32:53.842463 5123 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/4bf98e51-d2f0-43b3-ba71-e34be352c480-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-fbf4cc9d8-7zmgd\" (UID: \"4bf98e51-d2f0-43b3-ba71-e34be352c480\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-fbf4cc9d8-7zmgd" Dec 12 15:32:53 crc kubenswrapper[5123]: I1212 15:32:53.843319 5123 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-k4wz7\" (UniqueName: \"kubernetes.io/projected/8348759e-72f9-43ee-b572-437af5053bf6-kube-api-access-k4wz7\") pod \"obo-prometheus-operator-86648f486b-lldf4\" (UID: \"8348759e-72f9-43ee-b572-437af5053bf6\") " pod="openshift-operators/obo-prometheus-operator-86648f486b-lldf4" Dec 12 15:32:53 crc kubenswrapper[5123]: I1212 15:32:53.862641 5123 kubelet.go:2544] "SyncLoop UPDATE" source="api" 
pods=["openshift-operators/observability-operator-78c97476f4-6g959"] Dec 12 15:32:53 crc kubenswrapper[5123]: I1212 15:32:53.862831 5123 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-operator-78c97476f4-6g959" Dec 12 15:32:53 crc kubenswrapper[5123]: I1212 15:32:53.869154 5123 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operators\"/\"observability-operator-sa-dockercfg-gdg8p\"" Dec 12 15:32:53 crc kubenswrapper[5123]: I1212 15:32:53.869456 5123 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operators\"/\"observability-operator-tls\"" Dec 12 15:32:53 crc kubenswrapper[5123]: I1212 15:32:53.913197 5123 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/860be1ea-57ae-4773-a62e-871c9365127a-observability-operator-tls\") pod \"observability-operator-78c97476f4-6g959\" (UID: \"860be1ea-57ae-4773-a62e-871c9365127a\") " pod="openshift-operators/observability-operator-78c97476f4-6g959" Dec 12 15:32:53 crc kubenswrapper[5123]: I1212 15:32:53.913355 5123 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9qjcz\" (UniqueName: \"kubernetes.io/projected/860be1ea-57ae-4773-a62e-871c9365127a-kube-api-access-9qjcz\") pod \"observability-operator-78c97476f4-6g959\" (UID: \"860be1ea-57ae-4773-a62e-871c9365127a\") " pod="openshift-operators/observability-operator-78c97476f4-6g959" Dec 12 15:32:53 crc kubenswrapper[5123]: I1212 15:32:53.975568 5123 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-operators/perses-operator-68bdb49cbf-n2dxz"] Dec 12 15:32:54 crc kubenswrapper[5123]: I1212 15:32:54.019796 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-9qjcz\" (UniqueName: 
\"kubernetes.io/projected/860be1ea-57ae-4773-a62e-871c9365127a-kube-api-access-9qjcz\") pod \"observability-operator-78c97476f4-6g959\" (UID: \"860be1ea-57ae-4773-a62e-871c9365127a\") " pod="openshift-operators/observability-operator-78c97476f4-6g959" Dec 12 15:32:54 crc kubenswrapper[5123]: I1212 15:32:54.019944 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/860be1ea-57ae-4773-a62e-871c9365127a-observability-operator-tls\") pod \"observability-operator-78c97476f4-6g959\" (UID: \"860be1ea-57ae-4773-a62e-871c9365127a\") " pod="openshift-operators/observability-operator-78c97476f4-6g959" Dec 12 15:32:54 crc kubenswrapper[5123]: I1212 15:32:54.021393 5123 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-86648f486b-lldf4" Dec 12 15:32:54 crc kubenswrapper[5123]: I1212 15:32:54.043709 5123 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-fbf4cc9d8-7zmgd" Dec 12 15:32:54 crc kubenswrapper[5123]: I1212 15:32:54.046032 5123 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-9qjcz\" (UniqueName: \"kubernetes.io/projected/860be1ea-57ae-4773-a62e-871c9365127a-kube-api-access-9qjcz\") pod \"observability-operator-78c97476f4-6g959\" (UID: \"860be1ea-57ae-4773-a62e-871c9365127a\") " pod="openshift-operators/observability-operator-78c97476f4-6g959" Dec 12 15:32:54 crc kubenswrapper[5123]: I1212 15:32:54.153266 5123 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-fbf4cc9d8-pk7w7" Dec 12 15:32:54 crc kubenswrapper[5123]: I1212 15:32:54.154200 5123 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/860be1ea-57ae-4773-a62e-871c9365127a-observability-operator-tls\") pod \"observability-operator-78c97476f4-6g959\" (UID: \"860be1ea-57ae-4773-a62e-871c9365127a\") " pod="openshift-operators/observability-operator-78c97476f4-6g959" Dec 12 15:32:54 crc kubenswrapper[5123]: I1212 15:32:54.202369 5123 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-operator-78c97476f4-6g959" Dec 12 15:32:54 crc kubenswrapper[5123]: I1212 15:32:54.698366 5123 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rtzqw" event={"ID":"980b7482-44df-44b6-933e-085997e6ac3d","Type":"ContainerStarted","Data":"732f9febfd30fe500bb732c751be907d56053268e2814f0c4fa44ac95161b65f"} Dec 12 15:32:54 crc kubenswrapper[5123]: I1212 15:32:54.699094 5123 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operators/perses-operator-68bdb49cbf-n2dxz"] Dec 12 15:32:54 crc kubenswrapper[5123]: I1212 15:32:54.698648 5123 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/perses-operator-68bdb49cbf-n2dxz" Dec 12 15:32:54 crc kubenswrapper[5123]: I1212 15:32:54.705240 5123 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operators\"/\"perses-operator-dockercfg-878z4\"" Dec 12 15:32:54 crc kubenswrapper[5123]: I1212 15:32:54.901016 5123 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b2w6h\" (UniqueName: \"kubernetes.io/projected/266da63e-0a12-457d-a587-e5e6857fbee0-kube-api-access-b2w6h\") pod \"perses-operator-68bdb49cbf-n2dxz\" (UID: \"266da63e-0a12-457d-a587-e5e6857fbee0\") " pod="openshift-operators/perses-operator-68bdb49cbf-n2dxz" Dec 12 15:32:54 crc kubenswrapper[5123]: I1212 15:32:54.901105 5123 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/266da63e-0a12-457d-a587-e5e6857fbee0-openshift-service-ca\") pod \"perses-operator-68bdb49cbf-n2dxz\" (UID: \"266da63e-0a12-457d-a587-e5e6857fbee0\") " pod="openshift-operators/perses-operator-68bdb49cbf-n2dxz" Dec 12 15:32:55 crc kubenswrapper[5123]: I1212 15:32:55.003298 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-b2w6h\" (UniqueName: \"kubernetes.io/projected/266da63e-0a12-457d-a587-e5e6857fbee0-kube-api-access-b2w6h\") pod \"perses-operator-68bdb49cbf-n2dxz\" (UID: \"266da63e-0a12-457d-a587-e5e6857fbee0\") " pod="openshift-operators/perses-operator-68bdb49cbf-n2dxz" Dec 12 15:32:55 crc kubenswrapper[5123]: I1212 15:32:55.003381 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/266da63e-0a12-457d-a587-e5e6857fbee0-openshift-service-ca\") pod \"perses-operator-68bdb49cbf-n2dxz\" (UID: \"266da63e-0a12-457d-a587-e5e6857fbee0\") " 
pod="openshift-operators/perses-operator-68bdb49cbf-n2dxz" Dec 12 15:32:55 crc kubenswrapper[5123]: I1212 15:32:55.005148 5123 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/266da63e-0a12-457d-a587-e5e6857fbee0-openshift-service-ca\") pod \"perses-operator-68bdb49cbf-n2dxz\" (UID: \"266da63e-0a12-457d-a587-e5e6857fbee0\") " pod="openshift-operators/perses-operator-68bdb49cbf-n2dxz" Dec 12 15:32:55 crc kubenswrapper[5123]: I1212 15:32:55.009385 5123 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-fbf4cc9d8-7zmgd"] Dec 12 15:32:55 crc kubenswrapper[5123]: I1212 15:32:55.037999 5123 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-b2w6h\" (UniqueName: \"kubernetes.io/projected/266da63e-0a12-457d-a587-e5e6857fbee0-kube-api-access-b2w6h\") pod \"perses-operator-68bdb49cbf-n2dxz\" (UID: \"266da63e-0a12-457d-a587-e5e6857fbee0\") " pod="openshift-operators/perses-operator-68bdb49cbf-n2dxz" Dec 12 15:32:55 crc kubenswrapper[5123]: I1212 15:32:55.044954 5123 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-86648f486b-lldf4"] Dec 12 15:32:55 crc kubenswrapper[5123]: I1212 15:32:55.189885 5123 generic.go:358] "Generic (PLEG): container finished" podID="980b7482-44df-44b6-933e-085997e6ac3d" containerID="732f9febfd30fe500bb732c751be907d56053268e2814f0c4fa44ac95161b65f" exitCode=0 Dec 12 15:32:55 crc kubenswrapper[5123]: I1212 15:32:55.189977 5123 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rtzqw" event={"ID":"980b7482-44df-44b6-933e-085997e6ac3d","Type":"ContainerDied","Data":"732f9febfd30fe500bb732c751be907d56053268e2814f0c4fa44ac95161b65f"} Dec 12 15:32:55 crc kubenswrapper[5123]: I1212 15:32:55.191542 5123 kubelet.go:2569] "SyncLoop (PLEG): event for pod" 
pod="openshift-operators/obo-prometheus-operator-admission-webhook-fbf4cc9d8-7zmgd" event={"ID":"4bf98e51-d2f0-43b3-ba71-e34be352c480","Type":"ContainerStarted","Data":"2af849333393ad114202cf9edc8867fe370f16a8f5b6827a929795ef2623c77c"} Dec 12 15:32:55 crc kubenswrapper[5123]: I1212 15:32:55.195594 5123 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-86648f486b-lldf4" event={"ID":"8348759e-72f9-43ee-b572-437af5053bf6","Type":"ContainerStarted","Data":"2c4b52e71f9dc440be5aab0780d01461cc6518c7ecd7e89e87d71b050fbb10a8"} Dec 12 15:32:55 crc kubenswrapper[5123]: I1212 15:32:55.270903 5123 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/perses-operator-68bdb49cbf-n2dxz" Dec 12 15:32:55 crc kubenswrapper[5123]: I1212 15:32:55.533343 5123 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operators/observability-operator-78c97476f4-6g959"] Dec 12 15:32:55 crc kubenswrapper[5123]: I1212 15:32:55.605311 5123 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-fbf4cc9d8-pk7w7"] Dec 12 15:32:56 crc kubenswrapper[5123]: I1212 15:32:56.224431 5123 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rtzqw" event={"ID":"980b7482-44df-44b6-933e-085997e6ac3d","Type":"ContainerStarted","Data":"e9a68ac36ebfa21fd94fce2a37058f9e8237881363f308f6fb77f0022f5f45a2"} Dec 12 15:32:56 crc kubenswrapper[5123]: I1212 15:32:56.237725 5123 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-fbf4cc9d8-pk7w7" event={"ID":"28c13967-6f73-4e0c-885b-686531415517","Type":"ContainerStarted","Data":"3859801cd7ec0b94b2c826693e50863f4deae13f9f35750791cc0c58ca716c50"} Dec 12 15:32:56 crc kubenswrapper[5123]: I1212 15:32:56.248528 5123 kubelet.go:2569] "SyncLoop (PLEG): event for pod" 
pod="openshift-operators/observability-operator-78c97476f4-6g959" event={"ID":"860be1ea-57ae-4773-a62e-871c9365127a","Type":"ContainerStarted","Data":"bf168f1171aa3406857fc80ff4ba13ff5fa2772cfcefd00346a97083390390db"} Dec 12 15:32:56 crc kubenswrapper[5123]: I1212 15:32:56.285309 5123 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operators/perses-operator-68bdb49cbf-n2dxz"] Dec 12 15:32:56 crc kubenswrapper[5123]: I1212 15:32:56.299076 5123 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-rtzqw" podStartSLOduration=8.074439964 podStartE2EDuration="11.299043739s" podCreationTimestamp="2025-12-12 15:32:45 +0000 UTC" firstStartedPulling="2025-12-12 15:32:49.739058837 +0000 UTC m=+798.549011348" lastFinishedPulling="2025-12-12 15:32:52.963662612 +0000 UTC m=+801.773615123" observedRunningTime="2025-12-12 15:32:56.282357756 +0000 UTC m=+805.092310277" watchObservedRunningTime="2025-12-12 15:32:56.299043739 +0000 UTC m=+805.108996250" Dec 12 15:32:56 crc kubenswrapper[5123]: W1212 15:32:56.313149 5123 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod266da63e_0a12_457d_a587_e5e6857fbee0.slice/crio-55a254f373c5df4ddf05d96395713db21bc130eb3a6d886c30dbd3f40bc1e2da WatchSource:0}: Error finding container 55a254f373c5df4ddf05d96395713db21bc130eb3a6d886c30dbd3f40bc1e2da: Status 404 returned error can't find the container with id 55a254f373c5df4ddf05d96395713db21bc130eb3a6d886c30dbd3f40bc1e2da Dec 12 15:32:56 crc kubenswrapper[5123]: I1212 15:32:56.338371 5123 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/certified-operators-rtzqw" Dec 12 15:32:56 crc kubenswrapper[5123]: I1212 15:32:56.341998 5123 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-rtzqw" Dec 12 15:32:57 crc kubenswrapper[5123]: 
I1212 15:32:57.271300 5123 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operators/perses-operator-68bdb49cbf-n2dxz" event={"ID":"266da63e-0a12-457d-a587-e5e6857fbee0","Type":"ContainerStarted","Data":"55a254f373c5df4ddf05d96395713db21bc130eb3a6d886c30dbd3f40bc1e2da"} Dec 12 15:32:57 crc kubenswrapper[5123]: I1212 15:32:57.329907 5123 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-gzcpb" Dec 12 15:32:57 crc kubenswrapper[5123]: I1212 15:32:57.447594 5123 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-gzcpb" Dec 12 15:32:57 crc kubenswrapper[5123]: I1212 15:32:57.575487 5123 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-rtzqw" podUID="980b7482-44df-44b6-933e-085997e6ac3d" containerName="registry-server" probeResult="failure" output=< Dec 12 15:32:57 crc kubenswrapper[5123]: timeout: failed to connect service ":50051" within 1s Dec 12 15:32:57 crc kubenswrapper[5123]: > Dec 12 15:32:57 crc kubenswrapper[5123]: I1212 15:32:57.998466 5123 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/elastic-operator-7df86779b6-jgqwz"] Dec 12 15:32:58 crc kubenswrapper[5123]: I1212 15:32:58.008538 5123 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/elastic-operator-7df86779b6-jgqwz" Dec 12 15:32:58 crc kubenswrapper[5123]: I1212 15:32:58.013334 5123 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"elastic-operator-service-cert\"" Dec 12 15:32:58 crc kubenswrapper[5123]: I1212 15:32:58.013619 5123 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"openshift-service-ca.crt\"" Dec 12 15:32:58 crc kubenswrapper[5123]: I1212 15:32:58.013765 5123 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"elastic-operator-dockercfg-fdv4f\"" Dec 12 15:32:58 crc kubenswrapper[5123]: I1212 15:32:58.021010 5123 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"kube-root-ca.crt\"" Dec 12 15:32:58 crc kubenswrapper[5123]: I1212 15:32:58.036659 5123 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/elastic-operator-7df86779b6-jgqwz"] Dec 12 15:32:58 crc kubenswrapper[5123]: I1212 15:32:58.068748 5123 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-c7ms4"] Dec 12 15:32:58 crc kubenswrapper[5123]: I1212 15:32:58.075099 5123 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-c7ms4" Dec 12 15:32:58 crc kubenswrapper[5123]: I1212 15:32:58.339788 5123 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/8f41d4bb-7885-4478-b1e3-31744af98ede-apiservice-cert\") pod \"elastic-operator-7df86779b6-jgqwz\" (UID: \"8f41d4bb-7885-4478-b1e3-31744af98ede\") " pod="service-telemetry/elastic-operator-7df86779b6-jgqwz" Dec 12 15:32:58 crc kubenswrapper[5123]: I1212 15:32:58.339945 5123 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/8f41d4bb-7885-4478-b1e3-31744af98ede-webhook-cert\") pod \"elastic-operator-7df86779b6-jgqwz\" (UID: \"8f41d4bb-7885-4478-b1e3-31744af98ede\") " pod="service-telemetry/elastic-operator-7df86779b6-jgqwz" Dec 12 15:32:58 crc kubenswrapper[5123]: I1212 15:32:58.340002 5123 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c2ll6\" (UniqueName: \"kubernetes.io/projected/8f41d4bb-7885-4478-b1e3-31744af98ede-kube-api-access-c2ll6\") pod \"elastic-operator-7df86779b6-jgqwz\" (UID: \"8f41d4bb-7885-4478-b1e3-31744af98ede\") " pod="service-telemetry/elastic-operator-7df86779b6-jgqwz" Dec 12 15:32:58 crc kubenswrapper[5123]: I1212 15:32:58.398824 5123 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-c7ms4"] Dec 12 15:32:58 crc kubenswrapper[5123]: I1212 15:32:58.441787 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/8f41d4bb-7885-4478-b1e3-31744af98ede-webhook-cert\") pod \"elastic-operator-7df86779b6-jgqwz\" (UID: \"8f41d4bb-7885-4478-b1e3-31744af98ede\") " pod="service-telemetry/elastic-operator-7df86779b6-jgqwz" Dec 12 15:32:58 crc kubenswrapper[5123]: I1212 15:32:58.441899 
5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-c2ll6\" (UniqueName: \"kubernetes.io/projected/8f41d4bb-7885-4478-b1e3-31744af98ede-kube-api-access-c2ll6\") pod \"elastic-operator-7df86779b6-jgqwz\" (UID: \"8f41d4bb-7885-4478-b1e3-31744af98ede\") " pod="service-telemetry/elastic-operator-7df86779b6-jgqwz" Dec 12 15:32:58 crc kubenswrapper[5123]: I1212 15:32:58.441984 5123 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dbc081ef-e1e7-4976-8234-fe6a1929df17-catalog-content\") pod \"community-operators-c7ms4\" (UID: \"dbc081ef-e1e7-4976-8234-fe6a1929df17\") " pod="openshift-marketplace/community-operators-c7ms4" Dec 12 15:32:58 crc kubenswrapper[5123]: I1212 15:32:58.442067 5123 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nhbts\" (UniqueName: \"kubernetes.io/projected/dbc081ef-e1e7-4976-8234-fe6a1929df17-kube-api-access-nhbts\") pod \"community-operators-c7ms4\" (UID: \"dbc081ef-e1e7-4976-8234-fe6a1929df17\") " pod="openshift-marketplace/community-operators-c7ms4" Dec 12 15:32:58 crc kubenswrapper[5123]: I1212 15:32:58.442103 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/8f41d4bb-7885-4478-b1e3-31744af98ede-apiservice-cert\") pod \"elastic-operator-7df86779b6-jgqwz\" (UID: \"8f41d4bb-7885-4478-b1e3-31744af98ede\") " pod="service-telemetry/elastic-operator-7df86779b6-jgqwz" Dec 12 15:32:58 crc kubenswrapper[5123]: I1212 15:32:58.442142 5123 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dbc081ef-e1e7-4976-8234-fe6a1929df17-utilities\") pod \"community-operators-c7ms4\" (UID: \"dbc081ef-e1e7-4976-8234-fe6a1929df17\") " 
pod="openshift-marketplace/community-operators-c7ms4" Dec 12 15:32:58 crc kubenswrapper[5123]: I1212 15:32:58.467015 5123 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/8f41d4bb-7885-4478-b1e3-31744af98ede-apiservice-cert\") pod \"elastic-operator-7df86779b6-jgqwz\" (UID: \"8f41d4bb-7885-4478-b1e3-31744af98ede\") " pod="service-telemetry/elastic-operator-7df86779b6-jgqwz" Dec 12 15:32:58 crc kubenswrapper[5123]: I1212 15:32:58.470137 5123 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/8f41d4bb-7885-4478-b1e3-31744af98ede-webhook-cert\") pod \"elastic-operator-7df86779b6-jgqwz\" (UID: \"8f41d4bb-7885-4478-b1e3-31744af98ede\") " pod="service-telemetry/elastic-operator-7df86779b6-jgqwz" Dec 12 15:32:58 crc kubenswrapper[5123]: I1212 15:32:58.476131 5123 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-c2ll6\" (UniqueName: \"kubernetes.io/projected/8f41d4bb-7885-4478-b1e3-31744af98ede-kube-api-access-c2ll6\") pod \"elastic-operator-7df86779b6-jgqwz\" (UID: \"8f41d4bb-7885-4478-b1e3-31744af98ede\") " pod="service-telemetry/elastic-operator-7df86779b6-jgqwz" Dec 12 15:32:58 crc kubenswrapper[5123]: I1212 15:32:58.543505 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dbc081ef-e1e7-4976-8234-fe6a1929df17-catalog-content\") pod \"community-operators-c7ms4\" (UID: \"dbc081ef-e1e7-4976-8234-fe6a1929df17\") " pod="openshift-marketplace/community-operators-c7ms4" Dec 12 15:32:58 crc kubenswrapper[5123]: I1212 15:32:58.543609 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-nhbts\" (UniqueName: \"kubernetes.io/projected/dbc081ef-e1e7-4976-8234-fe6a1929df17-kube-api-access-nhbts\") pod \"community-operators-c7ms4\" (UID: 
\"dbc081ef-e1e7-4976-8234-fe6a1929df17\") " pod="openshift-marketplace/community-operators-c7ms4" Dec 12 15:32:58 crc kubenswrapper[5123]: I1212 15:32:58.543654 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dbc081ef-e1e7-4976-8234-fe6a1929df17-utilities\") pod \"community-operators-c7ms4\" (UID: \"dbc081ef-e1e7-4976-8234-fe6a1929df17\") " pod="openshift-marketplace/community-operators-c7ms4" Dec 12 15:32:58 crc kubenswrapper[5123]: I1212 15:32:58.545410 5123 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dbc081ef-e1e7-4976-8234-fe6a1929df17-catalog-content\") pod \"community-operators-c7ms4\" (UID: \"dbc081ef-e1e7-4976-8234-fe6a1929df17\") " pod="openshift-marketplace/community-operators-c7ms4" Dec 12 15:32:58 crc kubenswrapper[5123]: I1212 15:32:58.545655 5123 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dbc081ef-e1e7-4976-8234-fe6a1929df17-utilities\") pod \"community-operators-c7ms4\" (UID: \"dbc081ef-e1e7-4976-8234-fe6a1929df17\") " pod="openshift-marketplace/community-operators-c7ms4" Dec 12 15:32:58 crc kubenswrapper[5123]: I1212 15:32:58.864182 5123 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-nhbts\" (UniqueName: \"kubernetes.io/projected/dbc081ef-e1e7-4976-8234-fe6a1929df17-kube-api-access-nhbts\") pod \"community-operators-c7ms4\" (UID: \"dbc081ef-e1e7-4976-8234-fe6a1929df17\") " pod="openshift-marketplace/community-operators-c7ms4" Dec 12 15:32:58 crc kubenswrapper[5123]: I1212 15:32:58.864973 5123 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/elastic-operator-7df86779b6-jgqwz" Dec 12 15:32:58 crc kubenswrapper[5123]: I1212 15:32:58.865315 5123 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-c7ms4" Dec 12 15:33:00 crc kubenswrapper[5123]: I1212 15:33:00.237235 5123 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-c7ms4"] Dec 12 15:33:00 crc kubenswrapper[5123]: W1212 15:33:00.294451 5123 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poddbc081ef_e1e7_4976_8234_fe6a1929df17.slice/crio-997e9272bf079d9c53aa2c6b5048db7cebb97e42d4bc6e5add5add49538ee5a1 WatchSource:0}: Error finding container 997e9272bf079d9c53aa2c6b5048db7cebb97e42d4bc6e5add5add49538ee5a1: Status 404 returned error can't find the container with id 997e9272bf079d9c53aa2c6b5048db7cebb97e42d4bc6e5add5add49538ee5a1 Dec 12 15:33:00 crc kubenswrapper[5123]: I1212 15:33:00.362250 5123 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/elastic-operator-7df86779b6-jgqwz"] Dec 12 15:33:00 crc kubenswrapper[5123]: W1212 15:33:00.397434 5123 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8f41d4bb_7885_4478_b1e3_31744af98ede.slice/crio-9801deef6f24e6afaa77a32aa6668591ea474b84a145eca9c3e7bb85f47c8c8f WatchSource:0}: Error finding container 9801deef6f24e6afaa77a32aa6668591ea474b84a145eca9c3e7bb85f47c8c8f: Status 404 returned error can't find the container with id 9801deef6f24e6afaa77a32aa6668591ea474b84a145eca9c3e7bb85f47c8c8f Dec 12 15:33:00 crc kubenswrapper[5123]: I1212 15:33:00.791819 5123 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-c7ms4" event={"ID":"dbc081ef-e1e7-4976-8234-fe6a1929df17","Type":"ContainerStarted","Data":"997e9272bf079d9c53aa2c6b5048db7cebb97e42d4bc6e5add5add49538ee5a1"} Dec 12 15:33:00 crc kubenswrapper[5123]: I1212 15:33:00.804859 5123 kubelet.go:2569] "SyncLoop (PLEG): event for pod" 
pod="service-telemetry/elastic-operator-7df86779b6-jgqwz" event={"ID":"8f41d4bb-7885-4478-b1e3-31744af98ede","Type":"ContainerStarted","Data":"9801deef6f24e6afaa77a32aa6668591ea474b84a145eca9c3e7bb85f47c8c8f"} Dec 12 15:33:01 crc kubenswrapper[5123]: I1212 15:33:01.855528 5123 generic.go:358] "Generic (PLEG): container finished" podID="dbc081ef-e1e7-4976-8234-fe6a1929df17" containerID="a7612ad1fe34b88eda5a40614cfa57f28ca38100462b2a987313a626aa076ec3" exitCode=0 Dec 12 15:33:01 crc kubenswrapper[5123]: I1212 15:33:01.855879 5123 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-c7ms4" event={"ID":"dbc081ef-e1e7-4976-8234-fe6a1929df17","Type":"ContainerDied","Data":"a7612ad1fe34b88eda5a40614cfa57f28ca38100462b2a987313a626aa076ec3"} Dec 12 15:33:03 crc kubenswrapper[5123]: I1212 15:33:03.037334 5123 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-gzcpb"] Dec 12 15:33:03 crc kubenswrapper[5123]: I1212 15:33:03.037974 5123 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-gzcpb" podUID="22184a26-4a56-48c5-9e60-51dcd636efcf" containerName="registry-server" containerID="cri-o://c5184eacb3b1a67348273a07851eb8d97255588d4906101766e912356d126cb2" gracePeriod=2 Dec 12 15:33:03 crc kubenswrapper[5123]: I1212 15:33:03.827447 5123 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-gzcpb" Dec 12 15:33:03 crc kubenswrapper[5123]: I1212 15:33:03.883817 5123 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-c7ms4" event={"ID":"dbc081ef-e1e7-4976-8234-fe6a1929df17","Type":"ContainerStarted","Data":"7be076a5e6a23afcc4c37942ab385d48a74f0e883b665274dca6005243318eaf"} Dec 12 15:33:03 crc kubenswrapper[5123]: I1212 15:33:03.910611 5123 generic.go:358] "Generic (PLEG): container finished" podID="22184a26-4a56-48c5-9e60-51dcd636efcf" containerID="c5184eacb3b1a67348273a07851eb8d97255588d4906101766e912356d126cb2" exitCode=0 Dec 12 15:33:03 crc kubenswrapper[5123]: I1212 15:33:03.910721 5123 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-gzcpb" event={"ID":"22184a26-4a56-48c5-9e60-51dcd636efcf","Type":"ContainerDied","Data":"c5184eacb3b1a67348273a07851eb8d97255588d4906101766e912356d126cb2"} Dec 12 15:33:03 crc kubenswrapper[5123]: I1212 15:33:03.910766 5123 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-gzcpb" event={"ID":"22184a26-4a56-48c5-9e60-51dcd636efcf","Type":"ContainerDied","Data":"19a162bf9d0f04615c8996d463f82a51d6ad2f8420b7e23250acdbcbcb76004c"} Dec 12 15:33:03 crc kubenswrapper[5123]: I1212 15:33:03.910777 5123 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/22184a26-4a56-48c5-9e60-51dcd636efcf-utilities\") pod \"22184a26-4a56-48c5-9e60-51dcd636efcf\" (UID: \"22184a26-4a56-48c5-9e60-51dcd636efcf\") " Dec 12 15:33:03 crc kubenswrapper[5123]: I1212 15:33:03.910921 5123 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-gzcpb" Dec 12 15:33:03 crc kubenswrapper[5123]: I1212 15:33:03.910791 5123 scope.go:117] "RemoveContainer" containerID="c5184eacb3b1a67348273a07851eb8d97255588d4906101766e912356d126cb2" Dec 12 15:33:03 crc kubenswrapper[5123]: I1212 15:33:03.912664 5123 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/22184a26-4a56-48c5-9e60-51dcd636efcf-utilities" (OuterVolumeSpecName: "utilities") pod "22184a26-4a56-48c5-9e60-51dcd636efcf" (UID: "22184a26-4a56-48c5-9e60-51dcd636efcf"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 12 15:33:04 crc kubenswrapper[5123]: I1212 15:33:04.012731 5123 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n7vhh\" (UniqueName: \"kubernetes.io/projected/22184a26-4a56-48c5-9e60-51dcd636efcf-kube-api-access-n7vhh\") pod \"22184a26-4a56-48c5-9e60-51dcd636efcf\" (UID: \"22184a26-4a56-48c5-9e60-51dcd636efcf\") " Dec 12 15:33:04 crc kubenswrapper[5123]: I1212 15:33:04.012867 5123 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/22184a26-4a56-48c5-9e60-51dcd636efcf-catalog-content\") pod \"22184a26-4a56-48c5-9e60-51dcd636efcf\" (UID: \"22184a26-4a56-48c5-9e60-51dcd636efcf\") " Dec 12 15:33:04 crc kubenswrapper[5123]: I1212 15:33:04.024410 5123 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/22184a26-4a56-48c5-9e60-51dcd636efcf-kube-api-access-n7vhh" (OuterVolumeSpecName: "kube-api-access-n7vhh") pod "22184a26-4a56-48c5-9e60-51dcd636efcf" (UID: "22184a26-4a56-48c5-9e60-51dcd636efcf"). InnerVolumeSpecName "kube-api-access-n7vhh". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 12 15:33:04 crc kubenswrapper[5123]: I1212 15:33:04.034752 5123 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/22184a26-4a56-48c5-9e60-51dcd636efcf-utilities\") on node \"crc\" DevicePath \"\"" Dec 12 15:33:04 crc kubenswrapper[5123]: I1212 15:33:04.034809 5123 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-n7vhh\" (UniqueName: \"kubernetes.io/projected/22184a26-4a56-48c5-9e60-51dcd636efcf-kube-api-access-n7vhh\") on node \"crc\" DevicePath \"\"" Dec 12 15:33:04 crc kubenswrapper[5123]: I1212 15:33:04.167815 5123 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/22184a26-4a56-48c5-9e60-51dcd636efcf-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "22184a26-4a56-48c5-9e60-51dcd636efcf" (UID: "22184a26-4a56-48c5-9e60-51dcd636efcf"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 12 15:33:04 crc kubenswrapper[5123]: I1212 15:33:04.236976 5123 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/22184a26-4a56-48c5-9e60-51dcd636efcf-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 12 15:33:04 crc kubenswrapper[5123]: I1212 15:33:04.321302 5123 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-gzcpb"] Dec 12 15:33:04 crc kubenswrapper[5123]: I1212 15:33:04.324640 5123 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-gzcpb"] Dec 12 15:33:04 crc kubenswrapper[5123]: I1212 15:33:04.936975 5123 generic.go:358] "Generic (PLEG): container finished" podID="dbc081ef-e1e7-4976-8234-fe6a1929df17" containerID="7be076a5e6a23afcc4c37942ab385d48a74f0e883b665274dca6005243318eaf" exitCode=0 Dec 12 15:33:04 crc kubenswrapper[5123]: I1212 15:33:04.937652 5123 kubelet.go:2569] 
"SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-c7ms4" event={"ID":"dbc081ef-e1e7-4976-8234-fe6a1929df17","Type":"ContainerDied","Data":"7be076a5e6a23afcc4c37942ab385d48a74f0e883b665274dca6005243318eaf"} Dec 12 15:33:05 crc kubenswrapper[5123]: I1212 15:33:05.672912 5123 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="22184a26-4a56-48c5-9e60-51dcd636efcf" path="/var/lib/kubelet/pods/22184a26-4a56-48c5-9e60-51dcd636efcf/volumes" Dec 12 15:33:06 crc kubenswrapper[5123]: I1212 15:33:06.437196 5123 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-rtzqw" Dec 12 15:33:06 crc kubenswrapper[5123]: I1212 15:33:06.510432 5123 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-rtzqw" Dec 12 15:33:10 crc kubenswrapper[5123]: I1212 15:33:10.624575 5123 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-rtzqw"] Dec 12 15:33:10 crc kubenswrapper[5123]: I1212 15:33:10.626392 5123 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-rtzqw" podUID="980b7482-44df-44b6-933e-085997e6ac3d" containerName="registry-server" containerID="cri-o://e9a68ac36ebfa21fd94fce2a37058f9e8237881363f308f6fb77f0022f5f45a2" gracePeriod=2 Dec 12 15:33:11 crc kubenswrapper[5123]: I1212 15:33:11.100044 5123 generic.go:358] "Generic (PLEG): container finished" podID="980b7482-44df-44b6-933e-085997e6ac3d" containerID="e9a68ac36ebfa21fd94fce2a37058f9e8237881363f308f6fb77f0022f5f45a2" exitCode=0 Dec 12 15:33:11 crc kubenswrapper[5123]: I1212 15:33:11.100128 5123 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rtzqw" event={"ID":"980b7482-44df-44b6-933e-085997e6ac3d","Type":"ContainerDied","Data":"e9a68ac36ebfa21fd94fce2a37058f9e8237881363f308f6fb77f0022f5f45a2"} Dec 12 
15:33:16 crc kubenswrapper[5123]: E1212 15:33:16.441241 5123 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of e9a68ac36ebfa21fd94fce2a37058f9e8237881363f308f6fb77f0022f5f45a2 is running failed: container process not found" containerID="e9a68ac36ebfa21fd94fce2a37058f9e8237881363f308f6fb77f0022f5f45a2" cmd=["grpc_health_probe","-addr=:50051"] Dec 12 15:33:16 crc kubenswrapper[5123]: E1212 15:33:16.443470 5123 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of e9a68ac36ebfa21fd94fce2a37058f9e8237881363f308f6fb77f0022f5f45a2 is running failed: container process not found" containerID="e9a68ac36ebfa21fd94fce2a37058f9e8237881363f308f6fb77f0022f5f45a2" cmd=["grpc_health_probe","-addr=:50051"] Dec 12 15:33:16 crc kubenswrapper[5123]: E1212 15:33:16.445469 5123 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of e9a68ac36ebfa21fd94fce2a37058f9e8237881363f308f6fb77f0022f5f45a2 is running failed: container process not found" containerID="e9a68ac36ebfa21fd94fce2a37058f9e8237881363f308f6fb77f0022f5f45a2" cmd=["grpc_health_probe","-addr=:50051"] Dec 12 15:33:16 crc kubenswrapper[5123]: E1212 15:33:16.445618 5123 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of e9a68ac36ebfa21fd94fce2a37058f9e8237881363f308f6fb77f0022f5f45a2 is running failed: container process not found" probeType="Readiness" pod="openshift-marketplace/certified-operators-rtzqw" podUID="980b7482-44df-44b6-933e-085997e6ac3d" containerName="registry-server" probeResult="unknown" Dec 12 15:33:25 crc kubenswrapper[5123]: I1212 15:33:25.128527 5123 scope.go:117] "RemoveContainer" 
containerID="4502810cf242ad7fa2c02c7fad2b4de8e92bdbd6d585143f7a1453d672ba2269" Dec 12 15:33:26 crc kubenswrapper[5123]: E1212 15:33:26.447252 5123 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of e9a68ac36ebfa21fd94fce2a37058f9e8237881363f308f6fb77f0022f5f45a2 is running failed: container process not found" containerID="e9a68ac36ebfa21fd94fce2a37058f9e8237881363f308f6fb77f0022f5f45a2" cmd=["grpc_health_probe","-addr=:50051"] Dec 12 15:33:26 crc kubenswrapper[5123]: E1212 15:33:26.448635 5123 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of e9a68ac36ebfa21fd94fce2a37058f9e8237881363f308f6fb77f0022f5f45a2 is running failed: container process not found" containerID="e9a68ac36ebfa21fd94fce2a37058f9e8237881363f308f6fb77f0022f5f45a2" cmd=["grpc_health_probe","-addr=:50051"] Dec 12 15:33:26 crc kubenswrapper[5123]: E1212 15:33:26.449607 5123 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of e9a68ac36ebfa21fd94fce2a37058f9e8237881363f308f6fb77f0022f5f45a2 is running failed: container process not found" containerID="e9a68ac36ebfa21fd94fce2a37058f9e8237881363f308f6fb77f0022f5f45a2" cmd=["grpc_health_probe","-addr=:50051"] Dec 12 15:33:26 crc kubenswrapper[5123]: E1212 15:33:26.449690 5123 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of e9a68ac36ebfa21fd94fce2a37058f9e8237881363f308f6fb77f0022f5f45a2 is running failed: container process not found" probeType="Readiness" pod="openshift-marketplace/certified-operators-rtzqw" podUID="980b7482-44df-44b6-933e-085997e6ac3d" containerName="registry-server" probeResult="unknown" Dec 12 15:33:35 crc kubenswrapper[5123]: I1212 15:33:35.764544 5123 util.go:48] "No 
ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-rtzqw" Dec 12 15:33:35 crc kubenswrapper[5123]: I1212 15:33:35.785402 5123 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/980b7482-44df-44b6-933e-085997e6ac3d-utilities\") pod \"980b7482-44df-44b6-933e-085997e6ac3d\" (UID: \"980b7482-44df-44b6-933e-085997e6ac3d\") " Dec 12 15:33:35 crc kubenswrapper[5123]: I1212 15:33:35.785493 5123 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bxprx\" (UniqueName: \"kubernetes.io/projected/980b7482-44df-44b6-933e-085997e6ac3d-kube-api-access-bxprx\") pod \"980b7482-44df-44b6-933e-085997e6ac3d\" (UID: \"980b7482-44df-44b6-933e-085997e6ac3d\") " Dec 12 15:33:35 crc kubenswrapper[5123]: I1212 15:33:35.785634 5123 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/980b7482-44df-44b6-933e-085997e6ac3d-catalog-content\") pod \"980b7482-44df-44b6-933e-085997e6ac3d\" (UID: \"980b7482-44df-44b6-933e-085997e6ac3d\") " Dec 12 15:33:35 crc kubenswrapper[5123]: I1212 15:33:35.797286 5123 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/980b7482-44df-44b6-933e-085997e6ac3d-utilities" (OuterVolumeSpecName: "utilities") pod "980b7482-44df-44b6-933e-085997e6ac3d" (UID: "980b7482-44df-44b6-933e-085997e6ac3d"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 12 15:33:35 crc kubenswrapper[5123]: I1212 15:33:35.804147 5123 scope.go:117] "RemoveContainer" containerID="f9acb03e22b71572347648ced6a8c0245cde8084ca0f7d6c561e5db8abd63b15" Dec 12 15:33:35 crc kubenswrapper[5123]: I1212 15:33:35.807876 5123 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/980b7482-44df-44b6-933e-085997e6ac3d-kube-api-access-bxprx" (OuterVolumeSpecName: "kube-api-access-bxprx") pod "980b7482-44df-44b6-933e-085997e6ac3d" (UID: "980b7482-44df-44b6-933e-085997e6ac3d"). InnerVolumeSpecName "kube-api-access-bxprx". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 12 15:33:35 crc kubenswrapper[5123]: I1212 15:33:35.866188 5123 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/980b7482-44df-44b6-933e-085997e6ac3d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "980b7482-44df-44b6-933e-085997e6ac3d" (UID: "980b7482-44df-44b6-933e-085997e6ac3d"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 12 15:33:35 crc kubenswrapper[5123]: I1212 15:33:35.869397 5123 scope.go:117] "RemoveContainer" containerID="c5184eacb3b1a67348273a07851eb8d97255588d4906101766e912356d126cb2" Dec 12 15:33:35 crc kubenswrapper[5123]: E1212 15:33:35.870329 5123 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c5184eacb3b1a67348273a07851eb8d97255588d4906101766e912356d126cb2\": container with ID starting with c5184eacb3b1a67348273a07851eb8d97255588d4906101766e912356d126cb2 not found: ID does not exist" containerID="c5184eacb3b1a67348273a07851eb8d97255588d4906101766e912356d126cb2" Dec 12 15:33:35 crc kubenswrapper[5123]: I1212 15:33:35.870410 5123 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c5184eacb3b1a67348273a07851eb8d97255588d4906101766e912356d126cb2"} err="failed to get container status \"c5184eacb3b1a67348273a07851eb8d97255588d4906101766e912356d126cb2\": rpc error: code = NotFound desc = could not find container \"c5184eacb3b1a67348273a07851eb8d97255588d4906101766e912356d126cb2\": container with ID starting with c5184eacb3b1a67348273a07851eb8d97255588d4906101766e912356d126cb2 not found: ID does not exist" Dec 12 15:33:35 crc kubenswrapper[5123]: I1212 15:33:35.870444 5123 scope.go:117] "RemoveContainer" containerID="4502810cf242ad7fa2c02c7fad2b4de8e92bdbd6d585143f7a1453d672ba2269" Dec 12 15:33:35 crc kubenswrapper[5123]: E1212 15:33:35.870797 5123 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4502810cf242ad7fa2c02c7fad2b4de8e92bdbd6d585143f7a1453d672ba2269\": container with ID starting with 4502810cf242ad7fa2c02c7fad2b4de8e92bdbd6d585143f7a1453d672ba2269 not found: ID does not exist" containerID="4502810cf242ad7fa2c02c7fad2b4de8e92bdbd6d585143f7a1453d672ba2269" Dec 12 15:33:35 crc kubenswrapper[5123]: I1212 15:33:35.870839 
5123 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4502810cf242ad7fa2c02c7fad2b4de8e92bdbd6d585143f7a1453d672ba2269"} err="failed to get container status \"4502810cf242ad7fa2c02c7fad2b4de8e92bdbd6d585143f7a1453d672ba2269\": rpc error: code = NotFound desc = could not find container \"4502810cf242ad7fa2c02c7fad2b4de8e92bdbd6d585143f7a1453d672ba2269\": container with ID starting with 4502810cf242ad7fa2c02c7fad2b4de8e92bdbd6d585143f7a1453d672ba2269 not found: ID does not exist" Dec 12 15:33:35 crc kubenswrapper[5123]: I1212 15:33:35.870858 5123 scope.go:117] "RemoveContainer" containerID="f9acb03e22b71572347648ced6a8c0245cde8084ca0f7d6c561e5db8abd63b15" Dec 12 15:33:35 crc kubenswrapper[5123]: E1212 15:33:35.871388 5123 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f9acb03e22b71572347648ced6a8c0245cde8084ca0f7d6c561e5db8abd63b15\": container with ID starting with f9acb03e22b71572347648ced6a8c0245cde8084ca0f7d6c561e5db8abd63b15 not found: ID does not exist" containerID="f9acb03e22b71572347648ced6a8c0245cde8084ca0f7d6c561e5db8abd63b15" Dec 12 15:33:35 crc kubenswrapper[5123]: I1212 15:33:35.871456 5123 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f9acb03e22b71572347648ced6a8c0245cde8084ca0f7d6c561e5db8abd63b15"} err="failed to get container status \"f9acb03e22b71572347648ced6a8c0245cde8084ca0f7d6c561e5db8abd63b15\": rpc error: code = NotFound desc = could not find container \"f9acb03e22b71572347648ced6a8c0245cde8084ca0f7d6c561e5db8abd63b15\": container with ID starting with f9acb03e22b71572347648ced6a8c0245cde8084ca0f7d6c561e5db8abd63b15 not found: ID does not exist" Dec 12 15:33:35 crc kubenswrapper[5123]: I1212 15:33:35.929594 5123 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/980b7482-44df-44b6-933e-085997e6ac3d-utilities\") on node 
\"crc\" DevicePath \"\"" Dec 12 15:33:35 crc kubenswrapper[5123]: I1212 15:33:35.929639 5123 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-bxprx\" (UniqueName: \"kubernetes.io/projected/980b7482-44df-44b6-933e-085997e6ac3d-kube-api-access-bxprx\") on node \"crc\" DevicePath \"\"" Dec 12 15:33:35 crc kubenswrapper[5123]: I1212 15:33:35.929656 5123 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/980b7482-44df-44b6-933e-085997e6ac3d-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 12 15:33:36 crc kubenswrapper[5123]: I1212 15:33:36.652681 5123 generic.go:358] "Generic (PLEG): container finished" podID="bd877609-a269-4a6f-a64d-d671332d8496" containerID="00f120c2676173f57eccc003c445ec920aa4116c5fb020b28c193cee2c2cc395" exitCode=0 Dec 12 15:33:36 crc kubenswrapper[5123]: I1212 15:33:36.652766 5123 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a52zfd" event={"ID":"bd877609-a269-4a6f-a64d-d671332d8496","Type":"ContainerDied","Data":"00f120c2676173f57eccc003c445ec920aa4116c5fb020b28c193cee2c2cc395"} Dec 12 15:33:36 crc kubenswrapper[5123]: I1212 15:33:36.656631 5123 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operators/observability-operator-78c97476f4-6g959" event={"ID":"860be1ea-57ae-4773-a62e-871c9365127a","Type":"ContainerStarted","Data":"261b9a9951503df3b22897d65df2b5608f73070b75aea22ed8bd0c0390209d0e"} Dec 12 15:33:36 crc kubenswrapper[5123]: I1212 15:33:36.657601 5123 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-operators/observability-operator-78c97476f4-6g959" Dec 12 15:33:36 crc kubenswrapper[5123]: I1212 15:33:36.665510 5123 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rtzqw" 
event={"ID":"980b7482-44df-44b6-933e-085997e6ac3d","Type":"ContainerDied","Data":"23b411d703f3eaaff4a72b858c8fa204241cb7952deb85e49ba6aa8fa591751b"} Dec 12 15:33:36 crc kubenswrapper[5123]: I1212 15:33:36.665593 5123 scope.go:117] "RemoveContainer" containerID="e9a68ac36ebfa21fd94fce2a37058f9e8237881363f308f6fb77f0022f5f45a2" Dec 12 15:33:36 crc kubenswrapper[5123]: I1212 15:33:36.665811 5123 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-rtzqw" Dec 12 15:33:36 crc kubenswrapper[5123]: I1212 15:33:36.674192 5123 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operators/perses-operator-68bdb49cbf-n2dxz" event={"ID":"266da63e-0a12-457d-a587-e5e6857fbee0","Type":"ContainerStarted","Data":"90d6b205fc9449e051c74e4f9704a93bf89bf009b03ed58cc61be738d4e8ed51"} Dec 12 15:33:36 crc kubenswrapper[5123]: I1212 15:33:36.674694 5123 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-operators/perses-operator-68bdb49cbf-n2dxz" Dec 12 15:33:36 crc kubenswrapper[5123]: I1212 15:33:36.677376 5123 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-fbf4cc9d8-pk7w7" event={"ID":"28c13967-6f73-4e0c-885b-686531415517","Type":"ContainerStarted","Data":"736b9c23c4fa0065315ccd09af95c5608bc6482394d44b67472c7cb5c0b1c742"} Dec 12 15:33:36 crc kubenswrapper[5123]: I1212 15:33:36.684431 5123 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-c7ms4" event={"ID":"dbc081ef-e1e7-4976-8234-fe6a1929df17","Type":"ContainerStarted","Data":"e6f54891f28504ad1003305510b9442e611a7a378cb64d9134c2819642d6f46f"} Dec 12 15:33:36 crc kubenswrapper[5123]: I1212 15:33:36.696384 5123 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/elastic-operator-7df86779b6-jgqwz" 
event={"ID":"8f41d4bb-7885-4478-b1e3-31744af98ede","Type":"ContainerStarted","Data":"2c582a8cda3194cbad61a688eb1d8e130efa9d07257d2bfc07384002555ac61a"} Dec 12 15:33:36 crc kubenswrapper[5123]: I1212 15:33:36.708250 5123 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-fbf4cc9d8-7zmgd" event={"ID":"4bf98e51-d2f0-43b3-ba71-e34be352c480","Type":"ContainerStarted","Data":"3de725b7be7154c1ae73bd131a1bf07de586baf8243e35330dc3d927ab58e0be"} Dec 12 15:33:36 crc kubenswrapper[5123]: I1212 15:33:36.717136 5123 scope.go:117] "RemoveContainer" containerID="732f9febfd30fe500bb732c751be907d56053268e2814f0c4fa44ac95161b65f" Dec 12 15:33:36 crc kubenswrapper[5123]: I1212 15:33:36.722325 5123 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operators/observability-operator-78c97476f4-6g959" Dec 12 15:33:36 crc kubenswrapper[5123]: I1212 15:33:36.723074 5123 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-admission-webhook-fbf4cc9d8-pk7w7" podStartSLOduration=3.506988739 podStartE2EDuration="43.723051211s" podCreationTimestamp="2025-12-12 15:32:53 +0000 UTC" firstStartedPulling="2025-12-12 15:32:55.654928387 +0000 UTC m=+804.464880898" lastFinishedPulling="2025-12-12 15:33:35.870990859 +0000 UTC m=+844.680943370" observedRunningTime="2025-12-12 15:33:36.708735694 +0000 UTC m=+845.518688205" watchObservedRunningTime="2025-12-12 15:33:36.723051211 +0000 UTC m=+845.533003722" Dec 12 15:33:36 crc kubenswrapper[5123]: I1212 15:33:36.733881 5123 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-86648f486b-lldf4" event={"ID":"8348759e-72f9-43ee-b572-437af5053bf6","Type":"ContainerStarted","Data":"83925b3625746d9894e1d098ed498c4d15bab57520807eb1a9deed9eed573689"} Dec 12 15:33:36 crc kubenswrapper[5123]: I1212 15:33:36.929568 5123 pod_startup_latency_tracker.go:104] 
"Observed pod startup duration" pod="openshift-operators/perses-operator-68bdb49cbf-n2dxz" podStartSLOduration=4.404304072 podStartE2EDuration="43.92954207s" podCreationTimestamp="2025-12-12 15:32:53 +0000 UTC" firstStartedPulling="2025-12-12 15:32:56.344829862 +0000 UTC m=+805.154782373" lastFinishedPulling="2025-12-12 15:33:35.87006786 +0000 UTC m=+844.680020371" observedRunningTime="2025-12-12 15:33:36.92482473 +0000 UTC m=+845.734777251" watchObservedRunningTime="2025-12-12 15:33:36.92954207 +0000 UTC m=+845.739494591" Dec 12 15:33:36 crc kubenswrapper[5123]: I1212 15:33:36.962239 5123 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/observability-operator-78c97476f4-6g959" podStartSLOduration=3.642227386 podStartE2EDuration="43.962189823s" podCreationTimestamp="2025-12-12 15:32:53 +0000 UTC" firstStartedPulling="2025-12-12 15:32:55.548410378 +0000 UTC m=+804.358362889" lastFinishedPulling="2025-12-12 15:33:35.868372815 +0000 UTC m=+844.678325326" observedRunningTime="2025-12-12 15:33:36.956182911 +0000 UTC m=+845.766135442" watchObservedRunningTime="2025-12-12 15:33:36.962189823 +0000 UTC m=+845.772142334" Dec 12 15:33:36 crc kubenswrapper[5123]: I1212 15:33:36.973578 5123 scope.go:117] "RemoveContainer" containerID="2af96c167d6fedf0eb40a42dda74a11bd6e5231a31202e941e10a4704d84d22f" Dec 12 15:33:37 crc kubenswrapper[5123]: I1212 15:33:37.044567 5123 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/elastic-operator-7df86779b6-jgqwz" podStartSLOduration=4.62165108 podStartE2EDuration="40.04454372s" podCreationTimestamp="2025-12-12 15:32:57 +0000 UTC" firstStartedPulling="2025-12-12 15:33:00.425103026 +0000 UTC m=+809.235055537" lastFinishedPulling="2025-12-12 15:33:35.847995656 +0000 UTC m=+844.657948177" observedRunningTime="2025-12-12 15:33:37.038461806 +0000 UTC m=+845.848414337" watchObservedRunningTime="2025-12-12 15:33:37.04454372 +0000 UTC m=+845.854496231" Dec 12 
15:33:37 crc kubenswrapper[5123]: I1212 15:33:37.106066 5123 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-admission-webhook-fbf4cc9d8-7zmgd" podStartSLOduration=3.228319536 podStartE2EDuration="44.106043993s" podCreationTimestamp="2025-12-12 15:32:53 +0000 UTC" firstStartedPulling="2025-12-12 15:32:55.098669836 +0000 UTC m=+803.908622357" lastFinishedPulling="2025-12-12 15:33:35.976394303 +0000 UTC m=+844.786346814" observedRunningTime="2025-12-12 15:33:37.10499119 +0000 UTC m=+845.914943701" watchObservedRunningTime="2025-12-12 15:33:37.106043993 +0000 UTC m=+845.915996504" Dec 12 15:33:37 crc kubenswrapper[5123]: I1212 15:33:37.106353 5123 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-86648f486b-lldf4" podStartSLOduration=3.359021738 podStartE2EDuration="44.106343953s" podCreationTimestamp="2025-12-12 15:32:53 +0000 UTC" firstStartedPulling="2025-12-12 15:32:55.099255865 +0000 UTC m=+803.909208376" lastFinishedPulling="2025-12-12 15:33:35.84657808 +0000 UTC m=+844.656530591" observedRunningTime="2025-12-12 15:33:37.079127904 +0000 UTC m=+845.889080445" watchObservedRunningTime="2025-12-12 15:33:37.106343953 +0000 UTC m=+845.916296464" Dec 12 15:33:37 crc kubenswrapper[5123]: I1212 15:33:37.137699 5123 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-rtzqw"] Dec 12 15:33:37 crc kubenswrapper[5123]: I1212 15:33:37.138807 5123 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-rtzqw"] Dec 12 15:33:37 crc kubenswrapper[5123]: I1212 15:33:37.268776 5123 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-c7ms4" podStartSLOduration=38.191384304 podStartE2EDuration="39.268741085s" podCreationTimestamp="2025-12-12 15:32:58 +0000 UTC" firstStartedPulling="2025-12-12 
15:33:01.856712812 +0000 UTC m=+810.666665323" lastFinishedPulling="2025-12-12 15:33:02.934069593 +0000 UTC m=+811.744022104" observedRunningTime="2025-12-12 15:33:37.159444257 +0000 UTC m=+845.969396788" watchObservedRunningTime="2025-12-12 15:33:37.268741085 +0000 UTC m=+846.078693596" Dec 12 15:33:37 crc kubenswrapper[5123]: I1212 15:33:37.649113 5123 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="980b7482-44df-44b6-933e-085997e6ac3d" path="/var/lib/kubelet/pods/980b7482-44df-44b6-933e-085997e6ac3d/volumes" Dec 12 15:33:37 crc kubenswrapper[5123]: I1212 15:33:37.747034 5123 generic.go:358] "Generic (PLEG): container finished" podID="bd877609-a269-4a6f-a64d-d671332d8496" containerID="b9d15791ff49cc2ebf916088bd2eb63b0a4bf61fae063807307507ea54d8e782" exitCode=0 Dec 12 15:33:37 crc kubenswrapper[5123]: I1212 15:33:37.747262 5123 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a52zfd" event={"ID":"bd877609-a269-4a6f-a64d-d671332d8496","Type":"ContainerDied","Data":"b9d15791ff49cc2ebf916088bd2eb63b0a4bf61fae063807307507ea54d8e782"} Dec 12 15:33:38 crc kubenswrapper[5123]: I1212 15:33:38.108376 5123 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/elasticsearch-es-default-0"] Dec 12 15:33:38 crc kubenswrapper[5123]: I1212 15:33:38.110082 5123 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="980b7482-44df-44b6-933e-085997e6ac3d" containerName="registry-server" Dec 12 15:33:38 crc kubenswrapper[5123]: I1212 15:33:38.110121 5123 state_mem.go:107] "Deleted CPUSet assignment" podUID="980b7482-44df-44b6-933e-085997e6ac3d" containerName="registry-server" Dec 12 15:33:38 crc kubenswrapper[5123]: I1212 15:33:38.110136 5123 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="22184a26-4a56-48c5-9e60-51dcd636efcf" containerName="registry-server" Dec 12 15:33:38 crc kubenswrapper[5123]: I1212 
15:33:38.110145 5123 state_mem.go:107] "Deleted CPUSet assignment" podUID="22184a26-4a56-48c5-9e60-51dcd636efcf" containerName="registry-server" Dec 12 15:33:38 crc kubenswrapper[5123]: I1212 15:33:38.110163 5123 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="980b7482-44df-44b6-933e-085997e6ac3d" containerName="extract-content" Dec 12 15:33:38 crc kubenswrapper[5123]: I1212 15:33:38.110171 5123 state_mem.go:107] "Deleted CPUSet assignment" podUID="980b7482-44df-44b6-933e-085997e6ac3d" containerName="extract-content" Dec 12 15:33:38 crc kubenswrapper[5123]: I1212 15:33:38.110183 5123 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="980b7482-44df-44b6-933e-085997e6ac3d" containerName="extract-utilities" Dec 12 15:33:38 crc kubenswrapper[5123]: I1212 15:33:38.110191 5123 state_mem.go:107] "Deleted CPUSet assignment" podUID="980b7482-44df-44b6-933e-085997e6ac3d" containerName="extract-utilities" Dec 12 15:33:38 crc kubenswrapper[5123]: I1212 15:33:38.110273 5123 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="22184a26-4a56-48c5-9e60-51dcd636efcf" containerName="extract-utilities" Dec 12 15:33:38 crc kubenswrapper[5123]: I1212 15:33:38.110284 5123 state_mem.go:107] "Deleted CPUSet assignment" podUID="22184a26-4a56-48c5-9e60-51dcd636efcf" containerName="extract-utilities" Dec 12 15:33:38 crc kubenswrapper[5123]: I1212 15:33:38.110295 5123 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="22184a26-4a56-48c5-9e60-51dcd636efcf" containerName="extract-content" Dec 12 15:33:38 crc kubenswrapper[5123]: I1212 15:33:38.110302 5123 state_mem.go:107] "Deleted CPUSet assignment" podUID="22184a26-4a56-48c5-9e60-51dcd636efcf" containerName="extract-content" Dec 12 15:33:38 crc kubenswrapper[5123]: I1212 15:33:38.110535 5123 memory_manager.go:356] "RemoveStaleState removing state" podUID="22184a26-4a56-48c5-9e60-51dcd636efcf" containerName="registry-server" 
Dec 12 15:33:38 crc kubenswrapper[5123]: I1212 15:33:38.110553 5123 memory_manager.go:356] "RemoveStaleState removing state" podUID="980b7482-44df-44b6-933e-085997e6ac3d" containerName="registry-server" Dec 12 15:33:38 crc kubenswrapper[5123]: I1212 15:33:38.122845 5123 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/elasticsearch-es-default-0" Dec 12 15:33:38 crc kubenswrapper[5123]: I1212 15:33:38.131569 5123 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"elasticsearch-es-default-es-transport-certs\"" Dec 12 15:33:38 crc kubenswrapper[5123]: I1212 15:33:38.131641 5123 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"elasticsearch-es-unicast-hosts\"" Dec 12 15:33:38 crc kubenswrapper[5123]: I1212 15:33:38.137000 5123 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"elasticsearch-es-default-es-config\"" Dec 12 15:33:38 crc kubenswrapper[5123]: I1212 15:33:38.137505 5123 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"elasticsearch-es-scripts\"" Dec 12 15:33:38 crc kubenswrapper[5123]: I1212 15:33:38.137886 5123 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"elasticsearch-es-remote-ca\"" Dec 12 15:33:38 crc kubenswrapper[5123]: I1212 15:33:38.140603 5123 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"default-dockercfg-np4w9\"" Dec 12 15:33:38 crc kubenswrapper[5123]: I1212 15:33:38.140878 5123 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"elasticsearch-es-http-certs-internal\"" Dec 12 15:33:38 crc kubenswrapper[5123]: I1212 15:33:38.141079 5123 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"elasticsearch-es-internal-users\"" 
Dec 12 15:33:38 crc kubenswrapper[5123]: I1212 15:33:38.141353 5123 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"elasticsearch-es-xpack-file-realm\"" Dec 12 15:33:38 crc kubenswrapper[5123]: I1212 15:33:38.179366 5123 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/elasticsearch-es-default-0"] Dec 12 15:33:38 crc kubenswrapper[5123]: I1212 15:33:38.209182 5123 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-elasticsearch-config\" (UniqueName: \"kubernetes.io/secret/a4b7933d-a5d4-45b8-8aaf-bec333ffcfe5-elastic-internal-elasticsearch-config\") pod \"elasticsearch-es-default-0\" (UID: \"a4b7933d-a5d4-45b8-8aaf-bec333ffcfe5\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 12 15:33:38 crc kubenswrapper[5123]: I1212 15:33:38.209304 5123 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-elasticsearch-bin-local\" (UniqueName: \"kubernetes.io/empty-dir/a4b7933d-a5d4-45b8-8aaf-bec333ffcfe5-elastic-internal-elasticsearch-bin-local\") pod \"elasticsearch-es-default-0\" (UID: \"a4b7933d-a5d4-45b8-8aaf-bec333ffcfe5\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 12 15:33:38 crc kubenswrapper[5123]: I1212 15:33:38.209340 5123 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-remote-certificate-authorities\" (UniqueName: \"kubernetes.io/secret/a4b7933d-a5d4-45b8-8aaf-bec333ffcfe5-elastic-internal-remote-certificate-authorities\") pod \"elasticsearch-es-default-0\" (UID: \"a4b7933d-a5d4-45b8-8aaf-bec333ffcfe5\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 12 15:33:38 crc kubenswrapper[5123]: I1212 15:33:38.209372 5123 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"downward-api\" (UniqueName: 
\"kubernetes.io/downward-api/a4b7933d-a5d4-45b8-8aaf-bec333ffcfe5-downward-api\") pod \"elasticsearch-es-default-0\" (UID: \"a4b7933d-a5d4-45b8-8aaf-bec333ffcfe5\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 12 15:33:38 crc kubenswrapper[5123]: I1212 15:33:38.209479 5123 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-elasticsearch-config-local\" (UniqueName: \"kubernetes.io/empty-dir/a4b7933d-a5d4-45b8-8aaf-bec333ffcfe5-elastic-internal-elasticsearch-config-local\") pod \"elasticsearch-es-default-0\" (UID: \"a4b7933d-a5d4-45b8-8aaf-bec333ffcfe5\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 12 15:33:38 crc kubenswrapper[5123]: I1212 15:33:38.209507 5123 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-unicast-hosts\" (UniqueName: \"kubernetes.io/configmap/a4b7933d-a5d4-45b8-8aaf-bec333ffcfe5-elastic-internal-unicast-hosts\") pod \"elasticsearch-es-default-0\" (UID: \"a4b7933d-a5d4-45b8-8aaf-bec333ffcfe5\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 12 15:33:38 crc kubenswrapper[5123]: I1212 15:33:38.209532 5123 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elasticsearch-logs\" (UniqueName: \"kubernetes.io/empty-dir/a4b7933d-a5d4-45b8-8aaf-bec333ffcfe5-elasticsearch-logs\") pod \"elasticsearch-es-default-0\" (UID: \"a4b7933d-a5d4-45b8-8aaf-bec333ffcfe5\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 12 15:33:38 crc kubenswrapper[5123]: I1212 15:33:38.209553 5123 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/a4b7933d-a5d4-45b8-8aaf-bec333ffcfe5-tmp-volume\") pod \"elasticsearch-es-default-0\" (UID: \"a4b7933d-a5d4-45b8-8aaf-bec333ffcfe5\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 12 15:33:38 crc kubenswrapper[5123]: 
I1212 15:33:38.209585 5123 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-http-certificates\" (UniqueName: \"kubernetes.io/secret/a4b7933d-a5d4-45b8-8aaf-bec333ffcfe5-elastic-internal-http-certificates\") pod \"elasticsearch-es-default-0\" (UID: \"a4b7933d-a5d4-45b8-8aaf-bec333ffcfe5\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 12 15:33:38 crc kubenswrapper[5123]: I1212 15:33:38.209620 5123 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-xpack-file-realm\" (UniqueName: \"kubernetes.io/secret/a4b7933d-a5d4-45b8-8aaf-bec333ffcfe5-elastic-internal-xpack-file-realm\") pod \"elasticsearch-es-default-0\" (UID: \"a4b7933d-a5d4-45b8-8aaf-bec333ffcfe5\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 12 15:33:38 crc kubenswrapper[5123]: I1212 15:33:38.209663 5123 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-scripts\" (UniqueName: \"kubernetes.io/configmap/a4b7933d-a5d4-45b8-8aaf-bec333ffcfe5-elastic-internal-scripts\") pod \"elasticsearch-es-default-0\" (UID: \"a4b7933d-a5d4-45b8-8aaf-bec333ffcfe5\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 12 15:33:38 crc kubenswrapper[5123]: I1212 15:33:38.209733 5123 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-probe-user\" (UniqueName: \"kubernetes.io/secret/a4b7933d-a5d4-45b8-8aaf-bec333ffcfe5-elastic-internal-probe-user\") pod \"elasticsearch-es-default-0\" (UID: \"a4b7933d-a5d4-45b8-8aaf-bec333ffcfe5\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 12 15:33:38 crc kubenswrapper[5123]: I1212 15:33:38.209805 5123 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-elasticsearch-plugins-local\" (UniqueName: 
\"kubernetes.io/empty-dir/a4b7933d-a5d4-45b8-8aaf-bec333ffcfe5-elastic-internal-elasticsearch-plugins-local\") pod \"elasticsearch-es-default-0\" (UID: \"a4b7933d-a5d4-45b8-8aaf-bec333ffcfe5\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 12 15:33:38 crc kubenswrapper[5123]: I1212 15:33:38.209916 5123 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-transport-certificates\" (UniqueName: \"kubernetes.io/secret/a4b7933d-a5d4-45b8-8aaf-bec333ffcfe5-elastic-internal-transport-certificates\") pod \"elasticsearch-es-default-0\" (UID: \"a4b7933d-a5d4-45b8-8aaf-bec333ffcfe5\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 12 15:33:38 crc kubenswrapper[5123]: I1212 15:33:38.209934 5123 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elasticsearch-data\" (UniqueName: \"kubernetes.io/empty-dir/a4b7933d-a5d4-45b8-8aaf-bec333ffcfe5-elasticsearch-data\") pod \"elasticsearch-es-default-0\" (UID: \"a4b7933d-a5d4-45b8-8aaf-bec333ffcfe5\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 12 15:33:38 crc kubenswrapper[5123]: I1212 15:33:38.324534 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"elastic-internal-elasticsearch-config-local\" (UniqueName: \"kubernetes.io/empty-dir/a4b7933d-a5d4-45b8-8aaf-bec333ffcfe5-elastic-internal-elasticsearch-config-local\") pod \"elasticsearch-es-default-0\" (UID: \"a4b7933d-a5d4-45b8-8aaf-bec333ffcfe5\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 12 15:33:38 crc kubenswrapper[5123]: I1212 15:33:38.324682 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"elastic-internal-unicast-hosts\" (UniqueName: \"kubernetes.io/configmap/a4b7933d-a5d4-45b8-8aaf-bec333ffcfe5-elastic-internal-unicast-hosts\") pod \"elasticsearch-es-default-0\" (UID: \"a4b7933d-a5d4-45b8-8aaf-bec333ffcfe5\") " 
pod="service-telemetry/elasticsearch-es-default-0" Dec 12 15:33:38 crc kubenswrapper[5123]: I1212 15:33:38.324870 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"elasticsearch-logs\" (UniqueName: \"kubernetes.io/empty-dir/a4b7933d-a5d4-45b8-8aaf-bec333ffcfe5-elasticsearch-logs\") pod \"elasticsearch-es-default-0\" (UID: \"a4b7933d-a5d4-45b8-8aaf-bec333ffcfe5\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 12 15:33:38 crc kubenswrapper[5123]: I1212 15:33:38.324923 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/a4b7933d-a5d4-45b8-8aaf-bec333ffcfe5-tmp-volume\") pod \"elasticsearch-es-default-0\" (UID: \"a4b7933d-a5d4-45b8-8aaf-bec333ffcfe5\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 12 15:33:38 crc kubenswrapper[5123]: I1212 15:33:38.325586 5123 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"elastic-internal-elasticsearch-config-local\" (UniqueName: \"kubernetes.io/empty-dir/a4b7933d-a5d4-45b8-8aaf-bec333ffcfe5-elastic-internal-elasticsearch-config-local\") pod \"elasticsearch-es-default-0\" (UID: \"a4b7933d-a5d4-45b8-8aaf-bec333ffcfe5\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 12 15:33:38 crc kubenswrapper[5123]: I1212 15:33:38.325839 5123 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"elasticsearch-logs\" (UniqueName: \"kubernetes.io/empty-dir/a4b7933d-a5d4-45b8-8aaf-bec333ffcfe5-elasticsearch-logs\") pod \"elasticsearch-es-default-0\" (UID: \"a4b7933d-a5d4-45b8-8aaf-bec333ffcfe5\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 12 15:33:38 crc kubenswrapper[5123]: I1212 15:33:38.326180 5123 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/a4b7933d-a5d4-45b8-8aaf-bec333ffcfe5-tmp-volume\") pod \"elasticsearch-es-default-0\" (UID: \"a4b7933d-a5d4-45b8-8aaf-bec333ffcfe5\") " 
pod="service-telemetry/elasticsearch-es-default-0" Dec 12 15:33:38 crc kubenswrapper[5123]: I1212 15:33:38.326528 5123 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"elastic-internal-unicast-hosts\" (UniqueName: \"kubernetes.io/configmap/a4b7933d-a5d4-45b8-8aaf-bec333ffcfe5-elastic-internal-unicast-hosts\") pod \"elasticsearch-es-default-0\" (UID: \"a4b7933d-a5d4-45b8-8aaf-bec333ffcfe5\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 12 15:33:38 crc kubenswrapper[5123]: I1212 15:33:38.326828 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"elastic-internal-http-certificates\" (UniqueName: \"kubernetes.io/secret/a4b7933d-a5d4-45b8-8aaf-bec333ffcfe5-elastic-internal-http-certificates\") pod \"elasticsearch-es-default-0\" (UID: \"a4b7933d-a5d4-45b8-8aaf-bec333ffcfe5\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 12 15:33:38 crc kubenswrapper[5123]: I1212 15:33:38.326902 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"elastic-internal-xpack-file-realm\" (UniqueName: \"kubernetes.io/secret/a4b7933d-a5d4-45b8-8aaf-bec333ffcfe5-elastic-internal-xpack-file-realm\") pod \"elasticsearch-es-default-0\" (UID: \"a4b7933d-a5d4-45b8-8aaf-bec333ffcfe5\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 12 15:33:38 crc kubenswrapper[5123]: I1212 15:33:38.327026 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"elastic-internal-scripts\" (UniqueName: \"kubernetes.io/configmap/a4b7933d-a5d4-45b8-8aaf-bec333ffcfe5-elastic-internal-scripts\") pod \"elasticsearch-es-default-0\" (UID: \"a4b7933d-a5d4-45b8-8aaf-bec333ffcfe5\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 12 15:33:38 crc kubenswrapper[5123]: I1212 15:33:38.327114 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"elastic-internal-probe-user\" (UniqueName: 
\"kubernetes.io/secret/a4b7933d-a5d4-45b8-8aaf-bec333ffcfe5-elastic-internal-probe-user\") pod \"elasticsearch-es-default-0\" (UID: \"a4b7933d-a5d4-45b8-8aaf-bec333ffcfe5\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 12 15:33:38 crc kubenswrapper[5123]: I1212 15:33:38.544245 5123 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"elastic-internal-scripts\" (UniqueName: \"kubernetes.io/configmap/a4b7933d-a5d4-45b8-8aaf-bec333ffcfe5-elastic-internal-scripts\") pod \"elasticsearch-es-default-0\" (UID: \"a4b7933d-a5d4-45b8-8aaf-bec333ffcfe5\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 12 15:33:38 crc kubenswrapper[5123]: I1212 15:33:38.552092 5123 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"elastic-internal-xpack-file-realm\" (UniqueName: \"kubernetes.io/secret/a4b7933d-a5d4-45b8-8aaf-bec333ffcfe5-elastic-internal-xpack-file-realm\") pod \"elasticsearch-es-default-0\" (UID: \"a4b7933d-a5d4-45b8-8aaf-bec333ffcfe5\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 12 15:33:38 crc kubenswrapper[5123]: I1212 15:33:38.555513 5123 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"elastic-internal-probe-user\" (UniqueName: \"kubernetes.io/secret/a4b7933d-a5d4-45b8-8aaf-bec333ffcfe5-elastic-internal-probe-user\") pod \"elasticsearch-es-default-0\" (UID: \"a4b7933d-a5d4-45b8-8aaf-bec333ffcfe5\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 12 15:33:38 crc kubenswrapper[5123]: I1212 15:33:38.563133 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"elastic-internal-elasticsearch-plugins-local\" (UniqueName: \"kubernetes.io/empty-dir/a4b7933d-a5d4-45b8-8aaf-bec333ffcfe5-elastic-internal-elasticsearch-plugins-local\") pod \"elasticsearch-es-default-0\" (UID: \"a4b7933d-a5d4-45b8-8aaf-bec333ffcfe5\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 12 15:33:38 crc kubenswrapper[5123]: I1212 15:33:38.563445 5123 
reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"elastic-internal-transport-certificates\" (UniqueName: \"kubernetes.io/secret/a4b7933d-a5d4-45b8-8aaf-bec333ffcfe5-elastic-internal-transport-certificates\") pod \"elasticsearch-es-default-0\" (UID: \"a4b7933d-a5d4-45b8-8aaf-bec333ffcfe5\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 12 15:33:38 crc kubenswrapper[5123]: I1212 15:33:38.563497 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"elasticsearch-data\" (UniqueName: \"kubernetes.io/empty-dir/a4b7933d-a5d4-45b8-8aaf-bec333ffcfe5-elasticsearch-data\") pod \"elasticsearch-es-default-0\" (UID: \"a4b7933d-a5d4-45b8-8aaf-bec333ffcfe5\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 12 15:33:38 crc kubenswrapper[5123]: I1212 15:33:38.563644 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"elastic-internal-elasticsearch-config\" (UniqueName: \"kubernetes.io/secret/a4b7933d-a5d4-45b8-8aaf-bec333ffcfe5-elastic-internal-elasticsearch-config\") pod \"elasticsearch-es-default-0\" (UID: \"a4b7933d-a5d4-45b8-8aaf-bec333ffcfe5\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 12 15:33:38 crc kubenswrapper[5123]: I1212 15:33:38.563694 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"elastic-internal-elasticsearch-bin-local\" (UniqueName: \"kubernetes.io/empty-dir/a4b7933d-a5d4-45b8-8aaf-bec333ffcfe5-elastic-internal-elasticsearch-bin-local\") pod \"elasticsearch-es-default-0\" (UID: \"a4b7933d-a5d4-45b8-8aaf-bec333ffcfe5\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 12 15:33:38 crc kubenswrapper[5123]: I1212 15:33:38.563730 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"elastic-internal-remote-certificate-authorities\" (UniqueName: \"kubernetes.io/secret/a4b7933d-a5d4-45b8-8aaf-bec333ffcfe5-elastic-internal-remote-certificate-authorities\") pod 
\"elasticsearch-es-default-0\" (UID: \"a4b7933d-a5d4-45b8-8aaf-bec333ffcfe5\") " pod="service-telemetry/elasticsearch-es-default-0"
Dec 12 15:33:38 crc kubenswrapper[5123]: I1212 15:33:38.563843 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"downward-api\" (UniqueName: \"kubernetes.io/downward-api/a4b7933d-a5d4-45b8-8aaf-bec333ffcfe5-downward-api\") pod \"elasticsearch-es-default-0\" (UID: \"a4b7933d-a5d4-45b8-8aaf-bec333ffcfe5\") " pod="service-telemetry/elasticsearch-es-default-0"
Dec 12 15:33:38 crc kubenswrapper[5123]: I1212 15:33:38.577085 5123 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"elastic-internal-elasticsearch-plugins-local\" (UniqueName: \"kubernetes.io/empty-dir/a4b7933d-a5d4-45b8-8aaf-bec333ffcfe5-elastic-internal-elasticsearch-plugins-local\") pod \"elasticsearch-es-default-0\" (UID: \"a4b7933d-a5d4-45b8-8aaf-bec333ffcfe5\") " pod="service-telemetry/elasticsearch-es-default-0"
Dec 12 15:33:38 crc kubenswrapper[5123]: I1212 15:33:38.579861 5123 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"downward-api\" (UniqueName: \"kubernetes.io/downward-api/a4b7933d-a5d4-45b8-8aaf-bec333ffcfe5-downward-api\") pod \"elasticsearch-es-default-0\" (UID: \"a4b7933d-a5d4-45b8-8aaf-bec333ffcfe5\") " pod="service-telemetry/elasticsearch-es-default-0"
Dec 12 15:33:38 crc kubenswrapper[5123]: I1212 15:33:38.580588 5123 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"elastic-internal-http-certificates\" (UniqueName: \"kubernetes.io/secret/a4b7933d-a5d4-45b8-8aaf-bec333ffcfe5-elastic-internal-http-certificates\") pod \"elasticsearch-es-default-0\" (UID: \"a4b7933d-a5d4-45b8-8aaf-bec333ffcfe5\") " pod="service-telemetry/elasticsearch-es-default-0"
Dec 12 15:33:38 crc kubenswrapper[5123]: I1212 15:33:38.581362 5123 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"elastic-internal-elasticsearch-bin-local\" (UniqueName: \"kubernetes.io/empty-dir/a4b7933d-a5d4-45b8-8aaf-bec333ffcfe5-elastic-internal-elasticsearch-bin-local\") pod \"elasticsearch-es-default-0\" (UID: \"a4b7933d-a5d4-45b8-8aaf-bec333ffcfe5\") " pod="service-telemetry/elasticsearch-es-default-0"
Dec 12 15:33:38 crc kubenswrapper[5123]: I1212 15:33:38.581480 5123 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"elasticsearch-data\" (UniqueName: \"kubernetes.io/empty-dir/a4b7933d-a5d4-45b8-8aaf-bec333ffcfe5-elasticsearch-data\") pod \"elasticsearch-es-default-0\" (UID: \"a4b7933d-a5d4-45b8-8aaf-bec333ffcfe5\") " pod="service-telemetry/elasticsearch-es-default-0"
Dec 12 15:33:38 crc kubenswrapper[5123]: I1212 15:33:38.585782 5123 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"elastic-internal-elasticsearch-config\" (UniqueName: \"kubernetes.io/secret/a4b7933d-a5d4-45b8-8aaf-bec333ffcfe5-elastic-internal-elasticsearch-config\") pod \"elasticsearch-es-default-0\" (UID: \"a4b7933d-a5d4-45b8-8aaf-bec333ffcfe5\") " pod="service-telemetry/elasticsearch-es-default-0"
Dec 12 15:33:38 crc kubenswrapper[5123]: I1212 15:33:38.590965 5123 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"elastic-internal-remote-certificate-authorities\" (UniqueName: \"kubernetes.io/secret/a4b7933d-a5d4-45b8-8aaf-bec333ffcfe5-elastic-internal-remote-certificate-authorities\") pod \"elasticsearch-es-default-0\" (UID: \"a4b7933d-a5d4-45b8-8aaf-bec333ffcfe5\") " pod="service-telemetry/elasticsearch-es-default-0"
Dec 12 15:33:38 crc kubenswrapper[5123]: I1212 15:33:38.591393 5123 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"elastic-internal-transport-certificates\" (UniqueName: \"kubernetes.io/secret/a4b7933d-a5d4-45b8-8aaf-bec333ffcfe5-elastic-internal-transport-certificates\") pod \"elasticsearch-es-default-0\" (UID: \"a4b7933d-a5d4-45b8-8aaf-bec333ffcfe5\") " pod="service-telemetry/elasticsearch-es-default-0"
Dec 12 15:33:38 crc kubenswrapper[5123]: I1212 15:33:38.802234 5123 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/elasticsearch-es-default-0"
Dec 12 15:33:38 crc kubenswrapper[5123]: I1212 15:33:38.867014 5123 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/community-operators-c7ms4"
Dec 12 15:33:38 crc kubenswrapper[5123]: I1212 15:33:38.867299 5123 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-c7ms4"
Dec 12 15:33:39 crc kubenswrapper[5123]: I1212 15:33:39.196542 5123 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a52zfd"
Dec 12 15:33:39 crc kubenswrapper[5123]: I1212 15:33:39.212820 5123 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/bd877609-a269-4a6f-a64d-d671332d8496-bundle\") pod \"bd877609-a269-4a6f-a64d-d671332d8496\" (UID: \"bd877609-a269-4a6f-a64d-d671332d8496\") "
Dec 12 15:33:39 crc kubenswrapper[5123]: I1212 15:33:39.212928 5123 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z7bkb\" (UniqueName: \"kubernetes.io/projected/bd877609-a269-4a6f-a64d-d671332d8496-kube-api-access-z7bkb\") pod \"bd877609-a269-4a6f-a64d-d671332d8496\" (UID: \"bd877609-a269-4a6f-a64d-d671332d8496\") "
Dec 12 15:33:39 crc kubenswrapper[5123]: I1212 15:33:39.213269 5123 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/bd877609-a269-4a6f-a64d-d671332d8496-util\") pod \"bd877609-a269-4a6f-a64d-d671332d8496\" (UID: \"bd877609-a269-4a6f-a64d-d671332d8496\") "
Dec 12 15:33:39 crc kubenswrapper[5123]: I1212 15:33:39.215116 5123 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bd877609-a269-4a6f-a64d-d671332d8496-bundle" (OuterVolumeSpecName: "bundle") pod "bd877609-a269-4a6f-a64d-d671332d8496" (UID: "bd877609-a269-4a6f-a64d-d671332d8496"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Dec 12 15:33:39 crc kubenswrapper[5123]: I1212 15:33:39.226299 5123 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bd877609-a269-4a6f-a64d-d671332d8496-kube-api-access-z7bkb" (OuterVolumeSpecName: "kube-api-access-z7bkb") pod "bd877609-a269-4a6f-a64d-d671332d8496" (UID: "bd877609-a269-4a6f-a64d-d671332d8496"). InnerVolumeSpecName "kube-api-access-z7bkb". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Dec 12 15:33:39 crc kubenswrapper[5123]: I1212 15:33:39.239249 5123 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bd877609-a269-4a6f-a64d-d671332d8496-util" (OuterVolumeSpecName: "util") pod "bd877609-a269-4a6f-a64d-d671332d8496" (UID: "bd877609-a269-4a6f-a64d-d671332d8496"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Dec 12 15:33:39 crc kubenswrapper[5123]: I1212 15:33:39.312927 5123 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/elasticsearch-es-default-0"]
Dec 12 15:33:39 crc kubenswrapper[5123]: I1212 15:33:39.316336 5123 reconciler_common.go:299] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/bd877609-a269-4a6f-a64d-d671332d8496-bundle\") on node \"crc\" DevicePath \"\""
Dec 12 15:33:39 crc kubenswrapper[5123]: I1212 15:33:39.316405 5123 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-z7bkb\" (UniqueName: \"kubernetes.io/projected/bd877609-a269-4a6f-a64d-d671332d8496-kube-api-access-z7bkb\") on node \"crc\" DevicePath \"\""
Dec 12 15:33:39 crc kubenswrapper[5123]: I1212 15:33:39.316420 5123 reconciler_common.go:299] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/bd877609-a269-4a6f-a64d-d671332d8496-util\") on node \"crc\" DevicePath \"\""
Dec 12 15:33:39 crc kubenswrapper[5123]: I1212 15:33:39.819787 5123 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a52zfd" event={"ID":"bd877609-a269-4a6f-a64d-d671332d8496","Type":"ContainerDied","Data":"5479ad2a90467f118d74d9736420bb6016306530c14aed506d3697be7226266a"}
Dec 12 15:33:39 crc kubenswrapper[5123]: I1212 15:33:39.819864 5123 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a52zfd"
Dec 12 15:33:39 crc kubenswrapper[5123]: I1212 15:33:39.819936 5123 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5479ad2a90467f118d74d9736420bb6016306530c14aed506d3697be7226266a"
Dec 12 15:33:39 crc kubenswrapper[5123]: I1212 15:33:39.821943 5123 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/elasticsearch-es-default-0" event={"ID":"a4b7933d-a5d4-45b8-8aaf-bec333ffcfe5","Type":"ContainerStarted","Data":"4f712a22f3d151b37cac82f2527ff70b746855d5013621265e9802f49abafd30"}
Dec 12 15:33:39 crc kubenswrapper[5123]: I1212 15:33:39.916508 5123 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-c7ms4" podUID="dbc081ef-e1e7-4976-8234-fe6a1929df17" containerName="registry-server" probeResult="failure" output=<
Dec 12 15:33:39 crc kubenswrapper[5123]: timeout: failed to connect service ":50051" within 1s
Dec 12 15:33:39 crc kubenswrapper[5123]: >
Dec 12 15:33:47 crc kubenswrapper[5123]: I1212 15:33:47.802106 5123 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operators/perses-operator-68bdb49cbf-n2dxz"
Dec 12 15:33:47 crc kubenswrapper[5123]: I1212 15:33:47.841274 5123 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["cert-manager-operator/cert-manager-operator-controller-manager-64c74584c4-9h4f9"]
Dec 12 15:33:47 crc kubenswrapper[5123]: I1212 15:33:47.843093 5123 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="bd877609-a269-4a6f-a64d-d671332d8496" containerName="extract"
Dec 12 15:33:47 crc kubenswrapper[5123]: I1212 15:33:47.843127 5123 state_mem.go:107] "Deleted CPUSet assignment" podUID="bd877609-a269-4a6f-a64d-d671332d8496" containerName="extract"
Dec 12 15:33:47 crc kubenswrapper[5123]: I1212 15:33:47.843205 5123 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="bd877609-a269-4a6f-a64d-d671332d8496" containerName="pull"
Dec 12 15:33:47 crc kubenswrapper[5123]: I1212 15:33:47.843235 5123 state_mem.go:107] "Deleted CPUSet assignment" podUID="bd877609-a269-4a6f-a64d-d671332d8496" containerName="pull"
Dec 12 15:33:47 crc kubenswrapper[5123]: I1212 15:33:47.843273 5123 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="bd877609-a269-4a6f-a64d-d671332d8496" containerName="util"
Dec 12 15:33:47 crc kubenswrapper[5123]: I1212 15:33:47.843283 5123 state_mem.go:107] "Deleted CPUSet assignment" podUID="bd877609-a269-4a6f-a64d-d671332d8496" containerName="util"
Dec 12 15:33:47 crc kubenswrapper[5123]: I1212 15:33:47.843839 5123 memory_manager.go:356] "RemoveStaleState removing state" podUID="bd877609-a269-4a6f-a64d-d671332d8496" containerName="extract"
Dec 12 15:33:47 crc kubenswrapper[5123]: I1212 15:33:47.878784 5123 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager-operator/cert-manager-operator-controller-manager-64c74584c4-9h4f9"
Dec 12 15:33:47 crc kubenswrapper[5123]: I1212 15:33:47.883045 5123 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"cert-manager-operator\"/\"kube-root-ca.crt\""
Dec 12 15:33:47 crc kubenswrapper[5123]: I1212 15:33:47.889661 5123 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"cert-manager-operator\"/\"openshift-service-ca.crt\""
Dec 12 15:33:47 crc kubenswrapper[5123]: I1212 15:33:47.890073 5123 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"cert-manager-operator\"/\"cert-manager-operator-controller-manager-dockercfg-qq5p4\""
Dec 12 15:33:47 crc kubenswrapper[5123]: I1212 15:33:47.894274 5123 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["cert-manager-operator/cert-manager-operator-controller-manager-64c74584c4-9h4f9"]
Dec 12 15:33:47 crc kubenswrapper[5123]: I1212 15:33:47.968354 5123 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fsqw6\" (UniqueName: \"kubernetes.io/projected/c3fb0bc0-99c8-4524-b12a-0cb608bbcd1a-kube-api-access-fsqw6\") pod \"cert-manager-operator-controller-manager-64c74584c4-9h4f9\" (UID: \"c3fb0bc0-99c8-4524-b12a-0cb608bbcd1a\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-64c74584c4-9h4f9"
Dec 12 15:33:47 crc kubenswrapper[5123]: I1212 15:33:47.968503 5123 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/c3fb0bc0-99c8-4524-b12a-0cb608bbcd1a-tmp\") pod \"cert-manager-operator-controller-manager-64c74584c4-9h4f9\" (UID: \"c3fb0bc0-99c8-4524-b12a-0cb608bbcd1a\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-64c74584c4-9h4f9"
Dec 12 15:33:48 crc kubenswrapper[5123]: I1212 15:33:48.069821 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-fsqw6\" (UniqueName: \"kubernetes.io/projected/c3fb0bc0-99c8-4524-b12a-0cb608bbcd1a-kube-api-access-fsqw6\") pod \"cert-manager-operator-controller-manager-64c74584c4-9h4f9\" (UID: \"c3fb0bc0-99c8-4524-b12a-0cb608bbcd1a\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-64c74584c4-9h4f9"
Dec 12 15:33:48 crc kubenswrapper[5123]: I1212 15:33:48.070115 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/c3fb0bc0-99c8-4524-b12a-0cb608bbcd1a-tmp\") pod \"cert-manager-operator-controller-manager-64c74584c4-9h4f9\" (UID: \"c3fb0bc0-99c8-4524-b12a-0cb608bbcd1a\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-64c74584c4-9h4f9"
Dec 12 15:33:48 crc kubenswrapper[5123]: I1212 15:33:48.070935 5123 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/c3fb0bc0-99c8-4524-b12a-0cb608bbcd1a-tmp\") pod \"cert-manager-operator-controller-manager-64c74584c4-9h4f9\" (UID: \"c3fb0bc0-99c8-4524-b12a-0cb608bbcd1a\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-64c74584c4-9h4f9"
Dec 12 15:33:48 crc kubenswrapper[5123]: I1212 15:33:48.108491 5123 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-fsqw6\" (UniqueName: \"kubernetes.io/projected/c3fb0bc0-99c8-4524-b12a-0cb608bbcd1a-kube-api-access-fsqw6\") pod \"cert-manager-operator-controller-manager-64c74584c4-9h4f9\" (UID: \"c3fb0bc0-99c8-4524-b12a-0cb608bbcd1a\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-64c74584c4-9h4f9"
Dec 12 15:33:48 crc kubenswrapper[5123]: I1212 15:33:48.231161 5123 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager-operator/cert-manager-operator-controller-manager-64c74584c4-9h4f9"
Dec 12 15:33:48 crc kubenswrapper[5123]: I1212 15:33:48.646927 5123 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/infrawatch-operators-29pnj"]
Dec 12 15:33:48 crc kubenswrapper[5123]: I1212 15:33:48.661784 5123 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/infrawatch-operators-29pnj"
Dec 12 15:33:48 crc kubenswrapper[5123]: I1212 15:33:48.666230 5123 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"infrawatch-operators-dockercfg-pml7j\""
Dec 12 15:33:48 crc kubenswrapper[5123]: I1212 15:33:48.670760 5123 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/infrawatch-operators-29pnj"]
Dec 12 15:33:48 crc kubenswrapper[5123]: I1212 15:33:48.745927 5123 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h6fcr\" (UniqueName: \"kubernetes.io/projected/c330d8e9-b1e5-4741-985c-93516313a586-kube-api-access-h6fcr\") pod \"infrawatch-operators-29pnj\" (UID: \"c330d8e9-b1e5-4741-985c-93516313a586\") " pod="service-telemetry/infrawatch-operators-29pnj"
Dec 12 15:33:48 crc kubenswrapper[5123]: I1212 15:33:48.847477 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-h6fcr\" (UniqueName: \"kubernetes.io/projected/c330d8e9-b1e5-4741-985c-93516313a586-kube-api-access-h6fcr\") pod \"infrawatch-operators-29pnj\" (UID: \"c330d8e9-b1e5-4741-985c-93516313a586\") " pod="service-telemetry/infrawatch-operators-29pnj"
Dec 12 15:33:48 crc kubenswrapper[5123]: I1212 15:33:48.888436 5123 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-h6fcr\" (UniqueName: \"kubernetes.io/projected/c330d8e9-b1e5-4741-985c-93516313a586-kube-api-access-h6fcr\") pod \"infrawatch-operators-29pnj\" (UID: \"c330d8e9-b1e5-4741-985c-93516313a586\") " pod="service-telemetry/infrawatch-operators-29pnj"
Dec 12 15:33:48 crc kubenswrapper[5123]: I1212 15:33:48.950650 5123 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-c7ms4"
Dec 12 15:33:48 crc kubenswrapper[5123]: I1212 15:33:48.951186 5123 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["cert-manager-operator/cert-manager-operator-controller-manager-64c74584c4-9h4f9"]
Dec 12 15:33:48 crc kubenswrapper[5123]: W1212 15:33:48.974401 5123 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc3fb0bc0_99c8_4524_b12a_0cb608bbcd1a.slice/crio-68587f8a69b0301e7007ff597266f31ca59b5fd323fa8dbe351b2d76d66e545f WatchSource:0}: Error finding container 68587f8a69b0301e7007ff597266f31ca59b5fd323fa8dbe351b2d76d66e545f: Status 404 returned error can't find the container with id 68587f8a69b0301e7007ff597266f31ca59b5fd323fa8dbe351b2d76d66e545f
Dec 12 15:33:48 crc kubenswrapper[5123]: I1212 15:33:48.999170 5123 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-c7ms4"
Dec 12 15:33:49 crc kubenswrapper[5123]: I1212 15:33:49.012379 5123 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/infrawatch-operators-29pnj"
Dec 12 15:33:49 crc kubenswrapper[5123]: I1212 15:33:49.657757 5123 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/infrawatch-operators-29pnj"]
Dec 12 15:33:49 crc kubenswrapper[5123]: W1212 15:33:49.675090 5123 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc330d8e9_b1e5_4741_985c_93516313a586.slice/crio-dc84e21015c88dc42ce79aba14fafe35e14c2534b42ec9df803c32e35d40f19a WatchSource:0}: Error finding container dc84e21015c88dc42ce79aba14fafe35e14c2534b42ec9df803c32e35d40f19a: Status 404 returned error can't find the container with id dc84e21015c88dc42ce79aba14fafe35e14c2534b42ec9df803c32e35d40f19a
Dec 12 15:33:49 crc kubenswrapper[5123]: I1212 15:33:49.961142 5123 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="cert-manager-operator/cert-manager-operator-controller-manager-64c74584c4-9h4f9" event={"ID":"c3fb0bc0-99c8-4524-b12a-0cb608bbcd1a","Type":"ContainerStarted","Data":"68587f8a69b0301e7007ff597266f31ca59b5fd323fa8dbe351b2d76d66e545f"}
Dec 12 15:33:49 crc kubenswrapper[5123]: I1212 15:33:49.974515 5123 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/infrawatch-operators-29pnj" event={"ID":"c330d8e9-b1e5-4741-985c-93516313a586","Type":"ContainerStarted","Data":"dc84e21015c88dc42ce79aba14fafe35e14c2534b42ec9df803c32e35d40f19a"}
Dec 12 15:33:53 crc kubenswrapper[5123]: I1212 15:33:53.625726 5123 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-c7ms4"]
Dec 12 15:33:53 crc kubenswrapper[5123]: I1212 15:33:53.626133 5123 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/community-operators-c7ms4" podUID="dbc081ef-e1e7-4976-8234-fe6a1929df17" containerName="registry-server" containerID="cri-o://e6f54891f28504ad1003305510b9442e611a7a378cb64d9134c2819642d6f46f" gracePeriod=2
Dec 12 15:33:54 crc kubenswrapper[5123]: I1212 15:33:54.632811 5123 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["service-telemetry/infrawatch-operators-29pnj"]
Dec 12 15:33:55 crc kubenswrapper[5123]: I1212 15:33:55.077980 5123 generic.go:358] "Generic (PLEG): container finished" podID="dbc081ef-e1e7-4976-8234-fe6a1929df17" containerID="e6f54891f28504ad1003305510b9442e611a7a378cb64d9134c2819642d6f46f" exitCode=0
Dec 12 15:33:55 crc kubenswrapper[5123]: I1212 15:33:55.078402 5123 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-c7ms4" event={"ID":"dbc081ef-e1e7-4976-8234-fe6a1929df17","Type":"ContainerDied","Data":"e6f54891f28504ad1003305510b9442e611a7a378cb64d9134c2819642d6f46f"}
Dec 12 15:33:55 crc kubenswrapper[5123]: I1212 15:33:55.441460 5123 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/infrawatch-operators-7p94m"]
Dec 12 15:33:55 crc kubenswrapper[5123]: I1212 15:33:55.460616 5123 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/infrawatch-operators-7p94m"]
Dec 12 15:33:55 crc kubenswrapper[5123]: I1212 15:33:55.460857 5123 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/infrawatch-operators-7p94m"
Dec 12 15:33:55 crc kubenswrapper[5123]: I1212 15:33:55.471567 5123 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5chkn\" (UniqueName: \"kubernetes.io/projected/0be8f067-cf67-4921-9626-997c8c266697-kube-api-access-5chkn\") pod \"infrawatch-operators-7p94m\" (UID: \"0be8f067-cf67-4921-9626-997c8c266697\") " pod="service-telemetry/infrawatch-operators-7p94m"
Dec 12 15:33:55 crc kubenswrapper[5123]: I1212 15:33:55.573533 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-5chkn\" (UniqueName: \"kubernetes.io/projected/0be8f067-cf67-4921-9626-997c8c266697-kube-api-access-5chkn\") pod \"infrawatch-operators-7p94m\" (UID: \"0be8f067-cf67-4921-9626-997c8c266697\") " pod="service-telemetry/infrawatch-operators-7p94m"
Dec 12 15:33:55 crc kubenswrapper[5123]: I1212 15:33:55.606462 5123 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-5chkn\" (UniqueName: \"kubernetes.io/projected/0be8f067-cf67-4921-9626-997c8c266697-kube-api-access-5chkn\") pod \"infrawatch-operators-7p94m\" (UID: \"0be8f067-cf67-4921-9626-997c8c266697\") " pod="service-telemetry/infrawatch-operators-7p94m"
Dec 12 15:33:55 crc kubenswrapper[5123]: I1212 15:33:55.782629 5123 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/infrawatch-operators-7p94m"
Dec 12 15:33:58 crc kubenswrapper[5123]: E1212 15:33:58.953095 5123 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of e6f54891f28504ad1003305510b9442e611a7a378cb64d9134c2819642d6f46f is running failed: container process not found" containerID="e6f54891f28504ad1003305510b9442e611a7a378cb64d9134c2819642d6f46f" cmd=["grpc_health_probe","-addr=:50051"]
Dec 12 15:33:58 crc kubenswrapper[5123]: E1212 15:33:58.956049 5123 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of e6f54891f28504ad1003305510b9442e611a7a378cb64d9134c2819642d6f46f is running failed: container process not found" containerID="e6f54891f28504ad1003305510b9442e611a7a378cb64d9134c2819642d6f46f" cmd=["grpc_health_probe","-addr=:50051"]
Dec 12 15:33:58 crc kubenswrapper[5123]: E1212 15:33:58.956765 5123 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of e6f54891f28504ad1003305510b9442e611a7a378cb64d9134c2819642d6f46f is running failed: container process not found" containerID="e6f54891f28504ad1003305510b9442e611a7a378cb64d9134c2819642d6f46f" cmd=["grpc_health_probe","-addr=:50051"]
Dec 12 15:33:58 crc kubenswrapper[5123]: E1212 15:33:58.956805 5123 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of e6f54891f28504ad1003305510b9442e611a7a378cb64d9134c2819642d6f46f is running failed: container process not found" probeType="Readiness" pod="openshift-marketplace/community-operators-c7ms4" podUID="dbc081ef-e1e7-4976-8234-fe6a1929df17" containerName="registry-server" probeResult="unknown"
Dec 12 15:34:00 crc kubenswrapper[5123]: I1212 15:34:00.902126 5123 patch_prober.go:28] interesting pod/machine-config-daemon-cs4j6 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Dec 12 15:34:00 crc kubenswrapper[5123]: I1212 15:34:00.902241 5123 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-cs4j6" podUID="cc1dd55e-ab31-4be3-acf1-ef01a53c1bb4" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Dec 12 15:34:05 crc kubenswrapper[5123]: I1212 15:34:05.254774 5123 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-c7ms4"
Dec 12 15:34:05 crc kubenswrapper[5123]: I1212 15:34:05.324405 5123 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dbc081ef-e1e7-4976-8234-fe6a1929df17-utilities\") pod \"dbc081ef-e1e7-4976-8234-fe6a1929df17\" (UID: \"dbc081ef-e1e7-4976-8234-fe6a1929df17\") "
Dec 12 15:34:05 crc kubenswrapper[5123]: I1212 15:34:05.324566 5123 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nhbts\" (UniqueName: \"kubernetes.io/projected/dbc081ef-e1e7-4976-8234-fe6a1929df17-kube-api-access-nhbts\") pod \"dbc081ef-e1e7-4976-8234-fe6a1929df17\" (UID: \"dbc081ef-e1e7-4976-8234-fe6a1929df17\") "
Dec 12 15:34:05 crc kubenswrapper[5123]: I1212 15:34:05.324810 5123 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dbc081ef-e1e7-4976-8234-fe6a1929df17-catalog-content\") pod \"dbc081ef-e1e7-4976-8234-fe6a1929df17\" (UID: \"dbc081ef-e1e7-4976-8234-fe6a1929df17\") "
Dec 12 15:34:05 crc kubenswrapper[5123]: I1212 15:34:05.328088 5123 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/dbc081ef-e1e7-4976-8234-fe6a1929df17-utilities" (OuterVolumeSpecName: "utilities") pod "dbc081ef-e1e7-4976-8234-fe6a1929df17" (UID: "dbc081ef-e1e7-4976-8234-fe6a1929df17"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Dec 12 15:34:05 crc kubenswrapper[5123]: I1212 15:34:05.353724 5123 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dbc081ef-e1e7-4976-8234-fe6a1929df17-kube-api-access-nhbts" (OuterVolumeSpecName: "kube-api-access-nhbts") pod "dbc081ef-e1e7-4976-8234-fe6a1929df17" (UID: "dbc081ef-e1e7-4976-8234-fe6a1929df17"). InnerVolumeSpecName "kube-api-access-nhbts". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Dec 12 15:34:05 crc kubenswrapper[5123]: I1212 15:34:05.403788 5123 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/dbc081ef-e1e7-4976-8234-fe6a1929df17-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "dbc081ef-e1e7-4976-8234-fe6a1929df17" (UID: "dbc081ef-e1e7-4976-8234-fe6a1929df17"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Dec 12 15:34:05 crc kubenswrapper[5123]: I1212 15:34:05.426918 5123 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-nhbts\" (UniqueName: \"kubernetes.io/projected/dbc081ef-e1e7-4976-8234-fe6a1929df17-kube-api-access-nhbts\") on node \"crc\" DevicePath \"\""
Dec 12 15:34:05 crc kubenswrapper[5123]: I1212 15:34:05.426973 5123 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dbc081ef-e1e7-4976-8234-fe6a1929df17-catalog-content\") on node \"crc\" DevicePath \"\""
Dec 12 15:34:05 crc kubenswrapper[5123]: I1212 15:34:05.426985 5123 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dbc081ef-e1e7-4976-8234-fe6a1929df17-utilities\") on node \"crc\" DevicePath \"\""
Dec 12 15:34:06 crc kubenswrapper[5123]: I1212 15:34:06.169061 5123 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-c7ms4"
Dec 12 15:34:06 crc kubenswrapper[5123]: I1212 15:34:06.169171 5123 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-c7ms4" event={"ID":"dbc081ef-e1e7-4976-8234-fe6a1929df17","Type":"ContainerDied","Data":"997e9272bf079d9c53aa2c6b5048db7cebb97e42d4bc6e5add5add49538ee5a1"}
Dec 12 15:34:06 crc kubenswrapper[5123]: I1212 15:34:06.169392 5123 scope.go:117] "RemoveContainer" containerID="e6f54891f28504ad1003305510b9442e611a7a378cb64d9134c2819642d6f46f"
Dec 12 15:34:06 crc kubenswrapper[5123]: I1212 15:34:06.189302 5123 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-c7ms4"]
Dec 12 15:34:06 crc kubenswrapper[5123]: I1212 15:34:06.194923 5123 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-c7ms4"]
Dec 12 15:34:07 crc kubenswrapper[5123]: I1212 15:34:07.650561 5123 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dbc081ef-e1e7-4976-8234-fe6a1929df17" path="/var/lib/kubelet/pods/dbc081ef-e1e7-4976-8234-fe6a1929df17/volumes"
Dec 12 15:34:10 crc kubenswrapper[5123]: I1212 15:34:09.991588 5123 scope.go:117] "RemoveContainer" containerID="7be076a5e6a23afcc4c37942ab385d48a74f0e883b665274dca6005243318eaf"
Dec 12 15:34:10 crc kubenswrapper[5123]: I1212 15:34:10.256497 5123 scope.go:117] "RemoveContainer" containerID="a7612ad1fe34b88eda5a40614cfa57f28ca38100462b2a987313a626aa076ec3"
Dec 12 15:34:10 crc kubenswrapper[5123]: I1212 15:34:10.560477 5123 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/infrawatch-operators-7p94m"]
Dec 12 15:34:11 crc kubenswrapper[5123]: I1212 15:34:11.252074 5123 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="cert-manager-operator/cert-manager-operator-controller-manager-64c74584c4-9h4f9" event={"ID":"c3fb0bc0-99c8-4524-b12a-0cb608bbcd1a","Type":"ContainerStarted","Data":"f577bb53275cb0d2712df07d3adc6418f5bc2e3e16f6fe1112c8068adabfe066"}
Dec 12 15:34:11 crc kubenswrapper[5123]: I1212 15:34:11.254928 5123 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/elasticsearch-es-default-0" event={"ID":"a4b7933d-a5d4-45b8-8aaf-bec333ffcfe5","Type":"ContainerStarted","Data":"d01d0b2ea18ea59309c427e4569e8a5af89aac24544aae7531a0747ae4e82461"}
Dec 12 15:34:11 crc kubenswrapper[5123]: I1212 15:34:11.258493 5123 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/infrawatch-operators-29pnj" event={"ID":"c330d8e9-b1e5-4741-985c-93516313a586","Type":"ContainerStarted","Data":"d81f61e766421fa9469fdc2301a996e68015f5208dfa64809250b5cef70a754c"}
Dec 12 15:34:11 crc kubenswrapper[5123]: I1212 15:34:11.258884 5123 kuberuntime_container.go:858] "Killing container with a grace period" pod="service-telemetry/infrawatch-operators-29pnj" podUID="c330d8e9-b1e5-4741-985c-93516313a586" containerName="registry-server" containerID="cri-o://d81f61e766421fa9469fdc2301a996e68015f5208dfa64809250b5cef70a754c" gracePeriod=2
Dec 12 15:34:11 crc kubenswrapper[5123]: I1212 15:34:11.260431 5123 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/infrawatch-operators-7p94m" event={"ID":"0be8f067-cf67-4921-9626-997c8c266697","Type":"ContainerStarted","Data":"df699549b0f06b8f6bb280b2d2c1a0e9bb6bcd4e4c9224001da8ca9b346d01d5"}
Dec 12 15:34:11 crc kubenswrapper[5123]: I1212 15:34:11.283034 5123 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager-operator/cert-manager-operator-controller-manager-64c74584c4-9h4f9" podStartSLOduration=8.077810834 podStartE2EDuration="24.283000952s" podCreationTimestamp="2025-12-12 15:33:47 +0000 UTC" firstStartedPulling="2025-12-12 15:33:48.97926341 +0000 UTC m=+857.789215921" lastFinishedPulling="2025-12-12 15:34:05.184453528 +0000 UTC m=+873.994406039" observedRunningTime="2025-12-12 15:34:11.27880522 +0000 UTC m=+880.088757741" watchObservedRunningTime="2025-12-12 15:34:11.283000952 +0000 UTC m=+880.092953463"
Dec 12 15:34:11 crc kubenswrapper[5123]: I1212 15:34:11.355180 5123 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/infrawatch-operators-29pnj" podStartSLOduration=3.072616435 podStartE2EDuration="23.355150901s" podCreationTimestamp="2025-12-12 15:33:48 +0000 UTC" firstStartedPulling="2025-12-12 15:33:49.68432254 +0000 UTC m=+858.494275051" lastFinishedPulling="2025-12-12 15:34:09.966857006 +0000 UTC m=+878.776809517" observedRunningTime="2025-12-12 15:34:11.350118412 +0000 UTC m=+880.160070923" watchObservedRunningTime="2025-12-12 15:34:11.355150901 +0000 UTC m=+880.165103412"
Dec 12 15:34:11 crc kubenswrapper[5123]: I1212 15:34:11.530686 5123 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/elasticsearch-es-default-0"]
Dec 12 15:34:11 crc kubenswrapper[5123]: I1212 15:34:11.574787 5123 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/elasticsearch-es-default-0"]
Dec 12 15:34:11 crc kubenswrapper[5123]: I1212 15:34:11.699855 5123 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/infrawatch-operators-29pnj"
Dec 12 15:34:11 crc kubenswrapper[5123]: I1212 15:34:11.780192 5123 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h6fcr\" (UniqueName: \"kubernetes.io/projected/c330d8e9-b1e5-4741-985c-93516313a586-kube-api-access-h6fcr\") pod \"c330d8e9-b1e5-4741-985c-93516313a586\" (UID: \"c330d8e9-b1e5-4741-985c-93516313a586\") "
Dec 12 15:34:11 crc kubenswrapper[5123]: I1212 15:34:11.908929 5123 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c330d8e9-b1e5-4741-985c-93516313a586-kube-api-access-h6fcr" (OuterVolumeSpecName: "kube-api-access-h6fcr") pod "c330d8e9-b1e5-4741-985c-93516313a586" (UID: "c330d8e9-b1e5-4741-985c-93516313a586"). InnerVolumeSpecName "kube-api-access-h6fcr". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Dec 12 15:34:11 crc kubenswrapper[5123]: I1212 15:34:11.992459 5123 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-h6fcr\" (UniqueName: \"kubernetes.io/projected/c330d8e9-b1e5-4741-985c-93516313a586-kube-api-access-h6fcr\") on node \"crc\" DevicePath \"\""
Dec 12 15:34:12 crc kubenswrapper[5123]: I1212 15:34:12.269914 5123 generic.go:358] "Generic (PLEG): container finished" podID="c330d8e9-b1e5-4741-985c-93516313a586" containerID="d81f61e766421fa9469fdc2301a996e68015f5208dfa64809250b5cef70a754c" exitCode=0
Dec 12 15:34:12 crc kubenswrapper[5123]: I1212 15:34:12.270074 5123 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/infrawatch-operators-29pnj"
Dec 12 15:34:12 crc kubenswrapper[5123]: I1212 15:34:12.270116 5123 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/infrawatch-operators-29pnj" event={"ID":"c330d8e9-b1e5-4741-985c-93516313a586","Type":"ContainerDied","Data":"d81f61e766421fa9469fdc2301a996e68015f5208dfa64809250b5cef70a754c"}
Dec 12 15:34:12 crc kubenswrapper[5123]: I1212 15:34:12.270159 5123 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/infrawatch-operators-29pnj" event={"ID":"c330d8e9-b1e5-4741-985c-93516313a586","Type":"ContainerDied","Data":"dc84e21015c88dc42ce79aba14fafe35e14c2534b42ec9df803c32e35d40f19a"}
Dec 12 15:34:12 crc kubenswrapper[5123]: I1212 15:34:12.270185 5123 scope.go:117] "RemoveContainer" containerID="d81f61e766421fa9469fdc2301a996e68015f5208dfa64809250b5cef70a754c"
Dec 12 15:34:12 crc kubenswrapper[5123]: I1212 15:34:12.272458 5123 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/infrawatch-operators-7p94m" event={"ID":"0be8f067-cf67-4921-9626-997c8c266697","Type":"ContainerStarted","Data":"24a57570b74573f56fd71e062b44129daac78f1b53330df9ec6717d13f8e7eb6"}
Dec 12 15:34:12 crc kubenswrapper[5123]: I1212 15:34:12.293660 5123 scope.go:117] "RemoveContainer" containerID="d81f61e766421fa9469fdc2301a996e68015f5208dfa64809250b5cef70a754c"
Dec 12 15:34:12 crc kubenswrapper[5123]: E1212 15:34:12.295195 5123 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d81f61e766421fa9469fdc2301a996e68015f5208dfa64809250b5cef70a754c\": container with ID starting with d81f61e766421fa9469fdc2301a996e68015f5208dfa64809250b5cef70a754c not found: ID does not exist" containerID="d81f61e766421fa9469fdc2301a996e68015f5208dfa64809250b5cef70a754c"
Dec 12 15:34:12 crc kubenswrapper[5123]: I1212 15:34:12.295346 5123 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d81f61e766421fa9469fdc2301a996e68015f5208dfa64809250b5cef70a754c"} err="failed to get container status \"d81f61e766421fa9469fdc2301a996e68015f5208dfa64809250b5cef70a754c\": rpc error: code = NotFound desc = could not find container \"d81f61e766421fa9469fdc2301a996e68015f5208dfa64809250b5cef70a754c\": container with ID starting with d81f61e766421fa9469fdc2301a996e68015f5208dfa64809250b5cef70a754c not found: ID does not exist"
Dec 12 15:34:12 crc kubenswrapper[5123]: I1212 15:34:12.319774 5123 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/infrawatch-operators-7p94m" podStartSLOduration=16.635407904 podStartE2EDuration="17.319753095s" podCreationTimestamp="2025-12-12 15:33:55 +0000 UTC" firstStartedPulling="2025-12-12 15:34:10.5617499 +0000 UTC m=+879.371702411" lastFinishedPulling="2025-12-12 15:34:11.246095091 +0000 UTC m=+880.056047602" observedRunningTime="2025-12-12 15:34:12.302385154 +0000 UTC m=+881.112337665" watchObservedRunningTime="2025-12-12 15:34:12.319753095 +0000 UTC m=+881.129705606"
Dec 12 15:34:12 crc kubenswrapper[5123]: I1212 15:34:12.323576 5123 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["service-telemetry/infrawatch-operators-29pnj"]
Dec 12 15:34:12 crc kubenswrapper[5123]: I1212 15:34:12.330516 5123 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["service-telemetry/infrawatch-operators-29pnj"]
Dec 12 15:34:13 crc kubenswrapper[5123]: I1212 15:34:13.304461 5123 generic.go:358] "Generic (PLEG): container finished" podID="a4b7933d-a5d4-45b8-8aaf-bec333ffcfe5" containerID="d01d0b2ea18ea59309c427e4569e8a5af89aac24544aae7531a0747ae4e82461" exitCode=0
Dec 12 15:34:13 crc kubenswrapper[5123]: I1212 15:34:13.304590 5123 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/elasticsearch-es-default-0"
event={"ID":"a4b7933d-a5d4-45b8-8aaf-bec333ffcfe5","Type":"ContainerDied","Data":"d01d0b2ea18ea59309c427e4569e8a5af89aac24544aae7531a0747ae4e82461"} Dec 12 15:34:13 crc kubenswrapper[5123]: I1212 15:34:13.649457 5123 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c330d8e9-b1e5-4741-985c-93516313a586" path="/var/lib/kubelet/pods/c330d8e9-b1e5-4741-985c-93516313a586/volumes" Dec 12 15:34:14 crc kubenswrapper[5123]: I1212 15:34:14.317122 5123 generic.go:358] "Generic (PLEG): container finished" podID="a4b7933d-a5d4-45b8-8aaf-bec333ffcfe5" containerID="b687d87d4db79ef59c79086be87b04c88bb7ccdc27ef39aae149766162d9f0fc" exitCode=0 Dec 12 15:34:14 crc kubenswrapper[5123]: I1212 15:34:14.317177 5123 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/elasticsearch-es-default-0" event={"ID":"a4b7933d-a5d4-45b8-8aaf-bec333ffcfe5","Type":"ContainerDied","Data":"b687d87d4db79ef59c79086be87b04c88bb7ccdc27ef39aae149766162d9f0fc"} Dec 12 15:34:15 crc kubenswrapper[5123]: I1212 15:34:15.222079 5123 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-webhook-7894b5b9b4-hjdq2"] Dec 12 15:34:15 crc kubenswrapper[5123]: I1212 15:34:15.223333 5123 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="dbc081ef-e1e7-4976-8234-fe6a1929df17" containerName="extract-content" Dec 12 15:34:15 crc kubenswrapper[5123]: I1212 15:34:15.223362 5123 state_mem.go:107] "Deleted CPUSet assignment" podUID="dbc081ef-e1e7-4976-8234-fe6a1929df17" containerName="extract-content" Dec 12 15:34:15 crc kubenswrapper[5123]: I1212 15:34:15.223390 5123 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="c330d8e9-b1e5-4741-985c-93516313a586" containerName="registry-server" Dec 12 15:34:15 crc kubenswrapper[5123]: I1212 15:34:15.223398 5123 state_mem.go:107] "Deleted CPUSet assignment" podUID="c330d8e9-b1e5-4741-985c-93516313a586" containerName="registry-server" Dec 12 15:34:15 crc 
kubenswrapper[5123]: I1212 15:34:15.223413 5123 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="dbc081ef-e1e7-4976-8234-fe6a1929df17" containerName="extract-utilities" Dec 12 15:34:15 crc kubenswrapper[5123]: I1212 15:34:15.223420 5123 state_mem.go:107] "Deleted CPUSet assignment" podUID="dbc081ef-e1e7-4976-8234-fe6a1929df17" containerName="extract-utilities" Dec 12 15:34:15 crc kubenswrapper[5123]: I1212 15:34:15.223434 5123 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="dbc081ef-e1e7-4976-8234-fe6a1929df17" containerName="registry-server" Dec 12 15:34:15 crc kubenswrapper[5123]: I1212 15:34:15.223441 5123 state_mem.go:107] "Deleted CPUSet assignment" podUID="dbc081ef-e1e7-4976-8234-fe6a1929df17" containerName="registry-server" Dec 12 15:34:15 crc kubenswrapper[5123]: I1212 15:34:15.223605 5123 memory_manager.go:356] "RemoveStaleState removing state" podUID="dbc081ef-e1e7-4976-8234-fe6a1929df17" containerName="registry-server" Dec 12 15:34:15 crc kubenswrapper[5123]: I1212 15:34:15.223620 5123 memory_manager.go:356] "RemoveStaleState removing state" podUID="c330d8e9-b1e5-4741-985c-93516313a586" containerName="registry-server" Dec 12 15:34:15 crc kubenswrapper[5123]: I1212 15:34:15.227522 5123 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-webhook-7894b5b9b4-hjdq2" Dec 12 15:34:15 crc kubenswrapper[5123]: I1212 15:34:15.229896 5123 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"cert-manager\"/\"cert-manager-webhook-dockercfg-6blwl\"" Dec 12 15:34:15 crc kubenswrapper[5123]: I1212 15:34:15.230120 5123 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"cert-manager\"/\"kube-root-ca.crt\"" Dec 12 15:34:15 crc kubenswrapper[5123]: I1212 15:34:15.234197 5123 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-7894b5b9b4-hjdq2"] Dec 12 15:34:15 crc kubenswrapper[5123]: I1212 15:34:15.349364 5123 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"cert-manager\"/\"openshift-service-ca.crt\"" Dec 12 15:34:15 crc kubenswrapper[5123]: I1212 15:34:15.350286 5123 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vcrck\" (UniqueName: \"kubernetes.io/projected/30409beb-bbf3-4e0f-94fe-a2039b3d8985-kube-api-access-vcrck\") pod \"cert-manager-webhook-7894b5b9b4-hjdq2\" (UID: \"30409beb-bbf3-4e0f-94fe-a2039b3d8985\") " pod="cert-manager/cert-manager-webhook-7894b5b9b4-hjdq2" Dec 12 15:34:15 crc kubenswrapper[5123]: I1212 15:34:15.350361 5123 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/30409beb-bbf3-4e0f-94fe-a2039b3d8985-bound-sa-token\") pod \"cert-manager-webhook-7894b5b9b4-hjdq2\" (UID: \"30409beb-bbf3-4e0f-94fe-a2039b3d8985\") " pod="cert-manager/cert-manager-webhook-7894b5b9b4-hjdq2" Dec 12 15:34:15 crc kubenswrapper[5123]: I1212 15:34:15.389120 5123 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/elasticsearch-es-default-0" 
event={"ID":"a4b7933d-a5d4-45b8-8aaf-bec333ffcfe5","Type":"ContainerStarted","Data":"eb2be96069e51abee31f4aaf2ce93adc981978783c238aa38f072b8d0c1b40ef"} Dec 12 15:34:15 crc kubenswrapper[5123]: I1212 15:34:15.431130 5123 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/elasticsearch-es-default-0" podStartSLOduration=6.379684616 podStartE2EDuration="37.431102317s" podCreationTimestamp="2025-12-12 15:33:38 +0000 UTC" firstStartedPulling="2025-12-12 15:33:39.336048789 +0000 UTC m=+848.146001300" lastFinishedPulling="2025-12-12 15:34:10.38746649 +0000 UTC m=+879.197419001" observedRunningTime="2025-12-12 15:34:15.424704824 +0000 UTC m=+884.234657355" watchObservedRunningTime="2025-12-12 15:34:15.431102317 +0000 UTC m=+884.241054828" Dec 12 15:34:15 crc kubenswrapper[5123]: I1212 15:34:15.452419 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-vcrck\" (UniqueName: \"kubernetes.io/projected/30409beb-bbf3-4e0f-94fe-a2039b3d8985-kube-api-access-vcrck\") pod \"cert-manager-webhook-7894b5b9b4-hjdq2\" (UID: \"30409beb-bbf3-4e0f-94fe-a2039b3d8985\") " pod="cert-manager/cert-manager-webhook-7894b5b9b4-hjdq2" Dec 12 15:34:15 crc kubenswrapper[5123]: I1212 15:34:15.453421 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/30409beb-bbf3-4e0f-94fe-a2039b3d8985-bound-sa-token\") pod \"cert-manager-webhook-7894b5b9b4-hjdq2\" (UID: \"30409beb-bbf3-4e0f-94fe-a2039b3d8985\") " pod="cert-manager/cert-manager-webhook-7894b5b9b4-hjdq2" Dec 12 15:34:15 crc kubenswrapper[5123]: I1212 15:34:15.480525 5123 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/30409beb-bbf3-4e0f-94fe-a2039b3d8985-bound-sa-token\") pod \"cert-manager-webhook-7894b5b9b4-hjdq2\" (UID: \"30409beb-bbf3-4e0f-94fe-a2039b3d8985\") " 
pod="cert-manager/cert-manager-webhook-7894b5b9b4-hjdq2" Dec 12 15:34:15 crc kubenswrapper[5123]: I1212 15:34:15.480749 5123 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-vcrck\" (UniqueName: \"kubernetes.io/projected/30409beb-bbf3-4e0f-94fe-a2039b3d8985-kube-api-access-vcrck\") pod \"cert-manager-webhook-7894b5b9b4-hjdq2\" (UID: \"30409beb-bbf3-4e0f-94fe-a2039b3d8985\") " pod="cert-manager/cert-manager-webhook-7894b5b9b4-hjdq2" Dec 12 15:34:15 crc kubenswrapper[5123]: I1212 15:34:15.667958 5123 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-webhook-7894b5b9b4-hjdq2" Dec 12 15:34:15 crc kubenswrapper[5123]: I1212 15:34:15.783725 5123 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="service-telemetry/infrawatch-operators-7p94m" Dec 12 15:34:15 crc kubenswrapper[5123]: I1212 15:34:15.784196 5123 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="service-telemetry/infrawatch-operators-7p94m" Dec 12 15:34:15 crc kubenswrapper[5123]: I1212 15:34:15.899666 5123 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="service-telemetry/infrawatch-operators-7p94m" Dec 12 15:34:16 crc kubenswrapper[5123]: I1212 15:34:16.233817 5123 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-7894b5b9b4-hjdq2"] Dec 12 15:34:16 crc kubenswrapper[5123]: W1212 15:34:16.247187 5123 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod30409beb_bbf3_4e0f_94fe_a2039b3d8985.slice/crio-94020a6bb63c8ff52d881c6c15f8fc309531bcad0168356a74a569fba8c50224 WatchSource:0}: Error finding container 94020a6bb63c8ff52d881c6c15f8fc309531bcad0168356a74a569fba8c50224: Status 404 returned error can't find the container with id 94020a6bb63c8ff52d881c6c15f8fc309531bcad0168356a74a569fba8c50224 Dec 12 15:34:16 crc kubenswrapper[5123]: 
I1212 15:34:16.401638 5123 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-7894b5b9b4-hjdq2" event={"ID":"30409beb-bbf3-4e0f-94fe-a2039b3d8985","Type":"ContainerStarted","Data":"94020a6bb63c8ff52d881c6c15f8fc309531bcad0168356a74a569fba8c50224"} Dec 12 15:34:16 crc kubenswrapper[5123]: I1212 15:34:16.402141 5123 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="service-telemetry/elasticsearch-es-default-0" Dec 12 15:34:16 crc kubenswrapper[5123]: I1212 15:34:16.431591 5123 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="service-telemetry/infrawatch-operators-7p94m" Dec 12 15:34:18 crc kubenswrapper[5123]: I1212 15:34:18.323304 5123 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-cainjector-7dbf76d5c8-wcjkf"] Dec 12 15:34:18 crc kubenswrapper[5123]: I1212 15:34:18.329980 5123 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-7dbf76d5c8-wcjkf" Dec 12 15:34:18 crc kubenswrapper[5123]: I1212 15:34:18.341107 5123 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"cert-manager\"/\"cert-manager-cainjector-dockercfg-cscpj\"" Dec 12 15:34:18 crc kubenswrapper[5123]: I1212 15:34:18.341635 5123 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-7dbf76d5c8-wcjkf"] Dec 12 15:34:18 crc kubenswrapper[5123]: I1212 15:34:18.378826 5123 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-45hnz\" (UniqueName: \"kubernetes.io/projected/bcd20537-cbbf-4071-b1b0-3d3d266798f3-kube-api-access-45hnz\") pod \"cert-manager-cainjector-7dbf76d5c8-wcjkf\" (UID: \"bcd20537-cbbf-4071-b1b0-3d3d266798f3\") " pod="cert-manager/cert-manager-cainjector-7dbf76d5c8-wcjkf" Dec 12 15:34:18 crc kubenswrapper[5123]: I1212 15:34:18.379094 5123 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bcd20537-cbbf-4071-b1b0-3d3d266798f3-bound-sa-token\") pod \"cert-manager-cainjector-7dbf76d5c8-wcjkf\" (UID: \"bcd20537-cbbf-4071-b1b0-3d3d266798f3\") " pod="cert-manager/cert-manager-cainjector-7dbf76d5c8-wcjkf" Dec 12 15:34:18 crc kubenswrapper[5123]: I1212 15:34:18.597145 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bcd20537-cbbf-4071-b1b0-3d3d266798f3-bound-sa-token\") pod \"cert-manager-cainjector-7dbf76d5c8-wcjkf\" (UID: \"bcd20537-cbbf-4071-b1b0-3d3d266798f3\") " pod="cert-manager/cert-manager-cainjector-7dbf76d5c8-wcjkf" Dec 12 15:34:18 crc kubenswrapper[5123]: I1212 15:34:18.597306 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-45hnz\" (UniqueName: \"kubernetes.io/projected/bcd20537-cbbf-4071-b1b0-3d3d266798f3-kube-api-access-45hnz\") pod \"cert-manager-cainjector-7dbf76d5c8-wcjkf\" (UID: \"bcd20537-cbbf-4071-b1b0-3d3d266798f3\") " pod="cert-manager/cert-manager-cainjector-7dbf76d5c8-wcjkf" Dec 12 15:34:18 crc kubenswrapper[5123]: I1212 15:34:18.625713 5123 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-45hnz\" (UniqueName: \"kubernetes.io/projected/bcd20537-cbbf-4071-b1b0-3d3d266798f3-kube-api-access-45hnz\") pod \"cert-manager-cainjector-7dbf76d5c8-wcjkf\" (UID: \"bcd20537-cbbf-4071-b1b0-3d3d266798f3\") " pod="cert-manager/cert-manager-cainjector-7dbf76d5c8-wcjkf" Dec 12 15:34:18 crc kubenswrapper[5123]: I1212 15:34:18.630612 5123 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bcd20537-cbbf-4071-b1b0-3d3d266798f3-bound-sa-token\") pod \"cert-manager-cainjector-7dbf76d5c8-wcjkf\" (UID: \"bcd20537-cbbf-4071-b1b0-3d3d266798f3\") " 
pod="cert-manager/cert-manager-cainjector-7dbf76d5c8-wcjkf" Dec 12 15:34:18 crc kubenswrapper[5123]: I1212 15:34:18.658477 5123 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-7dbf76d5c8-wcjkf" Dec 12 15:34:18 crc kubenswrapper[5123]: I1212 15:34:18.959360 5123 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-7dbf76d5c8-wcjkf"] Dec 12 15:34:19 crc kubenswrapper[5123]: I1212 15:34:19.439931 5123 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-7dbf76d5c8-wcjkf" event={"ID":"bcd20537-cbbf-4071-b1b0-3d3d266798f3","Type":"ContainerStarted","Data":"8e2ccbe55742355f2ed9cfcb25758a4f52c886a9092a74edc67f6d19ed0fd329"} Dec 12 15:34:20 crc kubenswrapper[5123]: I1212 15:34:20.670661 5123 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/8ee8c55189da7090d5ccec56827ddc8c72a44413647227f2ae0842e7477g8f5"] Dec 12 15:34:20 crc kubenswrapper[5123]: I1212 15:34:20.684629 5123 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/8ee8c55189da7090d5ccec56827ddc8c72a44413647227f2ae0842e7477g8f5" Dec 12 15:34:20 crc kubenswrapper[5123]: I1212 15:34:20.688716 5123 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/8ee8c55189da7090d5ccec56827ddc8c72a44413647227f2ae0842e7477g8f5"] Dec 12 15:34:20 crc kubenswrapper[5123]: I1212 15:34:20.720528 5123 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/38c85702-10e7-4a8a-b082-74d25a6c3526-bundle\") pod \"8ee8c55189da7090d5ccec56827ddc8c72a44413647227f2ae0842e7477g8f5\" (UID: \"38c85702-10e7-4a8a-b082-74d25a6c3526\") " pod="service-telemetry/8ee8c55189da7090d5ccec56827ddc8c72a44413647227f2ae0842e7477g8f5" Dec 12 15:34:20 crc kubenswrapper[5123]: I1212 15:34:20.720958 5123 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/38c85702-10e7-4a8a-b082-74d25a6c3526-util\") pod \"8ee8c55189da7090d5ccec56827ddc8c72a44413647227f2ae0842e7477g8f5\" (UID: \"38c85702-10e7-4a8a-b082-74d25a6c3526\") " pod="service-telemetry/8ee8c55189da7090d5ccec56827ddc8c72a44413647227f2ae0842e7477g8f5" Dec 12 15:34:20 crc kubenswrapper[5123]: I1212 15:34:20.721087 5123 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rtbc2\" (UniqueName: \"kubernetes.io/projected/38c85702-10e7-4a8a-b082-74d25a6c3526-kube-api-access-rtbc2\") pod \"8ee8c55189da7090d5ccec56827ddc8c72a44413647227f2ae0842e7477g8f5\" (UID: \"38c85702-10e7-4a8a-b082-74d25a6c3526\") " pod="service-telemetry/8ee8c55189da7090d5ccec56827ddc8c72a44413647227f2ae0842e7477g8f5" Dec 12 15:34:20 crc kubenswrapper[5123]: I1212 15:34:20.822972 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-rtbc2\" (UniqueName: 
\"kubernetes.io/projected/38c85702-10e7-4a8a-b082-74d25a6c3526-kube-api-access-rtbc2\") pod \"8ee8c55189da7090d5ccec56827ddc8c72a44413647227f2ae0842e7477g8f5\" (UID: \"38c85702-10e7-4a8a-b082-74d25a6c3526\") " pod="service-telemetry/8ee8c55189da7090d5ccec56827ddc8c72a44413647227f2ae0842e7477g8f5" Dec 12 15:34:20 crc kubenswrapper[5123]: I1212 15:34:20.823063 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/38c85702-10e7-4a8a-b082-74d25a6c3526-bundle\") pod \"8ee8c55189da7090d5ccec56827ddc8c72a44413647227f2ae0842e7477g8f5\" (UID: \"38c85702-10e7-4a8a-b082-74d25a6c3526\") " pod="service-telemetry/8ee8c55189da7090d5ccec56827ddc8c72a44413647227f2ae0842e7477g8f5" Dec 12 15:34:20 crc kubenswrapper[5123]: I1212 15:34:20.823103 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/38c85702-10e7-4a8a-b082-74d25a6c3526-util\") pod \"8ee8c55189da7090d5ccec56827ddc8c72a44413647227f2ae0842e7477g8f5\" (UID: \"38c85702-10e7-4a8a-b082-74d25a6c3526\") " pod="service-telemetry/8ee8c55189da7090d5ccec56827ddc8c72a44413647227f2ae0842e7477g8f5" Dec 12 15:34:20 crc kubenswrapper[5123]: I1212 15:34:20.824460 5123 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/38c85702-10e7-4a8a-b082-74d25a6c3526-bundle\") pod \"8ee8c55189da7090d5ccec56827ddc8c72a44413647227f2ae0842e7477g8f5\" (UID: \"38c85702-10e7-4a8a-b082-74d25a6c3526\") " pod="service-telemetry/8ee8c55189da7090d5ccec56827ddc8c72a44413647227f2ae0842e7477g8f5" Dec 12 15:34:20 crc kubenswrapper[5123]: I1212 15:34:20.825830 5123 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/38c85702-10e7-4a8a-b082-74d25a6c3526-util\") pod \"8ee8c55189da7090d5ccec56827ddc8c72a44413647227f2ae0842e7477g8f5\" (UID: \"38c85702-10e7-4a8a-b082-74d25a6c3526\") " 
pod="service-telemetry/8ee8c55189da7090d5ccec56827ddc8c72a44413647227f2ae0842e7477g8f5" Dec 12 15:34:20 crc kubenswrapper[5123]: I1212 15:34:20.859506 5123 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-rtbc2\" (UniqueName: \"kubernetes.io/projected/38c85702-10e7-4a8a-b082-74d25a6c3526-kube-api-access-rtbc2\") pod \"8ee8c55189da7090d5ccec56827ddc8c72a44413647227f2ae0842e7477g8f5\" (UID: \"38c85702-10e7-4a8a-b082-74d25a6c3526\") " pod="service-telemetry/8ee8c55189da7090d5ccec56827ddc8c72a44413647227f2ae0842e7477g8f5" Dec 12 15:34:21 crc kubenswrapper[5123]: I1212 15:34:21.016540 5123 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/8ee8c55189da7090d5ccec56827ddc8c72a44413647227f2ae0842e7477g8f5" Dec 12 15:34:21 crc kubenswrapper[5123]: I1212 15:34:21.312440 5123 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fks6xl"] Dec 12 15:34:21 crc kubenswrapper[5123]: I1212 15:34:21.328783 5123 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fks6xl"] Dec 12 15:34:21 crc kubenswrapper[5123]: I1212 15:34:21.329153 5123 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fks6xl" Dec 12 15:34:21 crc kubenswrapper[5123]: I1212 15:34:21.333393 5123 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"default-dockercfg-b2ccr\"" Dec 12 15:34:21 crc kubenswrapper[5123]: I1212 15:34:21.379843 5123 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/5c82be41-adbe-45fa-a0df-f4884af99184-util\") pod \"6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fks6xl\" (UID: \"5c82be41-adbe-45fa-a0df-f4884af99184\") " pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fks6xl" Dec 12 15:34:21 crc kubenswrapper[5123]: I1212 15:34:21.379928 5123 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rc8tw\" (UniqueName: \"kubernetes.io/projected/5c82be41-adbe-45fa-a0df-f4884af99184-kube-api-access-rc8tw\") pod \"6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fks6xl\" (UID: \"5c82be41-adbe-45fa-a0df-f4884af99184\") " pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fks6xl" Dec 12 15:34:21 crc kubenswrapper[5123]: I1212 15:34:21.379968 5123 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/5c82be41-adbe-45fa-a0df-f4884af99184-bundle\") pod \"6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fks6xl\" (UID: \"5c82be41-adbe-45fa-a0df-f4884af99184\") " pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fks6xl" Dec 12 15:34:21 crc kubenswrapper[5123]: I1212 15:34:21.481819 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/5c82be41-adbe-45fa-a0df-f4884af99184-util\") 
pod \"6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fks6xl\" (UID: \"5c82be41-adbe-45fa-a0df-f4884af99184\") " pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fks6xl" Dec 12 15:34:21 crc kubenswrapper[5123]: I1212 15:34:21.481920 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-rc8tw\" (UniqueName: \"kubernetes.io/projected/5c82be41-adbe-45fa-a0df-f4884af99184-kube-api-access-rc8tw\") pod \"6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fks6xl\" (UID: \"5c82be41-adbe-45fa-a0df-f4884af99184\") " pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fks6xl" Dec 12 15:34:21 crc kubenswrapper[5123]: I1212 15:34:21.481961 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/5c82be41-adbe-45fa-a0df-f4884af99184-bundle\") pod \"6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fks6xl\" (UID: \"5c82be41-adbe-45fa-a0df-f4884af99184\") " pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fks6xl" Dec 12 15:34:21 crc kubenswrapper[5123]: I1212 15:34:21.482548 5123 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/5c82be41-adbe-45fa-a0df-f4884af99184-util\") pod \"6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fks6xl\" (UID: \"5c82be41-adbe-45fa-a0df-f4884af99184\") " pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fks6xl" Dec 12 15:34:21 crc kubenswrapper[5123]: I1212 15:34:21.482595 5123 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/5c82be41-adbe-45fa-a0df-f4884af99184-bundle\") pod \"6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fks6xl\" (UID: \"5c82be41-adbe-45fa-a0df-f4884af99184\") " 
pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fks6xl" Dec 12 15:34:21 crc kubenswrapper[5123]: I1212 15:34:21.513646 5123 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-rc8tw\" (UniqueName: \"kubernetes.io/projected/5c82be41-adbe-45fa-a0df-f4884af99184-kube-api-access-rc8tw\") pod \"6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fks6xl\" (UID: \"5c82be41-adbe-45fa-a0df-f4884af99184\") " pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fks6xl" Dec 12 15:34:21 crc kubenswrapper[5123]: I1212 15:34:21.666861 5123 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fks6xl" Dec 12 15:34:22 crc kubenswrapper[5123]: I1212 15:34:22.527205 5123 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/a37d2feed04fa147ea45c25cb64a74003bcaf6cda2941cce1e3a2ba788rtxj6"] Dec 12 15:34:22 crc kubenswrapper[5123]: I1212 15:34:22.709674 5123 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/a37d2feed04fa147ea45c25cb64a74003bcaf6cda2941cce1e3a2ba788rtxj6"] Dec 12 15:34:22 crc kubenswrapper[5123]: I1212 15:34:22.709946 5123 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/a37d2feed04fa147ea45c25cb64a74003bcaf6cda2941cce1e3a2ba788rtxj6" Dec 12 15:34:22 crc kubenswrapper[5123]: I1212 15:34:22.835526 5123 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gdl58\" (UniqueName: \"kubernetes.io/projected/2742c57f-506f-4854-9ca2-4f57ab8173d1-kube-api-access-gdl58\") pod \"a37d2feed04fa147ea45c25cb64a74003bcaf6cda2941cce1e3a2ba788rtxj6\" (UID: \"2742c57f-506f-4854-9ca2-4f57ab8173d1\") " pod="service-telemetry/a37d2feed04fa147ea45c25cb64a74003bcaf6cda2941cce1e3a2ba788rtxj6" Dec 12 15:34:22 crc kubenswrapper[5123]: I1212 15:34:22.835600 5123 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/2742c57f-506f-4854-9ca2-4f57ab8173d1-util\") pod \"a37d2feed04fa147ea45c25cb64a74003bcaf6cda2941cce1e3a2ba788rtxj6\" (UID: \"2742c57f-506f-4854-9ca2-4f57ab8173d1\") " pod="service-telemetry/a37d2feed04fa147ea45c25cb64a74003bcaf6cda2941cce1e3a2ba788rtxj6" Dec 12 15:34:22 crc kubenswrapper[5123]: I1212 15:34:22.835641 5123 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/2742c57f-506f-4854-9ca2-4f57ab8173d1-bundle\") pod \"a37d2feed04fa147ea45c25cb64a74003bcaf6cda2941cce1e3a2ba788rtxj6\" (UID: \"2742c57f-506f-4854-9ca2-4f57ab8173d1\") " pod="service-telemetry/a37d2feed04fa147ea45c25cb64a74003bcaf6cda2941cce1e3a2ba788rtxj6" Dec 12 15:34:22 crc kubenswrapper[5123]: I1212 15:34:22.937390 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-gdl58\" (UniqueName: \"kubernetes.io/projected/2742c57f-506f-4854-9ca2-4f57ab8173d1-kube-api-access-gdl58\") pod \"a37d2feed04fa147ea45c25cb64a74003bcaf6cda2941cce1e3a2ba788rtxj6\" (UID: \"2742c57f-506f-4854-9ca2-4f57ab8173d1\") " 
pod="service-telemetry/a37d2feed04fa147ea45c25cb64a74003bcaf6cda2941cce1e3a2ba788rtxj6" Dec 12 15:34:22 crc kubenswrapper[5123]: I1212 15:34:22.937656 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/2742c57f-506f-4854-9ca2-4f57ab8173d1-util\") pod \"a37d2feed04fa147ea45c25cb64a74003bcaf6cda2941cce1e3a2ba788rtxj6\" (UID: \"2742c57f-506f-4854-9ca2-4f57ab8173d1\") " pod="service-telemetry/a37d2feed04fa147ea45c25cb64a74003bcaf6cda2941cce1e3a2ba788rtxj6" Dec 12 15:34:22 crc kubenswrapper[5123]: I1212 15:34:22.937781 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/2742c57f-506f-4854-9ca2-4f57ab8173d1-bundle\") pod \"a37d2feed04fa147ea45c25cb64a74003bcaf6cda2941cce1e3a2ba788rtxj6\" (UID: \"2742c57f-506f-4854-9ca2-4f57ab8173d1\") " pod="service-telemetry/a37d2feed04fa147ea45c25cb64a74003bcaf6cda2941cce1e3a2ba788rtxj6" Dec 12 15:34:22 crc kubenswrapper[5123]: I1212 15:34:22.938262 5123 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/2742c57f-506f-4854-9ca2-4f57ab8173d1-util\") pod \"a37d2feed04fa147ea45c25cb64a74003bcaf6cda2941cce1e3a2ba788rtxj6\" (UID: \"2742c57f-506f-4854-9ca2-4f57ab8173d1\") " pod="service-telemetry/a37d2feed04fa147ea45c25cb64a74003bcaf6cda2941cce1e3a2ba788rtxj6" Dec 12 15:34:22 crc kubenswrapper[5123]: I1212 15:34:22.938377 5123 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/2742c57f-506f-4854-9ca2-4f57ab8173d1-bundle\") pod \"a37d2feed04fa147ea45c25cb64a74003bcaf6cda2941cce1e3a2ba788rtxj6\" (UID: \"2742c57f-506f-4854-9ca2-4f57ab8173d1\") " pod="service-telemetry/a37d2feed04fa147ea45c25cb64a74003bcaf6cda2941cce1e3a2ba788rtxj6" Dec 12 15:34:22 crc kubenswrapper[5123]: I1212 15:34:22.958828 5123 operation_generator.go:615] "MountVolume.SetUp succeeded for 
volume \"kube-api-access-gdl58\" (UniqueName: \"kubernetes.io/projected/2742c57f-506f-4854-9ca2-4f57ab8173d1-kube-api-access-gdl58\") pod \"a37d2feed04fa147ea45c25cb64a74003bcaf6cda2941cce1e3a2ba788rtxj6\" (UID: \"2742c57f-506f-4854-9ca2-4f57ab8173d1\") " pod="service-telemetry/a37d2feed04fa147ea45c25cb64a74003bcaf6cda2941cce1e3a2ba788rtxj6" Dec 12 15:34:23 crc kubenswrapper[5123]: I1212 15:34:23.092159 5123 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/a37d2feed04fa147ea45c25cb64a74003bcaf6cda2941cce1e3a2ba788rtxj6" Dec 12 15:34:26 crc kubenswrapper[5123]: I1212 15:34:26.438906 5123 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-858d87f86b-jlfl8"] Dec 12 15:34:26 crc kubenswrapper[5123]: I1212 15:34:26.454760 5123 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-858d87f86b-jlfl8"] Dec 12 15:34:26 crc kubenswrapper[5123]: I1212 15:34:26.455023 5123 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-858d87f86b-jlfl8" Dec 12 15:34:26 crc kubenswrapper[5123]: I1212 15:34:26.475358 5123 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"cert-manager\"/\"cert-manager-dockercfg-5wqck\"" Dec 12 15:34:26 crc kubenswrapper[5123]: I1212 15:34:26.633816 5123 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lkntb\" (UniqueName: \"kubernetes.io/projected/4030e433-7617-4fa9-92bb-7f154c0233ee-kube-api-access-lkntb\") pod \"cert-manager-858d87f86b-jlfl8\" (UID: \"4030e433-7617-4fa9-92bb-7f154c0233ee\") " pod="cert-manager/cert-manager-858d87f86b-jlfl8" Dec 12 15:34:26 crc kubenswrapper[5123]: I1212 15:34:26.634209 5123 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/4030e433-7617-4fa9-92bb-7f154c0233ee-bound-sa-token\") pod \"cert-manager-858d87f86b-jlfl8\" (UID: \"4030e433-7617-4fa9-92bb-7f154c0233ee\") " pod="cert-manager/cert-manager-858d87f86b-jlfl8" Dec 12 15:34:26 crc kubenswrapper[5123]: I1212 15:34:26.737836 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-lkntb\" (UniqueName: \"kubernetes.io/projected/4030e433-7617-4fa9-92bb-7f154c0233ee-kube-api-access-lkntb\") pod \"cert-manager-858d87f86b-jlfl8\" (UID: \"4030e433-7617-4fa9-92bb-7f154c0233ee\") " pod="cert-manager/cert-manager-858d87f86b-jlfl8" Dec 12 15:34:26 crc kubenswrapper[5123]: I1212 15:34:26.738146 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/4030e433-7617-4fa9-92bb-7f154c0233ee-bound-sa-token\") pod \"cert-manager-858d87f86b-jlfl8\" (UID: \"4030e433-7617-4fa9-92bb-7f154c0233ee\") " pod="cert-manager/cert-manager-858d87f86b-jlfl8" Dec 12 15:34:26 crc kubenswrapper[5123]: I1212 15:34:26.764303 5123 
operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/4030e433-7617-4fa9-92bb-7f154c0233ee-bound-sa-token\") pod \"cert-manager-858d87f86b-jlfl8\" (UID: \"4030e433-7617-4fa9-92bb-7f154c0233ee\") " pod="cert-manager/cert-manager-858d87f86b-jlfl8" Dec 12 15:34:26 crc kubenswrapper[5123]: I1212 15:34:26.764466 5123 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-lkntb\" (UniqueName: \"kubernetes.io/projected/4030e433-7617-4fa9-92bb-7f154c0233ee-kube-api-access-lkntb\") pod \"cert-manager-858d87f86b-jlfl8\" (UID: \"4030e433-7617-4fa9-92bb-7f154c0233ee\") " pod="cert-manager/cert-manager-858d87f86b-jlfl8" Dec 12 15:34:26 crc kubenswrapper[5123]: I1212 15:34:26.780865 5123 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-858d87f86b-jlfl8" Dec 12 15:34:27 crc kubenswrapper[5123]: I1212 15:34:27.541154 5123 prober.go:120] "Probe failed" probeType="Readiness" pod="service-telemetry/elasticsearch-es-default-0" podUID="a4b7933d-a5d4-45b8-8aaf-bec333ffcfe5" containerName="elasticsearch" probeResult="failure" output=< Dec 12 15:34:27 crc kubenswrapper[5123]: {"timestamp": "2025-12-12T15:34:27+00:00", "message": "readiness probe failed", "curl_rc": "7"} Dec 12 15:34:27 crc kubenswrapper[5123]: > Dec 12 15:34:30 crc kubenswrapper[5123]: I1212 15:34:30.902916 5123 patch_prober.go:28] interesting pod/machine-config-daemon-cs4j6 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 12 15:34:30 crc kubenswrapper[5123]: I1212 15:34:30.903401 5123 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-cs4j6" podUID="cc1dd55e-ab31-4be3-acf1-ef01a53c1bb4" containerName="machine-config-daemon" 
probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 12 15:34:32 crc kubenswrapper[5123]: I1212 15:34:32.488498 5123 prober.go:120] "Probe failed" probeType="Readiness" pod="service-telemetry/elasticsearch-es-default-0" podUID="a4b7933d-a5d4-45b8-8aaf-bec333ffcfe5" containerName="elasticsearch" probeResult="failure" output=< Dec 12 15:34:32 crc kubenswrapper[5123]: {"timestamp": "2025-12-12T15:34:32+00:00", "message": "readiness probe failed", "curl_rc": "7"} Dec 12 15:34:32 crc kubenswrapper[5123]: > Dec 12 15:34:33 crc kubenswrapper[5123]: I1212 15:34:33.008331 5123 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-config-operator_openshift-config-operator-5777786469-9j9pt_2c1e4fb9-bde9-46df-8ac0-c0b457ca767f/openshift-config-operator/0.log" Dec 12 15:34:33 crc kubenswrapper[5123]: I1212 15:34:33.008330 5123 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-config-operator_openshift-config-operator-5777786469-9j9pt_2c1e4fb9-bde9-46df-8ac0-c0b457ca767f/openshift-config-operator/0.log" Dec 12 15:34:33 crc kubenswrapper[5123]: I1212 15:34:33.018765 5123 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-27rm2_3ef15793-fa49-4c37-a355-d4573977e301/kube-multus/0.log" Dec 12 15:34:33 crc kubenswrapper[5123]: I1212 15:34:33.019504 5123 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-27rm2_3ef15793-fa49-4c37-a355-d4573977e301/kube-multus/0.log" Dec 12 15:34:33 crc kubenswrapper[5123]: I1212 15:34:33.032546 5123 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_9f0bc7fcb0822a2c13eb2d22cd8c0641/kube-controller-manager/0.log" Dec 12 15:34:33 crc kubenswrapper[5123]: I1212 15:34:33.033351 5123 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_9f0bc7fcb0822a2c13eb2d22cd8c0641/kube-controller-manager/0.log" Dec 12 15:34:33 crc kubenswrapper[5123]: I1212 15:34:33.226162 5123 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-858d87f86b-jlfl8"] Dec 12 15:34:33 crc kubenswrapper[5123]: W1212 15:34:33.240958 5123 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4030e433_7617_4fa9_92bb_7f154c0233ee.slice/crio-750ead147355bd49f43fd321f1b0b8f73bc89f710491958fab0cd4f98c6879cd WatchSource:0}: Error finding container 750ead147355bd49f43fd321f1b0b8f73bc89f710491958fab0cd4f98c6879cd: Status 404 returned error can't find the container with id 750ead147355bd49f43fd321f1b0b8f73bc89f710491958fab0cd4f98c6879cd Dec 12 15:34:33 crc kubenswrapper[5123]: I1212 15:34:33.281668 5123 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/8ee8c55189da7090d5ccec56827ddc8c72a44413647227f2ae0842e7477g8f5"] Dec 12 15:34:33 crc kubenswrapper[5123]: W1212 15:34:33.284662 5123 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod38c85702_10e7_4a8a_b082_74d25a6c3526.slice/crio-0fa0c3b93c75b6a05f15c2e9b73e0b54b6e68b42fb8d74f900a732e76eb00eb7 WatchSource:0}: Error finding container 0fa0c3b93c75b6a05f15c2e9b73e0b54b6e68b42fb8d74f900a732e76eb00eb7: Status 404 returned error can't find the container with id 0fa0c3b93c75b6a05f15c2e9b73e0b54b6e68b42fb8d74f900a732e76eb00eb7 Dec 12 15:34:33 crc kubenswrapper[5123]: I1212 15:34:33.364100 5123 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/a37d2feed04fa147ea45c25cb64a74003bcaf6cda2941cce1e3a2ba788rtxj6"] Dec 12 15:34:33 crc kubenswrapper[5123]: W1212 15:34:33.370189 5123 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5c82be41_adbe_45fa_a0df_f4884af99184.slice/crio-e03bba607cb1de4d3e932a8d8fad313de33690c8398df02b49827a588ed16fe7 WatchSource:0}: Error finding container e03bba607cb1de4d3e932a8d8fad313de33690c8398df02b49827a588ed16fe7: Status 404 returned error can't find the container with id e03bba607cb1de4d3e932a8d8fad313de33690c8398df02b49827a588ed16fe7 Dec 12 15:34:33 crc kubenswrapper[5123]: I1212 15:34:33.373185 5123 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fks6xl"] Dec 12 15:34:33 crc kubenswrapper[5123]: W1212 15:34:33.374831 5123 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2742c57f_506f_4854_9ca2_4f57ab8173d1.slice/crio-83e2099fdb5d7475980a0d5c9bb0be80f94d95233bce2b93b141cde0f1c1a702 WatchSource:0}: Error finding container 83e2099fdb5d7475980a0d5c9bb0be80f94d95233bce2b93b141cde0f1c1a702: Status 404 returned error can't find the container with id 83e2099fdb5d7475980a0d5c9bb0be80f94d95233bce2b93b141cde0f1c1a702 Dec 12 15:34:34 crc kubenswrapper[5123]: I1212 15:34:34.096357 5123 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-7dbf76d5c8-wcjkf" event={"ID":"bcd20537-cbbf-4071-b1b0-3d3d266798f3","Type":"ContainerStarted","Data":"5e3b5343f6da58dec121504d14026ba115580279d91780530812d89666ff8413"} Dec 12 15:34:34 crc kubenswrapper[5123]: I1212 15:34:34.097389 5123 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-858d87f86b-jlfl8" event={"ID":"4030e433-7617-4fa9-92bb-7f154c0233ee","Type":"ContainerStarted","Data":"750ead147355bd49f43fd321f1b0b8f73bc89f710491958fab0cd4f98c6879cd"} Dec 12 15:34:34 crc kubenswrapper[5123]: I1212 15:34:34.098724 5123 kubelet.go:2569] "SyncLoop (PLEG): event for pod" 
pod="service-telemetry/a37d2feed04fa147ea45c25cb64a74003bcaf6cda2941cce1e3a2ba788rtxj6" event={"ID":"2742c57f-506f-4854-9ca2-4f57ab8173d1","Type":"ContainerStarted","Data":"83e2099fdb5d7475980a0d5c9bb0be80f94d95233bce2b93b141cde0f1c1a702"} Dec 12 15:34:34 crc kubenswrapper[5123]: I1212 15:34:34.100005 5123 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fks6xl" event={"ID":"5c82be41-adbe-45fa-a0df-f4884af99184","Type":"ContainerStarted","Data":"e03bba607cb1de4d3e932a8d8fad313de33690c8398df02b49827a588ed16fe7"} Dec 12 15:34:34 crc kubenswrapper[5123]: I1212 15:34:34.101884 5123 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/8ee8c55189da7090d5ccec56827ddc8c72a44413647227f2ae0842e7477g8f5" event={"ID":"38c85702-10e7-4a8a-b082-74d25a6c3526","Type":"ContainerStarted","Data":"0fa0c3b93c75b6a05f15c2e9b73e0b54b6e68b42fb8d74f900a732e76eb00eb7"} Dec 12 15:34:35 crc kubenswrapper[5123]: I1212 15:34:35.119848 5123 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-7894b5b9b4-hjdq2" event={"ID":"30409beb-bbf3-4e0f-94fe-a2039b3d8985","Type":"ContainerStarted","Data":"f354af4f5359095687367a9e1ab7c32e56e8782755e70ab214ef3b1e3aee13b9"} Dec 12 15:34:36 crc kubenswrapper[5123]: I1212 15:34:36.143454 5123 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-858d87f86b-jlfl8" event={"ID":"4030e433-7617-4fa9-92bb-7f154c0233ee","Type":"ContainerStarted","Data":"bf6365e26e10247a3f1817aa5e81395106395538a69efea7ad25fe32e13408bf"} Dec 12 15:34:36 crc kubenswrapper[5123]: I1212 15:34:36.146141 5123 generic.go:358] "Generic (PLEG): container finished" podID="2742c57f-506f-4854-9ca2-4f57ab8173d1" containerID="3bd4671ffad75ca6a9accd9b0e52972427e8feb4d9a925b361af3d78cd2b662f" exitCode=0 Dec 12 15:34:36 crc kubenswrapper[5123]: I1212 15:34:36.146628 5123 kubelet.go:2569] "SyncLoop (PLEG): event for pod" 
pod="service-telemetry/a37d2feed04fa147ea45c25cb64a74003bcaf6cda2941cce1e3a2ba788rtxj6" event={"ID":"2742c57f-506f-4854-9ca2-4f57ab8173d1","Type":"ContainerDied","Data":"3bd4671ffad75ca6a9accd9b0e52972427e8feb4d9a925b361af3d78cd2b662f"} Dec 12 15:34:36 crc kubenswrapper[5123]: I1212 15:34:36.154641 5123 generic.go:358] "Generic (PLEG): container finished" podID="5c82be41-adbe-45fa-a0df-f4884af99184" containerID="4520d000d3ebe77dd545d92ebaed133ea8f5f426c40039ef66490a54c26fd03b" exitCode=0 Dec 12 15:34:36 crc kubenswrapper[5123]: I1212 15:34:36.155210 5123 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fks6xl" event={"ID":"5c82be41-adbe-45fa-a0df-f4884af99184","Type":"ContainerDied","Data":"4520d000d3ebe77dd545d92ebaed133ea8f5f426c40039ef66490a54c26fd03b"} Dec 12 15:34:36 crc kubenswrapper[5123]: I1212 15:34:36.160094 5123 generic.go:358] "Generic (PLEG): container finished" podID="38c85702-10e7-4a8a-b082-74d25a6c3526" containerID="8c4b496fa2278f3f8faa1da36e2de3496f3ca4fb1ec73a9c4e80ca397f74caeb" exitCode=0 Dec 12 15:34:36 crc kubenswrapper[5123]: I1212 15:34:36.161441 5123 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/8ee8c55189da7090d5ccec56827ddc8c72a44413647227f2ae0842e7477g8f5" event={"ID":"38c85702-10e7-4a8a-b082-74d25a6c3526","Type":"ContainerDied","Data":"8c4b496fa2278f3f8faa1da36e2de3496f3ca4fb1ec73a9c4e80ca397f74caeb"} Dec 12 15:34:36 crc kubenswrapper[5123]: I1212 15:34:36.161781 5123 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="cert-manager/cert-manager-webhook-7894b5b9b4-hjdq2" Dec 12 15:34:36 crc kubenswrapper[5123]: I1212 15:34:36.172287 5123 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-858d87f86b-jlfl8" podStartSLOduration=10.172241778 podStartE2EDuration="10.172241778s" podCreationTimestamp="2025-12-12 15:34:26 +0000 UTC" firstStartedPulling="0001-01-01 
00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 15:34:36.16665371 +0000 UTC m=+904.976606221" watchObservedRunningTime="2025-12-12 15:34:36.172241778 +0000 UTC m=+904.982194309" Dec 12 15:34:36 crc kubenswrapper[5123]: I1212 15:34:36.220725 5123 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-webhook-7894b5b9b4-hjdq2" podStartSLOduration=4.392938023 podStartE2EDuration="21.220701525s" podCreationTimestamp="2025-12-12 15:34:15 +0000 UTC" firstStartedPulling="2025-12-12 15:34:16.249990277 +0000 UTC m=+885.059942788" lastFinishedPulling="2025-12-12 15:34:33.077753779 +0000 UTC m=+901.887706290" observedRunningTime="2025-12-12 15:34:36.197507179 +0000 UTC m=+905.007459700" watchObservedRunningTime="2025-12-12 15:34:36.220701525 +0000 UTC m=+905.030654036" Dec 12 15:34:36 crc kubenswrapper[5123]: I1212 15:34:36.222739 5123 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-cainjector-7dbf76d5c8-wcjkf" podStartSLOduration=4.152975468 podStartE2EDuration="18.222725659s" podCreationTimestamp="2025-12-12 15:34:18 +0000 UTC" firstStartedPulling="2025-12-12 15:34:18.969111095 +0000 UTC m=+887.779063606" lastFinishedPulling="2025-12-12 15:34:33.038861286 +0000 UTC m=+901.848813797" observedRunningTime="2025-12-12 15:34:36.219970562 +0000 UTC m=+905.029923083" watchObservedRunningTime="2025-12-12 15:34:36.222725659 +0000 UTC m=+905.032678170" Dec 12 15:34:38 crc kubenswrapper[5123]: I1212 15:34:38.175106 5123 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="service-telemetry/elasticsearch-es-default-0" Dec 12 15:34:38 crc kubenswrapper[5123]: I1212 15:34:38.181317 5123 generic.go:358] "Generic (PLEG): container finished" podID="2742c57f-506f-4854-9ca2-4f57ab8173d1" containerID="a5f997d8a03ee411c3062de59684b3a08356442c6da6aae7a386574fe71e8672" exitCode=0 Dec 12 15:34:38 crc kubenswrapper[5123]: I1212 
15:34:38.181464 5123 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/a37d2feed04fa147ea45c25cb64a74003bcaf6cda2941cce1e3a2ba788rtxj6" event={"ID":"2742c57f-506f-4854-9ca2-4f57ab8173d1","Type":"ContainerDied","Data":"a5f997d8a03ee411c3062de59684b3a08356442c6da6aae7a386574fe71e8672"} Dec 12 15:34:39 crc kubenswrapper[5123]: I1212 15:34:39.191019 5123 generic.go:358] "Generic (PLEG): container finished" podID="2742c57f-506f-4854-9ca2-4f57ab8173d1" containerID="7acd88f60ca416eacf9ceef8a2bd414a544c7d2b03d9ad4951dd68821866473c" exitCode=0 Dec 12 15:34:39 crc kubenswrapper[5123]: I1212 15:34:39.191344 5123 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/a37d2feed04fa147ea45c25cb64a74003bcaf6cda2941cce1e3a2ba788rtxj6" event={"ID":"2742c57f-506f-4854-9ca2-4f57ab8173d1","Type":"ContainerDied","Data":"7acd88f60ca416eacf9ceef8a2bd414a544c7d2b03d9ad4951dd68821866473c"} Dec 12 15:34:39 crc kubenswrapper[5123]: I1212 15:34:39.195237 5123 generic.go:358] "Generic (PLEG): container finished" podID="5c82be41-adbe-45fa-a0df-f4884af99184" containerID="e077362e272ff76e3c03c30d7a3a66b8c9effc91f3044605a7fdb9916d72a9a6" exitCode=0 Dec 12 15:34:39 crc kubenswrapper[5123]: I1212 15:34:39.195415 5123 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fks6xl" event={"ID":"5c82be41-adbe-45fa-a0df-f4884af99184","Type":"ContainerDied","Data":"e077362e272ff76e3c03c30d7a3a66b8c9effc91f3044605a7fdb9916d72a9a6"} Dec 12 15:34:39 crc kubenswrapper[5123]: I1212 15:34:39.199477 5123 generic.go:358] "Generic (PLEG): container finished" podID="38c85702-10e7-4a8a-b082-74d25a6c3526" containerID="f840e2e0d20643b959c3646c8ae778ef89c7f87cef9f6fee6b0c7db6b42b5980" exitCode=0 Dec 12 15:34:39 crc kubenswrapper[5123]: I1212 15:34:39.199599 5123 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/8ee8c55189da7090d5ccec56827ddc8c72a44413647227f2ae0842e7477g8f5" 
event={"ID":"38c85702-10e7-4a8a-b082-74d25a6c3526","Type":"ContainerDied","Data":"f840e2e0d20643b959c3646c8ae778ef89c7f87cef9f6fee6b0c7db6b42b5980"} Dec 12 15:34:40 crc kubenswrapper[5123]: I1212 15:34:40.211338 5123 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fks6xl" event={"ID":"5c82be41-adbe-45fa-a0df-f4884af99184","Type":"ContainerStarted","Data":"3eddf5b767f75fe84d401f1bdcc37dc54be0512786d4e73f6df664d3e8cdbe43"} Dec 12 15:34:40 crc kubenswrapper[5123]: I1212 15:34:40.218782 5123 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/8ee8c55189da7090d5ccec56827ddc8c72a44413647227f2ae0842e7477g8f5" event={"ID":"38c85702-10e7-4a8a-b082-74d25a6c3526","Type":"ContainerStarted","Data":"0d775da7480cd06d1b86f3bd313672cf82007acba2ed08481556a70e50100534"} Dec 12 15:34:40 crc kubenswrapper[5123]: I1212 15:34:40.237895 5123 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fks6xl" podStartSLOduration=16.894227791 podStartE2EDuration="19.237867456s" podCreationTimestamp="2025-12-12 15:34:21 +0000 UTC" firstStartedPulling="2025-12-12 15:34:36.158008606 +0000 UTC m=+904.967961107" lastFinishedPulling="2025-12-12 15:34:38.501648261 +0000 UTC m=+907.311600772" observedRunningTime="2025-12-12 15:34:40.236636446 +0000 UTC m=+909.046588957" watchObservedRunningTime="2025-12-12 15:34:40.237867456 +0000 UTC m=+909.047819977" Dec 12 15:34:40 crc kubenswrapper[5123]: I1212 15:34:40.261124 5123 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/8ee8c55189da7090d5ccec56827ddc8c72a44413647227f2ae0842e7477g8f5" podStartSLOduration=18.763885381 podStartE2EDuration="20.261102872s" podCreationTimestamp="2025-12-12 15:34:20 +0000 UTC" firstStartedPulling="2025-12-12 15:34:36.163266323 +0000 UTC m=+904.973218834" lastFinishedPulling="2025-12-12 
15:34:37.660483814 +0000 UTC m=+906.470436325" observedRunningTime="2025-12-12 15:34:40.259517162 +0000 UTC m=+909.069469693" watchObservedRunningTime="2025-12-12 15:34:40.261102872 +0000 UTC m=+909.071055383" Dec 12 15:34:40 crc kubenswrapper[5123]: I1212 15:34:40.720505 5123 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/a37d2feed04fa147ea45c25cb64a74003bcaf6cda2941cce1e3a2ba788rtxj6" Dec 12 15:34:40 crc kubenswrapper[5123]: I1212 15:34:40.733443 5123 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/2742c57f-506f-4854-9ca2-4f57ab8173d1-util\") pod \"2742c57f-506f-4854-9ca2-4f57ab8173d1\" (UID: \"2742c57f-506f-4854-9ca2-4f57ab8173d1\") " Dec 12 15:34:40 crc kubenswrapper[5123]: I1212 15:34:40.733900 5123 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/2742c57f-506f-4854-9ca2-4f57ab8173d1-bundle\") pod \"2742c57f-506f-4854-9ca2-4f57ab8173d1\" (UID: \"2742c57f-506f-4854-9ca2-4f57ab8173d1\") " Dec 12 15:34:40 crc kubenswrapper[5123]: I1212 15:34:40.734092 5123 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gdl58\" (UniqueName: \"kubernetes.io/projected/2742c57f-506f-4854-9ca2-4f57ab8173d1-kube-api-access-gdl58\") pod \"2742c57f-506f-4854-9ca2-4f57ab8173d1\" (UID: \"2742c57f-506f-4854-9ca2-4f57ab8173d1\") " Dec 12 15:34:40 crc kubenswrapper[5123]: I1212 15:34:40.734546 5123 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2742c57f-506f-4854-9ca2-4f57ab8173d1-bundle" (OuterVolumeSpecName: "bundle") pod "2742c57f-506f-4854-9ca2-4f57ab8173d1" (UID: "2742c57f-506f-4854-9ca2-4f57ab8173d1"). InnerVolumeSpecName "bundle". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 12 15:34:40 crc kubenswrapper[5123]: I1212 15:34:40.742317 5123 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2742c57f-506f-4854-9ca2-4f57ab8173d1-kube-api-access-gdl58" (OuterVolumeSpecName: "kube-api-access-gdl58") pod "2742c57f-506f-4854-9ca2-4f57ab8173d1" (UID: "2742c57f-506f-4854-9ca2-4f57ab8173d1"). InnerVolumeSpecName "kube-api-access-gdl58". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 12 15:34:40 crc kubenswrapper[5123]: I1212 15:34:40.836319 5123 reconciler_common.go:299] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/2742c57f-506f-4854-9ca2-4f57ab8173d1-bundle\") on node \"crc\" DevicePath \"\"" Dec 12 15:34:40 crc kubenswrapper[5123]: I1212 15:34:40.836378 5123 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-gdl58\" (UniqueName: \"kubernetes.io/projected/2742c57f-506f-4854-9ca2-4f57ab8173d1-kube-api-access-gdl58\") on node \"crc\" DevicePath \"\"" Dec 12 15:34:41 crc kubenswrapper[5123]: I1212 15:34:41.231286 5123 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/a37d2feed04fa147ea45c25cb64a74003bcaf6cda2941cce1e3a2ba788rtxj6" Dec 12 15:34:41 crc kubenswrapper[5123]: I1212 15:34:41.231284 5123 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/a37d2feed04fa147ea45c25cb64a74003bcaf6cda2941cce1e3a2ba788rtxj6" event={"ID":"2742c57f-506f-4854-9ca2-4f57ab8173d1","Type":"ContainerDied","Data":"83e2099fdb5d7475980a0d5c9bb0be80f94d95233bce2b93b141cde0f1c1a702"} Dec 12 15:34:41 crc kubenswrapper[5123]: I1212 15:34:41.231530 5123 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="83e2099fdb5d7475980a0d5c9bb0be80f94d95233bce2b93b141cde0f1c1a702" Dec 12 15:34:41 crc kubenswrapper[5123]: I1212 15:34:41.233579 5123 generic.go:358] "Generic (PLEG): container finished" podID="5c82be41-adbe-45fa-a0df-f4884af99184" containerID="3eddf5b767f75fe84d401f1bdcc37dc54be0512786d4e73f6df664d3e8cdbe43" exitCode=0 Dec 12 15:34:41 crc kubenswrapper[5123]: I1212 15:34:41.233657 5123 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fks6xl" event={"ID":"5c82be41-adbe-45fa-a0df-f4884af99184","Type":"ContainerDied","Data":"3eddf5b767f75fe84d401f1bdcc37dc54be0512786d4e73f6df664d3e8cdbe43"} Dec 12 15:34:41 crc kubenswrapper[5123]: I1212 15:34:41.235795 5123 generic.go:358] "Generic (PLEG): container finished" podID="38c85702-10e7-4a8a-b082-74d25a6c3526" containerID="0d775da7480cd06d1b86f3bd313672cf82007acba2ed08481556a70e50100534" exitCode=0 Dec 12 15:34:41 crc kubenswrapper[5123]: I1212 15:34:41.235919 5123 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/8ee8c55189da7090d5ccec56827ddc8c72a44413647227f2ae0842e7477g8f5" event={"ID":"38c85702-10e7-4a8a-b082-74d25a6c3526","Type":"ContainerDied","Data":"0d775da7480cd06d1b86f3bd313672cf82007acba2ed08481556a70e50100534"} Dec 12 15:34:41 crc kubenswrapper[5123]: I1212 15:34:41.783583 5123 
operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2742c57f-506f-4854-9ca2-4f57ab8173d1-util" (OuterVolumeSpecName: "util") pod "2742c57f-506f-4854-9ca2-4f57ab8173d1" (UID: "2742c57f-506f-4854-9ca2-4f57ab8173d1"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 12 15:34:42 crc kubenswrapper[5123]: I1212 15:34:41.797190 5123 reconciler_common.go:299] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/2742c57f-506f-4854-9ca2-4f57ab8173d1-util\") on node \"crc\" DevicePath \"\"" Dec 12 15:34:42 crc kubenswrapper[5123]: I1212 15:34:42.178043 5123 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="cert-manager/cert-manager-webhook-7894b5b9b4-hjdq2" Dec 12 15:34:42 crc kubenswrapper[5123]: I1212 15:34:42.603338 5123 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/8ee8c55189da7090d5ccec56827ddc8c72a44413647227f2ae0842e7477g8f5" Dec 12 15:34:42 crc kubenswrapper[5123]: I1212 15:34:42.606122 5123 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fks6xl" Dec 12 15:34:42 crc kubenswrapper[5123]: I1212 15:34:42.621369 5123 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/38c85702-10e7-4a8a-b082-74d25a6c3526-bundle\") pod \"38c85702-10e7-4a8a-b082-74d25a6c3526\" (UID: \"38c85702-10e7-4a8a-b082-74d25a6c3526\") " Dec 12 15:34:42 crc kubenswrapper[5123]: I1212 15:34:42.621446 5123 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rtbc2\" (UniqueName: \"kubernetes.io/projected/38c85702-10e7-4a8a-b082-74d25a6c3526-kube-api-access-rtbc2\") pod \"38c85702-10e7-4a8a-b082-74d25a6c3526\" (UID: \"38c85702-10e7-4a8a-b082-74d25a6c3526\") " Dec 12 15:34:42 crc kubenswrapper[5123]: I1212 15:34:42.621508 5123 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/5c82be41-adbe-45fa-a0df-f4884af99184-bundle\") pod \"5c82be41-adbe-45fa-a0df-f4884af99184\" (UID: \"5c82be41-adbe-45fa-a0df-f4884af99184\") " Dec 12 15:34:42 crc kubenswrapper[5123]: I1212 15:34:42.621594 5123 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rc8tw\" (UniqueName: \"kubernetes.io/projected/5c82be41-adbe-45fa-a0df-f4884af99184-kube-api-access-rc8tw\") pod \"5c82be41-adbe-45fa-a0df-f4884af99184\" (UID: \"5c82be41-adbe-45fa-a0df-f4884af99184\") " Dec 12 15:34:42 crc kubenswrapper[5123]: I1212 15:34:42.621691 5123 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/38c85702-10e7-4a8a-b082-74d25a6c3526-util\") pod \"38c85702-10e7-4a8a-b082-74d25a6c3526\" (UID: \"38c85702-10e7-4a8a-b082-74d25a6c3526\") " Dec 12 15:34:42 crc kubenswrapper[5123]: I1212 15:34:42.621730 5123 reconciler_common.go:162] "operationExecutor.UnmountVolume started 
for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/5c82be41-adbe-45fa-a0df-f4884af99184-util\") pod \"5c82be41-adbe-45fa-a0df-f4884af99184\" (UID: \"5c82be41-adbe-45fa-a0df-f4884af99184\") "
Dec 12 15:34:42 crc kubenswrapper[5123]: I1212 15:34:42.623333 5123 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5c82be41-adbe-45fa-a0df-f4884af99184-bundle" (OuterVolumeSpecName: "bundle") pod "5c82be41-adbe-45fa-a0df-f4884af99184" (UID: "5c82be41-adbe-45fa-a0df-f4884af99184"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Dec 12 15:34:42 crc kubenswrapper[5123]: I1212 15:34:42.623830 5123 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/38c85702-10e7-4a8a-b082-74d25a6c3526-bundle" (OuterVolumeSpecName: "bundle") pod "38c85702-10e7-4a8a-b082-74d25a6c3526" (UID: "38c85702-10e7-4a8a-b082-74d25a6c3526"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Dec 12 15:34:42 crc kubenswrapper[5123]: I1212 15:34:42.630056 5123 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5c82be41-adbe-45fa-a0df-f4884af99184-kube-api-access-rc8tw" (OuterVolumeSpecName: "kube-api-access-rc8tw") pod "5c82be41-adbe-45fa-a0df-f4884af99184" (UID: "5c82be41-adbe-45fa-a0df-f4884af99184"). InnerVolumeSpecName "kube-api-access-rc8tw". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Dec 12 15:34:42 crc kubenswrapper[5123]: I1212 15:34:42.634783 5123 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/38c85702-10e7-4a8a-b082-74d25a6c3526-kube-api-access-rtbc2" (OuterVolumeSpecName: "kube-api-access-rtbc2") pod "38c85702-10e7-4a8a-b082-74d25a6c3526" (UID: "38c85702-10e7-4a8a-b082-74d25a6c3526"). InnerVolumeSpecName "kube-api-access-rtbc2". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Dec 12 15:34:42 crc kubenswrapper[5123]: I1212 15:34:42.641322 5123 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5c82be41-adbe-45fa-a0df-f4884af99184-util" (OuterVolumeSpecName: "util") pod "5c82be41-adbe-45fa-a0df-f4884af99184" (UID: "5c82be41-adbe-45fa-a0df-f4884af99184"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Dec 12 15:34:42 crc kubenswrapper[5123]: I1212 15:34:42.643403 5123 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/38c85702-10e7-4a8a-b082-74d25a6c3526-util" (OuterVolumeSpecName: "util") pod "38c85702-10e7-4a8a-b082-74d25a6c3526" (UID: "38c85702-10e7-4a8a-b082-74d25a6c3526"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Dec 12 15:34:42 crc kubenswrapper[5123]: I1212 15:34:42.801611 5123 reconciler_common.go:299] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/38c85702-10e7-4a8a-b082-74d25a6c3526-util\") on node \"crc\" DevicePath \"\""
Dec 12 15:34:42 crc kubenswrapper[5123]: I1212 15:34:42.801670 5123 reconciler_common.go:299] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/5c82be41-adbe-45fa-a0df-f4884af99184-util\") on node \"crc\" DevicePath \"\""
Dec 12 15:34:42 crc kubenswrapper[5123]: I1212 15:34:42.801685 5123 reconciler_common.go:299] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/38c85702-10e7-4a8a-b082-74d25a6c3526-bundle\") on node \"crc\" DevicePath \"\""
Dec 12 15:34:42 crc kubenswrapper[5123]: I1212 15:34:42.801700 5123 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-rtbc2\" (UniqueName: \"kubernetes.io/projected/38c85702-10e7-4a8a-b082-74d25a6c3526-kube-api-access-rtbc2\") on node \"crc\" DevicePath \"\""
Dec 12 15:34:42 crc kubenswrapper[5123]: I1212 15:34:42.801728 5123 reconciler_common.go:299] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/5c82be41-adbe-45fa-a0df-f4884af99184-bundle\") on node \"crc\" DevicePath \"\""
Dec 12 15:34:42 crc kubenswrapper[5123]: I1212 15:34:42.801740 5123 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-rc8tw\" (UniqueName: \"kubernetes.io/projected/5c82be41-adbe-45fa-a0df-f4884af99184-kube-api-access-rc8tw\") on node \"crc\" DevicePath \"\""
Dec 12 15:34:43 crc kubenswrapper[5123]: I1212 15:34:43.259143 5123 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/8ee8c55189da7090d5ccec56827ddc8c72a44413647227f2ae0842e7477g8f5"
Dec 12 15:34:43 crc kubenswrapper[5123]: I1212 15:34:43.259150 5123 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/8ee8c55189da7090d5ccec56827ddc8c72a44413647227f2ae0842e7477g8f5" event={"ID":"38c85702-10e7-4a8a-b082-74d25a6c3526","Type":"ContainerDied","Data":"0fa0c3b93c75b6a05f15c2e9b73e0b54b6e68b42fb8d74f900a732e76eb00eb7"}
Dec 12 15:34:43 crc kubenswrapper[5123]: I1212 15:34:43.259684 5123 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0fa0c3b93c75b6a05f15c2e9b73e0b54b6e68b42fb8d74f900a732e76eb00eb7"
Dec 12 15:34:43 crc kubenswrapper[5123]: I1212 15:34:43.270988 5123 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fks6xl" event={"ID":"5c82be41-adbe-45fa-a0df-f4884af99184","Type":"ContainerDied","Data":"e03bba607cb1de4d3e932a8d8fad313de33690c8398df02b49827a588ed16fe7"}
Dec 12 15:34:43 crc kubenswrapper[5123]: I1212 15:34:43.271066 5123 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e03bba607cb1de4d3e932a8d8fad313de33690c8398df02b49827a588ed16fe7"
Dec 12 15:34:43 crc kubenswrapper[5123]: I1212 15:34:43.274328 5123 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fks6xl"
Dec 12 15:34:50 crc kubenswrapper[5123]: I1212 15:34:50.047920 5123 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/service-telemetry-operator-ccf9cd448-kngv6"]
Dec 12 15:34:50 crc kubenswrapper[5123]: I1212 15:34:50.049588 5123 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="5c82be41-adbe-45fa-a0df-f4884af99184" containerName="util"
Dec 12 15:34:50 crc kubenswrapper[5123]: I1212 15:34:50.049619 5123 state_mem.go:107] "Deleted CPUSet assignment" podUID="5c82be41-adbe-45fa-a0df-f4884af99184" containerName="util"
Dec 12 15:34:50 crc kubenswrapper[5123]: I1212 15:34:50.049634 5123 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="2742c57f-506f-4854-9ca2-4f57ab8173d1" containerName="pull"
Dec 12 15:34:50 crc kubenswrapper[5123]: I1212 15:34:50.049641 5123 state_mem.go:107] "Deleted CPUSet assignment" podUID="2742c57f-506f-4854-9ca2-4f57ab8173d1" containerName="pull"
Dec 12 15:34:50 crc kubenswrapper[5123]: I1212 15:34:50.049654 5123 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="38c85702-10e7-4a8a-b082-74d25a6c3526" containerName="pull"
Dec 12 15:34:50 crc kubenswrapper[5123]: I1212 15:34:50.049663 5123 state_mem.go:107] "Deleted CPUSet assignment" podUID="38c85702-10e7-4a8a-b082-74d25a6c3526" containerName="pull"
Dec 12 15:34:50 crc kubenswrapper[5123]: I1212 15:34:50.049682 5123 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="38c85702-10e7-4a8a-b082-74d25a6c3526" containerName="extract"
Dec 12 15:34:50 crc kubenswrapper[5123]: I1212 15:34:50.049689 5123 state_mem.go:107] "Deleted CPUSet assignment" podUID="38c85702-10e7-4a8a-b082-74d25a6c3526" containerName="extract"
Dec 12 15:34:50 crc kubenswrapper[5123]: I1212 15:34:50.049700 5123 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="2742c57f-506f-4854-9ca2-4f57ab8173d1" containerName="extract"
Dec 12 15:34:50 crc kubenswrapper[5123]: I1212 15:34:50.049707 5123 state_mem.go:107] "Deleted CPUSet assignment" podUID="2742c57f-506f-4854-9ca2-4f57ab8173d1" containerName="extract"
Dec 12 15:34:50 crc kubenswrapper[5123]: I1212 15:34:50.049724 5123 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="5c82be41-adbe-45fa-a0df-f4884af99184" containerName="extract"
Dec 12 15:34:50 crc kubenswrapper[5123]: I1212 15:34:50.049731 5123 state_mem.go:107] "Deleted CPUSet assignment" podUID="5c82be41-adbe-45fa-a0df-f4884af99184" containerName="extract"
Dec 12 15:34:50 crc kubenswrapper[5123]: I1212 15:34:50.049741 5123 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="38c85702-10e7-4a8a-b082-74d25a6c3526" containerName="util"
Dec 12 15:34:50 crc kubenswrapper[5123]: I1212 15:34:50.049747 5123 state_mem.go:107] "Deleted CPUSet assignment" podUID="38c85702-10e7-4a8a-b082-74d25a6c3526" containerName="util"
Dec 12 15:34:50 crc kubenswrapper[5123]: I1212 15:34:50.049758 5123 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="5c82be41-adbe-45fa-a0df-f4884af99184" containerName="pull"
Dec 12 15:34:50 crc kubenswrapper[5123]: I1212 15:34:50.049765 5123 state_mem.go:107] "Deleted CPUSet assignment" podUID="5c82be41-adbe-45fa-a0df-f4884af99184" containerName="pull"
Dec 12 15:34:50 crc kubenswrapper[5123]: I1212 15:34:50.049787 5123 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="2742c57f-506f-4854-9ca2-4f57ab8173d1" containerName="util"
Dec 12 15:34:50 crc kubenswrapper[5123]: I1212 15:34:50.049794 5123 state_mem.go:107] "Deleted CPUSet assignment" podUID="2742c57f-506f-4854-9ca2-4f57ab8173d1" containerName="util"
Dec 12 15:34:50 crc kubenswrapper[5123]: I1212 15:34:50.049964 5123 memory_manager.go:356] "RemoveStaleState removing state" podUID="2742c57f-506f-4854-9ca2-4f57ab8173d1" containerName="extract"
Dec 12 15:34:50 crc kubenswrapper[5123]: I1212 15:34:50.049983 5123 memory_manager.go:356] "RemoveStaleState removing state" podUID="38c85702-10e7-4a8a-b082-74d25a6c3526" containerName="extract"
Dec 12 15:34:50 crc kubenswrapper[5123]: I1212 15:34:50.049994 5123 memory_manager.go:356] "RemoveStaleState removing state" podUID="5c82be41-adbe-45fa-a0df-f4884af99184" containerName="extract"
Dec 12 15:34:50 crc kubenswrapper[5123]: I1212 15:34:50.869169 5123 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/service-telemetry-operator-ccf9cd448-kngv6"]
Dec 12 15:34:50 crc kubenswrapper[5123]: I1212 15:34:50.869413 5123 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/service-telemetry-operator-ccf9cd448-kngv6"
Dec 12 15:34:50 crc kubenswrapper[5123]: I1212 15:34:50.874294 5123 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"service-telemetry-operator-dockercfg-n7nkr\""
Dec 12 15:34:50 crc kubenswrapper[5123]: I1212 15:34:50.993431 5123 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bhjq9\" (UniqueName: \"kubernetes.io/projected/8a3b1c17-6c1e-4a89-9149-f800ae13d1d4-kube-api-access-bhjq9\") pod \"service-telemetry-operator-ccf9cd448-kngv6\" (UID: \"8a3b1c17-6c1e-4a89-9149-f800ae13d1d4\") " pod="service-telemetry/service-telemetry-operator-ccf9cd448-kngv6"
Dec 12 15:34:50 crc kubenswrapper[5123]: I1212 15:34:50.993612 5123 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"runner\" (UniqueName: \"kubernetes.io/empty-dir/8a3b1c17-6c1e-4a89-9149-f800ae13d1d4-runner\") pod \"service-telemetry-operator-ccf9cd448-kngv6\" (UID: \"8a3b1c17-6c1e-4a89-9149-f800ae13d1d4\") " pod="service-telemetry/service-telemetry-operator-ccf9cd448-kngv6"
Dec 12 15:34:51 crc kubenswrapper[5123]: I1212 15:34:51.095773 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-bhjq9\" (UniqueName: \"kubernetes.io/projected/8a3b1c17-6c1e-4a89-9149-f800ae13d1d4-kube-api-access-bhjq9\") pod \"service-telemetry-operator-ccf9cd448-kngv6\" (UID: \"8a3b1c17-6c1e-4a89-9149-f800ae13d1d4\") " pod="service-telemetry/service-telemetry-operator-ccf9cd448-kngv6"
Dec 12 15:34:51 crc kubenswrapper[5123]: I1212 15:34:51.096043 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"runner\" (UniqueName: \"kubernetes.io/empty-dir/8a3b1c17-6c1e-4a89-9149-f800ae13d1d4-runner\") pod \"service-telemetry-operator-ccf9cd448-kngv6\" (UID: \"8a3b1c17-6c1e-4a89-9149-f800ae13d1d4\") " pod="service-telemetry/service-telemetry-operator-ccf9cd448-kngv6"
Dec 12 15:34:51 crc kubenswrapper[5123]: I1212 15:34:51.096811 5123 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"runner\" (UniqueName: \"kubernetes.io/empty-dir/8a3b1c17-6c1e-4a89-9149-f800ae13d1d4-runner\") pod \"service-telemetry-operator-ccf9cd448-kngv6\" (UID: \"8a3b1c17-6c1e-4a89-9149-f800ae13d1d4\") " pod="service-telemetry/service-telemetry-operator-ccf9cd448-kngv6"
Dec 12 15:34:51 crc kubenswrapper[5123]: I1212 15:34:51.123057 5123 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-bhjq9\" (UniqueName: \"kubernetes.io/projected/8a3b1c17-6c1e-4a89-9149-f800ae13d1d4-kube-api-access-bhjq9\") pod \"service-telemetry-operator-ccf9cd448-kngv6\" (UID: \"8a3b1c17-6c1e-4a89-9149-f800ae13d1d4\") " pod="service-telemetry/service-telemetry-operator-ccf9cd448-kngv6"
Dec 12 15:34:51 crc kubenswrapper[5123]: I1212 15:34:51.191978 5123 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/service-telemetry-operator-ccf9cd448-kngv6"
Dec 12 15:34:51 crc kubenswrapper[5123]: I1212 15:34:51.733316 5123 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/service-telemetry-operator-ccf9cd448-kngv6"]
Dec 12 15:34:52 crc kubenswrapper[5123]: I1212 15:34:52.770747 5123 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-ccf9cd448-kngv6" event={"ID":"8a3b1c17-6c1e-4a89-9149-f800ae13d1d4","Type":"ContainerStarted","Data":"dcb89fe48bb9f0acb2062f315802fde70ede48c420aab98c0da37c13ac68d304"}
Dec 12 15:34:52 crc kubenswrapper[5123]: I1212 15:34:52.840895 5123 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/interconnect-operator-78b9bd8798-vlmxv"]
Dec 12 15:34:53 crc kubenswrapper[5123]: I1212 15:34:53.765106 5123 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/interconnect-operator-78b9bd8798-vlmxv"
Dec 12 15:34:53 crc kubenswrapper[5123]: I1212 15:34:53.772239 5123 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"interconnect-operator-dockercfg-wt2bd\""
Dec 12 15:34:53 crc kubenswrapper[5123]: I1212 15:34:53.776008 5123 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/interconnect-operator-78b9bd8798-vlmxv"]
Dec 12 15:34:53 crc kubenswrapper[5123]: I1212 15:34:53.776049 5123 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/smart-gateway-operator-5766884c8f-dgrxn"]
Dec 12 15:34:53 crc kubenswrapper[5123]: I1212 15:34:53.797821 5123 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b7zkj\" (UniqueName: \"kubernetes.io/projected/bfdf24ff-9a73-4982-bde6-decb1b7ea57b-kube-api-access-b7zkj\") pod \"interconnect-operator-78b9bd8798-vlmxv\" (UID: \"bfdf24ff-9a73-4982-bde6-decb1b7ea57b\") " pod="service-telemetry/interconnect-operator-78b9bd8798-vlmxv"
Dec 12 15:34:53 crc kubenswrapper[5123]: I1212 15:34:53.899272 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-b7zkj\" (UniqueName: \"kubernetes.io/projected/bfdf24ff-9a73-4982-bde6-decb1b7ea57b-kube-api-access-b7zkj\") pod \"interconnect-operator-78b9bd8798-vlmxv\" (UID: \"bfdf24ff-9a73-4982-bde6-decb1b7ea57b\") " pod="service-telemetry/interconnect-operator-78b9bd8798-vlmxv"
Dec 12 15:34:53 crc kubenswrapper[5123]: I1212 15:34:53.923513 5123 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-b7zkj\" (UniqueName: \"kubernetes.io/projected/bfdf24ff-9a73-4982-bde6-decb1b7ea57b-kube-api-access-b7zkj\") pod \"interconnect-operator-78b9bd8798-vlmxv\" (UID: \"bfdf24ff-9a73-4982-bde6-decb1b7ea57b\") " pod="service-telemetry/interconnect-operator-78b9bd8798-vlmxv"
Dec 12 15:34:54 crc kubenswrapper[5123]: I1212 15:34:54.089243 5123 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/interconnect-operator-78b9bd8798-vlmxv"
Dec 12 15:34:55 crc kubenswrapper[5123]: I1212 15:34:55.237687 5123 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/interconnect-operator-78b9bd8798-vlmxv" event={"ID":"bfdf24ff-9a73-4982-bde6-decb1b7ea57b","Type":"ContainerStarted","Data":"b580f949c32357245148ec738e18c638db9582a65c1e4560d6cc8d5b4c784c65"}
Dec 12 15:34:55 crc kubenswrapper[5123]: I1212 15:34:55.238138 5123 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/smart-gateway-operator-5766884c8f-dgrxn"]
Dec 12 15:34:55 crc kubenswrapper[5123]: I1212 15:34:55.238191 5123 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/interconnect-operator-78b9bd8798-vlmxv"]
Dec 12 15:34:55 crc kubenswrapper[5123]: I1212 15:34:55.237984 5123 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/smart-gateway-operator-5766884c8f-dgrxn"
Dec 12 15:34:55 crc kubenswrapper[5123]: I1212 15:34:55.241364 5123 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"smart-gateway-operator-dockercfg-6sbx9\""
Dec 12 15:34:55 crc kubenswrapper[5123]: I1212 15:34:55.581470 5123 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5dsht\" (UniqueName: \"kubernetes.io/projected/f03bdc18-399e-4ddd-846d-0943a21064d3-kube-api-access-5dsht\") pod \"smart-gateway-operator-5766884c8f-dgrxn\" (UID: \"f03bdc18-399e-4ddd-846d-0943a21064d3\") " pod="service-telemetry/smart-gateway-operator-5766884c8f-dgrxn"
Dec 12 15:34:55 crc kubenswrapper[5123]: I1212 15:34:55.582483 5123 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"runner\" (UniqueName: \"kubernetes.io/empty-dir/f03bdc18-399e-4ddd-846d-0943a21064d3-runner\") pod \"smart-gateway-operator-5766884c8f-dgrxn\" (UID: \"f03bdc18-399e-4ddd-846d-0943a21064d3\") " pod="service-telemetry/smart-gateway-operator-5766884c8f-dgrxn"
Dec 12 15:34:55 crc kubenswrapper[5123]: I1212 15:34:55.683684 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-5dsht\" (UniqueName: \"kubernetes.io/projected/f03bdc18-399e-4ddd-846d-0943a21064d3-kube-api-access-5dsht\") pod \"smart-gateway-operator-5766884c8f-dgrxn\" (UID: \"f03bdc18-399e-4ddd-846d-0943a21064d3\") " pod="service-telemetry/smart-gateway-operator-5766884c8f-dgrxn"
Dec 12 15:34:55 crc kubenswrapper[5123]: I1212 15:34:55.685853 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"runner\" (UniqueName: \"kubernetes.io/empty-dir/f03bdc18-399e-4ddd-846d-0943a21064d3-runner\") pod \"smart-gateway-operator-5766884c8f-dgrxn\" (UID: \"f03bdc18-399e-4ddd-846d-0943a21064d3\") " pod="service-telemetry/smart-gateway-operator-5766884c8f-dgrxn"
Dec 12 15:34:55 crc kubenswrapper[5123]: I1212 15:34:55.687202 5123 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"runner\" (UniqueName: \"kubernetes.io/empty-dir/f03bdc18-399e-4ddd-846d-0943a21064d3-runner\") pod \"smart-gateway-operator-5766884c8f-dgrxn\" (UID: \"f03bdc18-399e-4ddd-846d-0943a21064d3\") " pod="service-telemetry/smart-gateway-operator-5766884c8f-dgrxn"
Dec 12 15:34:55 crc kubenswrapper[5123]: I1212 15:34:55.736856 5123 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-5dsht\" (UniqueName: \"kubernetes.io/projected/f03bdc18-399e-4ddd-846d-0943a21064d3-kube-api-access-5dsht\") pod \"smart-gateway-operator-5766884c8f-dgrxn\" (UID: \"f03bdc18-399e-4ddd-846d-0943a21064d3\") " pod="service-telemetry/smart-gateway-operator-5766884c8f-dgrxn"
Dec 12 15:34:55 crc kubenswrapper[5123]: I1212 15:34:55.860034 5123 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/smart-gateway-operator-5766884c8f-dgrxn"
Dec 12 15:34:56 crc kubenswrapper[5123]: I1212 15:34:56.708099 5123 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/smart-gateway-operator-5766884c8f-dgrxn"]
Dec 12 15:34:56 crc kubenswrapper[5123]: I1212 15:34:56.816701 5123 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/smart-gateway-operator-5766884c8f-dgrxn" event={"ID":"f03bdc18-399e-4ddd-846d-0943a21064d3","Type":"ContainerStarted","Data":"f85a6055ae9e0dca41d3d9ef6d371d44f6ab9c2eb16cb020b669f4a6f7711167"}
Dec 12 15:35:00 crc kubenswrapper[5123]: I1212 15:35:00.902508 5123 patch_prober.go:28] interesting pod/machine-config-daemon-cs4j6 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Dec 12 15:35:00 crc kubenswrapper[5123]: I1212 15:35:00.907535 5123 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-cs4j6" podUID="cc1dd55e-ab31-4be3-acf1-ef01a53c1bb4" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Dec 12 15:35:00 crc kubenswrapper[5123]: I1212 15:35:00.907623 5123 kubelet.go:2658] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-cs4j6"
Dec 12 15:35:00 crc kubenswrapper[5123]: I1212 15:35:00.908471 5123 kuberuntime_manager.go:1107] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"b8b31bee9a490187d699071ec78132456a8a603d815d3195aabc642b4b346b89"} pod="openshift-machine-config-operator/machine-config-daemon-cs4j6" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Dec 12 15:35:00 crc kubenswrapper[5123]: I1212 15:35:00.908563 5123 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-cs4j6" podUID="cc1dd55e-ab31-4be3-acf1-ef01a53c1bb4" containerName="machine-config-daemon" containerID="cri-o://b8b31bee9a490187d699071ec78132456a8a603d815d3195aabc642b4b346b89" gracePeriod=600
Dec 12 15:35:01 crc kubenswrapper[5123]: I1212 15:35:01.065885 5123 provider.go:93] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Dec 12 15:35:01 crc kubenswrapper[5123]: I1212 15:35:01.928903 5123 generic.go:358] "Generic (PLEG): container finished" podID="cc1dd55e-ab31-4be3-acf1-ef01a53c1bb4" containerID="b8b31bee9a490187d699071ec78132456a8a603d815d3195aabc642b4b346b89" exitCode=0
Dec 12 15:35:01 crc kubenswrapper[5123]: I1212 15:35:01.930015 5123 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-cs4j6" event={"ID":"cc1dd55e-ab31-4be3-acf1-ef01a53c1bb4","Type":"ContainerDied","Data":"b8b31bee9a490187d699071ec78132456a8a603d815d3195aabc642b4b346b89"}
Dec 12 15:35:01 crc kubenswrapper[5123]: I1212 15:35:01.930092 5123 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-cs4j6" event={"ID":"cc1dd55e-ab31-4be3-acf1-ef01a53c1bb4","Type":"ContainerStarted","Data":"c2cf9081a67059ac5a079b8f43fd2aed11cbd262496baea709c4ede2e91cdc0e"}
Dec 12 15:35:01 crc kubenswrapper[5123]: I1212 15:35:01.930124 5123 scope.go:117] "RemoveContainer" containerID="3606974b214ad9834bbb1da3a0fabe6877d1e0ef7f439301b0bf2a0adb538ba5"
Dec 12 15:35:12 crc kubenswrapper[5123]: I1212 15:35:12.556669 5123 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt"
Dec 12 15:35:48 crc kubenswrapper[5123]: I1212 15:35:48.357668 5123 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/interconnect-operator-78b9bd8798-vlmxv" event={"ID":"bfdf24ff-9a73-4982-bde6-decb1b7ea57b","Type":"ContainerStarted","Data":"5bc752d85de57e5cc11d6a9a877366d0944a24e8fbf64b01fb0bd2bac200f5d3"}
Dec 12 15:35:48 crc kubenswrapper[5123]: I1212 15:35:48.361200 5123 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-ccf9cd448-kngv6" event={"ID":"8a3b1c17-6c1e-4a89-9149-f800ae13d1d4","Type":"ContainerStarted","Data":"24bca4cf5488a92af7a6650f9ce5cce948be8c7d0158926ed3a4732d08c66798"}
Dec 12 15:35:48 crc kubenswrapper[5123]: I1212 15:35:48.363317 5123 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/smart-gateway-operator-5766884c8f-dgrxn" event={"ID":"f03bdc18-399e-4ddd-846d-0943a21064d3","Type":"ContainerStarted","Data":"dd87486f0fb0e7aae8c5fc60a8c7504ac9114c120f53181a547f73eb8b2e39ac"}
Dec 12 15:35:48 crc kubenswrapper[5123]: I1212 15:35:48.384017 5123 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/interconnect-operator-78b9bd8798-vlmxv" podStartSLOduration=15.477974281 podStartE2EDuration="56.38398792s" podCreationTimestamp="2025-12-12 15:34:52 +0000 UTC" firstStartedPulling="2025-12-12 15:34:54.529820438 +0000 UTC m=+923.339772949" lastFinishedPulling="2025-12-12 15:35:35.435834077 +0000 UTC m=+964.245786588" observedRunningTime="2025-12-12 15:35:48.383860666 +0000 UTC m=+977.193813177" watchObservedRunningTime="2025-12-12 15:35:48.38398792 +0000 UTC m=+977.193940431"
Dec 12 15:35:48 crc kubenswrapper[5123]: I1212 15:35:48.416755 5123 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/service-telemetry-operator-ccf9cd448-kngv6" podStartSLOduration=2.230036905 podStartE2EDuration="58.41672863s" podCreationTimestamp="2025-12-12 15:34:50 +0000 UTC" firstStartedPulling="2025-12-12 15:34:51.700461662 +0000 UTC m=+920.510414173" lastFinishedPulling="2025-12-12 15:35:47.887153387 +0000 UTC m=+976.697105898" observedRunningTime="2025-12-12 15:35:48.412049732 +0000 UTC m=+977.222002253" watchObservedRunningTime="2025-12-12 15:35:48.41672863 +0000 UTC m=+977.226681141"
Dec 12 15:35:48 crc kubenswrapper[5123]: I1212 15:35:48.447137 5123 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/smart-gateway-operator-5766884c8f-dgrxn" podStartSLOduration=4.274350484 podStartE2EDuration="55.447109546s" podCreationTimestamp="2025-12-12 15:34:53 +0000 UTC" firstStartedPulling="2025-12-12 15:34:56.717419181 +0000 UTC m=+925.527371692" lastFinishedPulling="2025-12-12 15:35:47.890178243 +0000 UTC m=+976.700130754" observedRunningTime="2025-12-12 15:35:48.442606003 +0000 UTC m=+977.252558534" watchObservedRunningTime="2025-12-12 15:35:48.447109546 +0000 UTC m=+977.257062057"
Dec 12 15:36:05 crc kubenswrapper[5123]: E1212 15:36:05.562396 5123 certificate_manager.go:613] "Certificate request was not signed" err="timed out waiting for the condition" logger="kubernetes.io/kubelet-serving.UnhandledError"
Dec 12 15:36:07 crc kubenswrapper[5123]: I1212 15:36:07.744441 5123 certificate_manager.go:566] "Rotating certificates" logger="kubernetes.io/kubelet-serving"
Dec 12 15:36:07 crc kubenswrapper[5123]: I1212 15:36:07.757682 5123 reflector.go:430] "Caches populated" logger="kubernetes.io/kubelet-serving" type="*v1.CertificateSigningRequest" reflector="k8s.io/client-go/tools/watch/informerwatcher.go:162"
Dec 12 15:36:07 crc kubenswrapper[5123]: I1212 15:36:07.794937 5123 ???:1] "http: TLS handshake error from 192.168.126.11:60326: no serving certificate available for the kubelet"
Dec 12 15:36:07 crc kubenswrapper[5123]: I1212 15:36:07.892196 5123 ???:1] "http: TLS handshake error from 192.168.126.11:60328: no serving certificate available for the kubelet"
Dec 12 15:36:07 crc kubenswrapper[5123]: I1212 15:36:07.933462 5123 ???:1] "http: TLS handshake error from 192.168.126.11:60344: no serving certificate available for the kubelet"
Dec 12 15:36:07 crc kubenswrapper[5123]: I1212 15:36:07.980581 5123 ???:1] "http: TLS handshake error from 192.168.126.11:60356: no serving certificate available for the kubelet"
Dec 12 15:36:08 crc kubenswrapper[5123]: I1212 15:36:08.047464 5123 ???:1] "http: TLS handshake error from 192.168.126.11:60360: no serving certificate available for the kubelet"
Dec 12 15:36:08 crc kubenswrapper[5123]: I1212 15:36:08.158585 5123 ???:1] "http: TLS handshake error from 192.168.126.11:60364: no serving certificate available for the kubelet"
Dec 12 15:36:08 crc kubenswrapper[5123]: I1212 15:36:08.348994 5123 ???:1] "http: TLS handshake error from 192.168.126.11:60372: no serving certificate available for the kubelet"
Dec 12 15:36:08 crc kubenswrapper[5123]: I1212 15:36:08.699891 5123 ???:1] "http: TLS handshake error from 192.168.126.11:60384: no serving certificate available for the kubelet"
Dec 12 15:36:09 crc kubenswrapper[5123]: I1212 15:36:09.367674 5123 ???:1] "http: TLS handshake error from 192.168.126.11:60394: no serving certificate available for the kubelet"
Dec 12 15:36:10 crc kubenswrapper[5123]: I1212 15:36:10.682289 5123 ???:1] "http: TLS handshake error from 192.168.126.11:60398: no serving certificate available for the kubelet"
Dec 12 15:36:13 crc kubenswrapper[5123]: I1212 15:36:13.268602 5123 ???:1] "http: TLS handshake error from 192.168.126.11:60408: no serving certificate available for the kubelet"
Dec 12 15:36:16 crc kubenswrapper[5123]: I1212 15:36:16.100170 5123 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/default-interconnect-55bf8d5cb-cdkq7"]
Dec 12 15:36:17 crc kubenswrapper[5123]: I1212 15:36:17.214427 5123 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/default-interconnect-55bf8d5cb-cdkq7"]
Dec 12 15:36:17 crc kubenswrapper[5123]: I1212 15:36:17.214624 5123 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/default-interconnect-55bf8d5cb-cdkq7"
Dec 12 15:36:17 crc kubenswrapper[5123]: I1212 15:36:17.218002 5123 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"default-interconnect-inter-router-credentials\""
Dec 12 15:36:17 crc kubenswrapper[5123]: I1212 15:36:17.218008 5123 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"default-interconnect-dockercfg-gsb2j\""
Dec 12 15:36:17 crc kubenswrapper[5123]: I1212 15:36:17.218117 5123 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"default-interconnect-openstack-credentials\""
Dec 12 15:36:17 crc kubenswrapper[5123]: I1212 15:36:17.218137 5123 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"default-interconnect-sasl-config\""
Dec 12 15:36:17 crc kubenswrapper[5123]: I1212 15:36:17.218528 5123 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"default-interconnect-users\""
Dec 12 15:36:17 crc kubenswrapper[5123]: I1212 15:36:17.219425 5123 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"default-interconnect-inter-router-ca\""
Dec 12 15:36:17 crc kubenswrapper[5123]: I1212 15:36:17.220704 5123 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"default-interconnect-openstack-ca\""
Dec 12 15:36:17 crc kubenswrapper[5123]: I1212 15:36:17.309793 5123 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sasl-config\" (UniqueName: \"kubernetes.io/configmap/1409556c-b0dc-4a13-803c-aa74f048c7a2-sasl-config\") pod \"default-interconnect-55bf8d5cb-cdkq7\" (UID: \"1409556c-b0dc-4a13-803c-aa74f048c7a2\") " pod="service-telemetry/default-interconnect-55bf8d5cb-cdkq7"
Dec 12 15:36:17 crc kubenswrapper[5123]: I1212 15:36:17.310196 5123 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-interconnect-inter-router-credentials\" (UniqueName: \"kubernetes.io/secret/1409556c-b0dc-4a13-803c-aa74f048c7a2-default-interconnect-inter-router-credentials\") pod \"default-interconnect-55bf8d5cb-cdkq7\" (UID: \"1409556c-b0dc-4a13-803c-aa74f048c7a2\") " pod="service-telemetry/default-interconnect-55bf8d5cb-cdkq7"
Dec 12 15:36:17 crc kubenswrapper[5123]: I1212 15:36:17.310364 5123 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-interconnect-openstack-credentials\" (UniqueName: \"kubernetes.io/secret/1409556c-b0dc-4a13-803c-aa74f048c7a2-default-interconnect-openstack-credentials\") pod \"default-interconnect-55bf8d5cb-cdkq7\" (UID: \"1409556c-b0dc-4a13-803c-aa74f048c7a2\") " pod="service-telemetry/default-interconnect-55bf8d5cb-cdkq7"
Dec 12 15:36:17 crc kubenswrapper[5123]: I1212 15:36:17.310537 5123 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sasl-users\" (UniqueName: \"kubernetes.io/secret/1409556c-b0dc-4a13-803c-aa74f048c7a2-sasl-users\") pod \"default-interconnect-55bf8d5cb-cdkq7\" (UID: \"1409556c-b0dc-4a13-803c-aa74f048c7a2\") " pod="service-telemetry/default-interconnect-55bf8d5cb-cdkq7"
Dec 12 15:36:17 crc kubenswrapper[5123]: I1212 15:36:17.310770 5123 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-interconnect-openstack-ca\" (UniqueName: \"kubernetes.io/secret/1409556c-b0dc-4a13-803c-aa74f048c7a2-default-interconnect-openstack-ca\") pod \"default-interconnect-55bf8d5cb-cdkq7\" (UID: \"1409556c-b0dc-4a13-803c-aa74f048c7a2\") " pod="service-telemetry/default-interconnect-55bf8d5cb-cdkq7"
Dec 12 15:36:17 crc kubenswrapper[5123]: I1212 15:36:17.311072 5123 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-interconnect-inter-router-ca\" (UniqueName: \"kubernetes.io/secret/1409556c-b0dc-4a13-803c-aa74f048c7a2-default-interconnect-inter-router-ca\") pod \"default-interconnect-55bf8d5cb-cdkq7\" (UID: \"1409556c-b0dc-4a13-803c-aa74f048c7a2\") " pod="service-telemetry/default-interconnect-55bf8d5cb-cdkq7"
Dec 12 15:36:17 crc kubenswrapper[5123]: I1212 15:36:17.311284 5123 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gtds5\" (UniqueName: \"kubernetes.io/projected/1409556c-b0dc-4a13-803c-aa74f048c7a2-kube-api-access-gtds5\") pod \"default-interconnect-55bf8d5cb-cdkq7\" (UID: \"1409556c-b0dc-4a13-803c-aa74f048c7a2\") " pod="service-telemetry/default-interconnect-55bf8d5cb-cdkq7"
Dec 12 15:36:17 crc kubenswrapper[5123]: I1212 15:36:17.412561 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"default-interconnect-inter-router-ca\" (UniqueName: \"kubernetes.io/secret/1409556c-b0dc-4a13-803c-aa74f048c7a2-default-interconnect-inter-router-ca\") pod \"default-interconnect-55bf8d5cb-cdkq7\" (UID: \"1409556c-b0dc-4a13-803c-aa74f048c7a2\") " pod="service-telemetry/default-interconnect-55bf8d5cb-cdkq7"
Dec 12 15:36:17 crc kubenswrapper[5123]: I1212 15:36:17.412637 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-gtds5\" (UniqueName: \"kubernetes.io/projected/1409556c-b0dc-4a13-803c-aa74f048c7a2-kube-api-access-gtds5\") pod \"default-interconnect-55bf8d5cb-cdkq7\" (UID: \"1409556c-b0dc-4a13-803c-aa74f048c7a2\") " pod="service-telemetry/default-interconnect-55bf8d5cb-cdkq7"
Dec 12 15:36:17 crc kubenswrapper[5123]: I1212 15:36:17.412682 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"sasl-config\" (UniqueName: \"kubernetes.io/configmap/1409556c-b0dc-4a13-803c-aa74f048c7a2-sasl-config\") pod \"default-interconnect-55bf8d5cb-cdkq7\" (UID: \"1409556c-b0dc-4a13-803c-aa74f048c7a2\") " pod="service-telemetry/default-interconnect-55bf8d5cb-cdkq7"
Dec 12 15:36:17 crc kubenswrapper[5123]: I1212 15:36:17.412736 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"default-interconnect-inter-router-credentials\" (UniqueName: \"kubernetes.io/secret/1409556c-b0dc-4a13-803c-aa74f048c7a2-default-interconnect-inter-router-credentials\") pod \"default-interconnect-55bf8d5cb-cdkq7\" (UID: \"1409556c-b0dc-4a13-803c-aa74f048c7a2\") " pod="service-telemetry/default-interconnect-55bf8d5cb-cdkq7"
Dec 12 15:36:17 crc kubenswrapper[5123]: I1212 15:36:17.412774 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"default-interconnect-openstack-credentials\" (UniqueName: \"kubernetes.io/secret/1409556c-b0dc-4a13-803c-aa74f048c7a2-default-interconnect-openstack-credentials\") pod \"default-interconnect-55bf8d5cb-cdkq7\" (UID: \"1409556c-b0dc-4a13-803c-aa74f048c7a2\") " pod="service-telemetry/default-interconnect-55bf8d5cb-cdkq7"
Dec 12 15:36:17 crc kubenswrapper[5123]: I1212 15:36:17.412849 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"sasl-users\" (UniqueName: \"kubernetes.io/secret/1409556c-b0dc-4a13-803c-aa74f048c7a2-sasl-users\") pod \"default-interconnect-55bf8d5cb-cdkq7\" (UID: \"1409556c-b0dc-4a13-803c-aa74f048c7a2\") " pod="service-telemetry/default-interconnect-55bf8d5cb-cdkq7"
Dec 12 15:36:17 crc kubenswrapper[5123]: I1212 15:36:17.412914 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"default-interconnect-openstack-ca\" (UniqueName: \"kubernetes.io/secret/1409556c-b0dc-4a13-803c-aa74f048c7a2-default-interconnect-openstack-ca\") pod \"default-interconnect-55bf8d5cb-cdkq7\" (UID: \"1409556c-b0dc-4a13-803c-aa74f048c7a2\") " pod="service-telemetry/default-interconnect-55bf8d5cb-cdkq7"
Dec 12 15:36:17 crc kubenswrapper[5123]: I1212 15:36:17.414941 5123 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"sasl-config\" (UniqueName: \"kubernetes.io/configmap/1409556c-b0dc-4a13-803c-aa74f048c7a2-sasl-config\") pod \"default-interconnect-55bf8d5cb-cdkq7\" (UID: \"1409556c-b0dc-4a13-803c-aa74f048c7a2\") " pod="service-telemetry/default-interconnect-55bf8d5cb-cdkq7"
Dec 12 15:36:17 crc kubenswrapper[5123]: I1212 15:36:17.423796 5123 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"default-interconnect-openstack-ca\" (UniqueName: \"kubernetes.io/secret/1409556c-b0dc-4a13-803c-aa74f048c7a2-default-interconnect-openstack-ca\") pod \"default-interconnect-55bf8d5cb-cdkq7\" (UID: \"1409556c-b0dc-4a13-803c-aa74f048c7a2\") " pod="service-telemetry/default-interconnect-55bf8d5cb-cdkq7"
Dec 12 15:36:17 crc kubenswrapper[5123]: I1212 15:36:17.425372 5123 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"default-interconnect-openstack-credentials\" (UniqueName: \"kubernetes.io/secret/1409556c-b0dc-4a13-803c-aa74f048c7a2-default-interconnect-openstack-credentials\") pod \"default-interconnect-55bf8d5cb-cdkq7\" (UID: \"1409556c-b0dc-4a13-803c-aa74f048c7a2\") " pod="service-telemetry/default-interconnect-55bf8d5cb-cdkq7"
Dec 12 15:36:17 crc kubenswrapper[5123]: I1212 15:36:17.427153 5123 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"default-interconnect-inter-router-credentials\" (UniqueName: \"kubernetes.io/secret/1409556c-b0dc-4a13-803c-aa74f048c7a2-default-interconnect-inter-router-credentials\") pod \"default-interconnect-55bf8d5cb-cdkq7\" (UID: \"1409556c-b0dc-4a13-803c-aa74f048c7a2\") " pod="service-telemetry/default-interconnect-55bf8d5cb-cdkq7"
Dec 12 15:36:17 crc kubenswrapper[5123]: I1212 15:36:17.427523 5123 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"default-interconnect-inter-router-ca\" (UniqueName: \"kubernetes.io/secret/1409556c-b0dc-4a13-803c-aa74f048c7a2-default-interconnect-inter-router-ca\") pod \"default-interconnect-55bf8d5cb-cdkq7\" (UID: \"1409556c-b0dc-4a13-803c-aa74f048c7a2\") " pod="service-telemetry/default-interconnect-55bf8d5cb-cdkq7"
Dec 12 15:36:17 crc kubenswrapper[5123]: I1212 15:36:17.428555 5123 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"sasl-users\" (UniqueName: \"kubernetes.io/secret/1409556c-b0dc-4a13-803c-aa74f048c7a2-sasl-users\") pod \"default-interconnect-55bf8d5cb-cdkq7\" (UID: \"1409556c-b0dc-4a13-803c-aa74f048c7a2\") " pod="service-telemetry/default-interconnect-55bf8d5cb-cdkq7"
Dec 12 15:36:17 crc kubenswrapper[5123]: I1212 15:36:17.438238 5123 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-gtds5\" (UniqueName: \"kubernetes.io/projected/1409556c-b0dc-4a13-803c-aa74f048c7a2-kube-api-access-gtds5\") pod \"default-interconnect-55bf8d5cb-cdkq7\" (UID: \"1409556c-b0dc-4a13-803c-aa74f048c7a2\") " pod="service-telemetry/default-interconnect-55bf8d5cb-cdkq7"
Dec 12
15:36:17 crc kubenswrapper[5123]: I1212 15:36:17.548620 5123 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/default-interconnect-55bf8d5cb-cdkq7" Dec 12 15:36:18 crc kubenswrapper[5123]: I1212 15:36:18.424552 5123 ???:1] "http: TLS handshake error from 192.168.126.11:37432: no serving certificate available for the kubelet" Dec 12 15:36:18 crc kubenswrapper[5123]: I1212 15:36:18.439604 5123 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/default-interconnect-55bf8d5cb-cdkq7"] Dec 12 15:36:18 crc kubenswrapper[5123]: I1212 15:36:18.949033 5123 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-interconnect-55bf8d5cb-cdkq7" event={"ID":"1409556c-b0dc-4a13-803c-aa74f048c7a2","Type":"ContainerStarted","Data":"27cc93e95759c5f4dae19a679f87c2fe8de663aa5921311d468c538f541ea954"} Dec 12 15:36:26 crc kubenswrapper[5123]: I1212 15:36:26.015880 5123 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-interconnect-55bf8d5cb-cdkq7" event={"ID":"1409556c-b0dc-4a13-803c-aa74f048c7a2","Type":"ContainerStarted","Data":"ce52f4548cc859977688e4ed3d183c58dbc2319fe85bbea75ab9d50ba7a8fc19"} Dec 12 15:36:26 crc kubenswrapper[5123]: I1212 15:36:26.047873 5123 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/default-interconnect-55bf8d5cb-cdkq7" podStartSLOduration=3.318672449 podStartE2EDuration="10.047833198s" podCreationTimestamp="2025-12-12 15:36:16 +0000 UTC" firstStartedPulling="2025-12-12 15:36:18.453897391 +0000 UTC m=+1007.263849912" lastFinishedPulling="2025-12-12 15:36:25.18305816 +0000 UTC m=+1013.993010661" observedRunningTime="2025-12-12 15:36:26.037691905 +0000 UTC m=+1014.847644436" watchObservedRunningTime="2025-12-12 15:36:26.047833198 +0000 UTC m=+1014.857785709" Dec 12 15:36:28 crc kubenswrapper[5123]: I1212 15:36:28.696526 5123 ???:1] "http: TLS handshake error from 192.168.126.11:53866: no serving 
certificate available for the kubelet" Dec 12 15:36:30 crc kubenswrapper[5123]: I1212 15:36:30.434637 5123 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/prometheus-default-0"] Dec 12 15:36:30 crc kubenswrapper[5123]: I1212 15:36:30.504664 5123 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/prometheus-default-0"] Dec 12 15:36:30 crc kubenswrapper[5123]: I1212 15:36:30.504899 5123 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/prometheus-default-0" Dec 12 15:36:30 crc kubenswrapper[5123]: I1212 15:36:30.511019 5123 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"default-session-secret\"" Dec 12 15:36:30 crc kubenswrapper[5123]: I1212 15:36:30.511353 5123 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"prometheus-default\"" Dec 12 15:36:30 crc kubenswrapper[5123]: I1212 15:36:30.511528 5123 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"default-prometheus-proxy-tls\"" Dec 12 15:36:30 crc kubenswrapper[5123]: I1212 15:36:30.511993 5123 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"prometheus-stf-dockercfg-c5cl9\"" Dec 12 15:36:30 crc kubenswrapper[5123]: I1212 15:36:30.512189 5123 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"prometheus-default-tls-assets-0\"" Dec 12 15:36:30 crc kubenswrapper[5123]: I1212 15:36:30.512469 5123 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"serving-certs-ca-bundle\"" Dec 12 15:36:30 crc kubenswrapper[5123]: I1212 15:36:30.512698 5123 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"prometheus-default-rulefiles-0\"" Dec 12 15:36:30 crc kubenswrapper[5123]: I1212 15:36:30.513080 5123 
reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"prometheus-default-web-config\"" Dec 12 15:36:30 crc kubenswrapper[5123]: I1212 15:36:30.702308 5123 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-default-session-secret\" (UniqueName: \"kubernetes.io/secret/1a417a0f-c2cc-4dac-a05a-9e00b69fe8cc-secret-default-session-secret\") pod \"prometheus-default-0\" (UID: \"1a417a0f-c2cc-4dac-a05a-9e00b69fe8cc\") " pod="service-telemetry/prometheus-default-0" Dec 12 15:36:30 crc kubenswrapper[5123]: I1212 15:36:30.702485 5123 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/1a417a0f-c2cc-4dac-a05a-9e00b69fe8cc-tls-assets\") pod \"prometheus-default-0\" (UID: \"1a417a0f-c2cc-4dac-a05a-9e00b69fe8cc\") " pod="service-telemetry/prometheus-default-0" Dec 12 15:36:30 crc kubenswrapper[5123]: I1212 15:36:30.702636 5123 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/1a417a0f-c2cc-4dac-a05a-9e00b69fe8cc-config\") pod \"prometheus-default-0\" (UID: \"1a417a0f-c2cc-4dac-a05a-9e00b69fe8cc\") " pod="service-telemetry/prometheus-default-0" Dec 12 15:36:30 crc kubenswrapper[5123]: I1212 15:36:30.702662 5123 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"configmap-serving-certs-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1a417a0f-c2cc-4dac-a05a-9e00b69fe8cc-configmap-serving-certs-ca-bundle\") pod \"prometheus-default-0\" (UID: \"1a417a0f-c2cc-4dac-a05a-9e00b69fe8cc\") " pod="service-telemetry/prometheus-default-0" Dec 12 15:36:30 crc kubenswrapper[5123]: I1212 15:36:30.702789 5123 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-4ff74bc9-42d1-44e3-bb30-c4ad6150ce34\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-4ff74bc9-42d1-44e3-bb30-c4ad6150ce34\") pod \"prometheus-default-0\" (UID: \"1a417a0f-c2cc-4dac-a05a-9e00b69fe8cc\") " pod="service-telemetry/prometheus-default-0" Dec 12 15:36:30 crc kubenswrapper[5123]: I1212 15:36:30.702847 5123 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/1a417a0f-c2cc-4dac-a05a-9e00b69fe8cc-web-config\") pod \"prometheus-default-0\" (UID: \"1a417a0f-c2cc-4dac-a05a-9e00b69fe8cc\") " pod="service-telemetry/prometheus-default-0" Dec 12 15:36:30 crc kubenswrapper[5123]: I1212 15:36:30.702908 5123 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/1a417a0f-c2cc-4dac-a05a-9e00b69fe8cc-config-out\") pod \"prometheus-default-0\" (UID: \"1a417a0f-c2cc-4dac-a05a-9e00b69fe8cc\") " pod="service-telemetry/prometheus-default-0" Dec 12 15:36:30 crc kubenswrapper[5123]: I1212 15:36:30.702937 5123 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-default-prometheus-proxy-tls\" (UniqueName: \"kubernetes.io/secret/1a417a0f-c2cc-4dac-a05a-9e00b69fe8cc-secret-default-prometheus-proxy-tls\") pod \"prometheus-default-0\" (UID: \"1a417a0f-c2cc-4dac-a05a-9e00b69fe8cc\") " pod="service-telemetry/prometheus-default-0" Dec 12 15:36:30 crc kubenswrapper[5123]: I1212 15:36:30.702988 5123 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-default-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/1a417a0f-c2cc-4dac-a05a-9e00b69fe8cc-prometheus-default-rulefiles-0\") pod \"prometheus-default-0\" (UID: \"1a417a0f-c2cc-4dac-a05a-9e00b69fe8cc\") " pod="service-telemetry/prometheus-default-0" Dec 12 15:36:30 crc kubenswrapper[5123]: I1212 15:36:30.703104 5123 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-skzkb\" (UniqueName: \"kubernetes.io/projected/1a417a0f-c2cc-4dac-a05a-9e00b69fe8cc-kube-api-access-skzkb\") pod \"prometheus-default-0\" (UID: \"1a417a0f-c2cc-4dac-a05a-9e00b69fe8cc\") " pod="service-telemetry/prometheus-default-0" Dec 12 15:36:30 crc kubenswrapper[5123]: I1212 15:36:30.805095 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"secret-default-session-secret\" (UniqueName: \"kubernetes.io/secret/1a417a0f-c2cc-4dac-a05a-9e00b69fe8cc-secret-default-session-secret\") pod \"prometheus-default-0\" (UID: \"1a417a0f-c2cc-4dac-a05a-9e00b69fe8cc\") " pod="service-telemetry/prometheus-default-0" Dec 12 15:36:30 crc kubenswrapper[5123]: I1212 15:36:30.805663 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/1a417a0f-c2cc-4dac-a05a-9e00b69fe8cc-tls-assets\") pod \"prometheus-default-0\" (UID: \"1a417a0f-c2cc-4dac-a05a-9e00b69fe8cc\") " pod="service-telemetry/prometheus-default-0" Dec 12 15:36:30 crc kubenswrapper[5123]: I1212 15:36:30.805802 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/1a417a0f-c2cc-4dac-a05a-9e00b69fe8cc-config\") pod \"prometheus-default-0\" (UID: \"1a417a0f-c2cc-4dac-a05a-9e00b69fe8cc\") " pod="service-telemetry/prometheus-default-0" Dec 12 15:36:30 crc kubenswrapper[5123]: I1212 15:36:30.805842 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"configmap-serving-certs-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1a417a0f-c2cc-4dac-a05a-9e00b69fe8cc-configmap-serving-certs-ca-bundle\") pod \"prometheus-default-0\" (UID: \"1a417a0f-c2cc-4dac-a05a-9e00b69fe8cc\") " pod="service-telemetry/prometheus-default-0" Dec 12 15:36:30 crc kubenswrapper[5123]: I1212 15:36:30.805978 5123 reconciler_common.go:224] 
"operationExecutor.MountVolume started for volume \"pvc-4ff74bc9-42d1-44e3-bb30-c4ad6150ce34\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-4ff74bc9-42d1-44e3-bb30-c4ad6150ce34\") pod \"prometheus-default-0\" (UID: \"1a417a0f-c2cc-4dac-a05a-9e00b69fe8cc\") " pod="service-telemetry/prometheus-default-0" Dec 12 15:36:30 crc kubenswrapper[5123]: I1212 15:36:30.806071 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/1a417a0f-c2cc-4dac-a05a-9e00b69fe8cc-web-config\") pod \"prometheus-default-0\" (UID: \"1a417a0f-c2cc-4dac-a05a-9e00b69fe8cc\") " pod="service-telemetry/prometheus-default-0" Dec 12 15:36:30 crc kubenswrapper[5123]: I1212 15:36:30.806156 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/1a417a0f-c2cc-4dac-a05a-9e00b69fe8cc-config-out\") pod \"prometheus-default-0\" (UID: \"1a417a0f-c2cc-4dac-a05a-9e00b69fe8cc\") " pod="service-telemetry/prometheus-default-0" Dec 12 15:36:30 crc kubenswrapper[5123]: I1212 15:36:30.806206 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"secret-default-prometheus-proxy-tls\" (UniqueName: \"kubernetes.io/secret/1a417a0f-c2cc-4dac-a05a-9e00b69fe8cc-secret-default-prometheus-proxy-tls\") pod \"prometheus-default-0\" (UID: \"1a417a0f-c2cc-4dac-a05a-9e00b69fe8cc\") " pod="service-telemetry/prometheus-default-0" Dec 12 15:36:30 crc kubenswrapper[5123]: I1212 15:36:30.806262 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"prometheus-default-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/1a417a0f-c2cc-4dac-a05a-9e00b69fe8cc-prometheus-default-rulefiles-0\") pod \"prometheus-default-0\" (UID: \"1a417a0f-c2cc-4dac-a05a-9e00b69fe8cc\") " pod="service-telemetry/prometheus-default-0" Dec 12 15:36:30 crc kubenswrapper[5123]: I1212 15:36:30.806360 5123 reconciler_common.go:224] 
"operationExecutor.MountVolume started for volume \"kube-api-access-skzkb\" (UniqueName: \"kubernetes.io/projected/1a417a0f-c2cc-4dac-a05a-9e00b69fe8cc-kube-api-access-skzkb\") pod \"prometheus-default-0\" (UID: \"1a417a0f-c2cc-4dac-a05a-9e00b69fe8cc\") " pod="service-telemetry/prometheus-default-0" Dec 12 15:36:30 crc kubenswrapper[5123]: E1212 15:36:30.806527 5123 secret.go:189] Couldn't get secret service-telemetry/default-prometheus-proxy-tls: secret "default-prometheus-proxy-tls" not found Dec 12 15:36:30 crc kubenswrapper[5123]: E1212 15:36:30.806611 5123 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1a417a0f-c2cc-4dac-a05a-9e00b69fe8cc-secret-default-prometheus-proxy-tls podName:1a417a0f-c2cc-4dac-a05a-9e00b69fe8cc nodeName:}" failed. No retries permitted until 2025-12-12 15:36:31.306584613 +0000 UTC m=+1020.116537124 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "secret-default-prometheus-proxy-tls" (UniqueName: "kubernetes.io/secret/1a417a0f-c2cc-4dac-a05a-9e00b69fe8cc-secret-default-prometheus-proxy-tls") pod "prometheus-default-0" (UID: "1a417a0f-c2cc-4dac-a05a-9e00b69fe8cc") : secret "default-prometheus-proxy-tls" not found Dec 12 15:36:30 crc kubenswrapper[5123]: I1212 15:36:30.807108 5123 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"configmap-serving-certs-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1a417a0f-c2cc-4dac-a05a-9e00b69fe8cc-configmap-serving-certs-ca-bundle\") pod \"prometheus-default-0\" (UID: \"1a417a0f-c2cc-4dac-a05a-9e00b69fe8cc\") " pod="service-telemetry/prometheus-default-0" Dec 12 15:36:30 crc kubenswrapper[5123]: I1212 15:36:30.807356 5123 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"prometheus-default-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/1a417a0f-c2cc-4dac-a05a-9e00b69fe8cc-prometheus-default-rulefiles-0\") pod \"prometheus-default-0\" (UID: \"1a417a0f-c2cc-4dac-a05a-9e00b69fe8cc\") " 
pod="service-telemetry/prometheus-default-0" Dec 12 15:36:30 crc kubenswrapper[5123]: I1212 15:36:30.810864 5123 csi_attacher.go:373] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Dec 12 15:36:30 crc kubenswrapper[5123]: I1212 15:36:30.810911 5123 operation_generator.go:557] "MountVolume.MountDevice succeeded for volume \"pvc-4ff74bc9-42d1-44e3-bb30-c4ad6150ce34\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-4ff74bc9-42d1-44e3-bb30-c4ad6150ce34\") pod \"prometheus-default-0\" (UID: \"1a417a0f-c2cc-4dac-a05a-9e00b69fe8cc\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/e17c6ae451cf0bafcfd6a99fdd455784004b88b206bdd06b9fb02efdb97580cf/globalmount\"" pod="service-telemetry/prometheus-default-0" Dec 12 15:36:30 crc kubenswrapper[5123]: I1212 15:36:30.813976 5123 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/1a417a0f-c2cc-4dac-a05a-9e00b69fe8cc-web-config\") pod \"prometheus-default-0\" (UID: \"1a417a0f-c2cc-4dac-a05a-9e00b69fe8cc\") " pod="service-telemetry/prometheus-default-0" Dec 12 15:36:30 crc kubenswrapper[5123]: I1212 15:36:30.814070 5123 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/1a417a0f-c2cc-4dac-a05a-9e00b69fe8cc-config\") pod \"prometheus-default-0\" (UID: \"1a417a0f-c2cc-4dac-a05a-9e00b69fe8cc\") " pod="service-telemetry/prometheus-default-0" Dec 12 15:36:30 crc kubenswrapper[5123]: I1212 15:36:30.815460 5123 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/1a417a0f-c2cc-4dac-a05a-9e00b69fe8cc-config-out\") pod \"prometheus-default-0\" (UID: \"1a417a0f-c2cc-4dac-a05a-9e00b69fe8cc\") " pod="service-telemetry/prometheus-default-0" Dec 12 15:36:30 crc kubenswrapper[5123]: I1212 15:36:30.815518 5123 
operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"secret-default-session-secret\" (UniqueName: \"kubernetes.io/secret/1a417a0f-c2cc-4dac-a05a-9e00b69fe8cc-secret-default-session-secret\") pod \"prometheus-default-0\" (UID: \"1a417a0f-c2cc-4dac-a05a-9e00b69fe8cc\") " pod="service-telemetry/prometheus-default-0" Dec 12 15:36:30 crc kubenswrapper[5123]: I1212 15:36:30.816450 5123 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/1a417a0f-c2cc-4dac-a05a-9e00b69fe8cc-tls-assets\") pod \"prometheus-default-0\" (UID: \"1a417a0f-c2cc-4dac-a05a-9e00b69fe8cc\") " pod="service-telemetry/prometheus-default-0" Dec 12 15:36:30 crc kubenswrapper[5123]: I1212 15:36:30.830925 5123 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-skzkb\" (UniqueName: \"kubernetes.io/projected/1a417a0f-c2cc-4dac-a05a-9e00b69fe8cc-kube-api-access-skzkb\") pod \"prometheus-default-0\" (UID: \"1a417a0f-c2cc-4dac-a05a-9e00b69fe8cc\") " pod="service-telemetry/prometheus-default-0" Dec 12 15:36:30 crc kubenswrapper[5123]: I1212 15:36:30.851853 5123 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"pvc-4ff74bc9-42d1-44e3-bb30-c4ad6150ce34\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-4ff74bc9-42d1-44e3-bb30-c4ad6150ce34\") pod \"prometheus-default-0\" (UID: \"1a417a0f-c2cc-4dac-a05a-9e00b69fe8cc\") " pod="service-telemetry/prometheus-default-0" Dec 12 15:36:31 crc kubenswrapper[5123]: I1212 15:36:31.314206 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"secret-default-prometheus-proxy-tls\" (UniqueName: \"kubernetes.io/secret/1a417a0f-c2cc-4dac-a05a-9e00b69fe8cc-secret-default-prometheus-proxy-tls\") pod \"prometheus-default-0\" (UID: \"1a417a0f-c2cc-4dac-a05a-9e00b69fe8cc\") " pod="service-telemetry/prometheus-default-0" Dec 12 15:36:31 crc kubenswrapper[5123]: E1212 15:36:31.314381 5123 
secret.go:189] Couldn't get secret service-telemetry/default-prometheus-proxy-tls: secret "default-prometheus-proxy-tls" not found Dec 12 15:36:31 crc kubenswrapper[5123]: E1212 15:36:31.314459 5123 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1a417a0f-c2cc-4dac-a05a-9e00b69fe8cc-secret-default-prometheus-proxy-tls podName:1a417a0f-c2cc-4dac-a05a-9e00b69fe8cc nodeName:}" failed. No retries permitted until 2025-12-12 15:36:32.314441415 +0000 UTC m=+1021.124393926 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "secret-default-prometheus-proxy-tls" (UniqueName: "kubernetes.io/secret/1a417a0f-c2cc-4dac-a05a-9e00b69fe8cc-secret-default-prometheus-proxy-tls") pod "prometheus-default-0" (UID: "1a417a0f-c2cc-4dac-a05a-9e00b69fe8cc") : secret "default-prometheus-proxy-tls" not found Dec 12 15:36:32 crc kubenswrapper[5123]: I1212 15:36:32.373547 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"secret-default-prometheus-proxy-tls\" (UniqueName: \"kubernetes.io/secret/1a417a0f-c2cc-4dac-a05a-9e00b69fe8cc-secret-default-prometheus-proxy-tls\") pod \"prometheus-default-0\" (UID: \"1a417a0f-c2cc-4dac-a05a-9e00b69fe8cc\") " pod="service-telemetry/prometheus-default-0" Dec 12 15:36:32 crc kubenswrapper[5123]: I1212 15:36:32.378791 5123 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"secret-default-prometheus-proxy-tls\" (UniqueName: \"kubernetes.io/secret/1a417a0f-c2cc-4dac-a05a-9e00b69fe8cc-secret-default-prometheus-proxy-tls\") pod \"prometheus-default-0\" (UID: \"1a417a0f-c2cc-4dac-a05a-9e00b69fe8cc\") " pod="service-telemetry/prometheus-default-0" Dec 12 15:36:32 crc kubenswrapper[5123]: I1212 15:36:32.628062 5123 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"prometheus-stf-dockercfg-c5cl9\"" Dec 12 15:36:32 crc kubenswrapper[5123]: I1212 15:36:32.635587 5123 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/prometheus-default-0" Dec 12 15:36:33 crc kubenswrapper[5123]: I1212 15:36:33.007785 5123 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/prometheus-default-0"] Dec 12 15:36:33 crc kubenswrapper[5123]: I1212 15:36:33.076048 5123 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/prometheus-default-0" event={"ID":"1a417a0f-c2cc-4dac-a05a-9e00b69fe8cc","Type":"ContainerStarted","Data":"af54807e00817673c7eb344c3809c02e81aa279fbb296d3f0266098b65ff338f"} Dec 12 15:36:39 crc kubenswrapper[5123]: I1212 15:36:39.130882 5123 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/prometheus-default-0" event={"ID":"1a417a0f-c2cc-4dac-a05a-9e00b69fe8cc","Type":"ContainerStarted","Data":"e7201e85ca6d7ecb050c5241443cdb02351c3267ab7e11ae2f9eec51cfd57cf2"} Dec 12 15:36:43 crc kubenswrapper[5123]: I1212 15:36:43.428421 5123 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/default-snmp-webhook-6774d8dfbc-lrhwt"] Dec 12 15:36:43 crc kubenswrapper[5123]: I1212 15:36:43.455325 5123 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/default-snmp-webhook-6774d8dfbc-lrhwt"] Dec 12 15:36:43 crc kubenswrapper[5123]: I1212 15:36:43.455533 5123 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/default-snmp-webhook-6774d8dfbc-lrhwt" Dec 12 15:36:43 crc kubenswrapper[5123]: I1212 15:36:43.622798 5123 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t8p77\" (UniqueName: \"kubernetes.io/projected/e5283dbb-de4c-44cb-9918-e86c990dd7c7-kube-api-access-t8p77\") pod \"default-snmp-webhook-6774d8dfbc-lrhwt\" (UID: \"e5283dbb-de4c-44cb-9918-e86c990dd7c7\") " pod="service-telemetry/default-snmp-webhook-6774d8dfbc-lrhwt" Dec 12 15:36:43 crc kubenswrapper[5123]: I1212 15:36:43.726696 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-t8p77\" (UniqueName: \"kubernetes.io/projected/e5283dbb-de4c-44cb-9918-e86c990dd7c7-kube-api-access-t8p77\") pod \"default-snmp-webhook-6774d8dfbc-lrhwt\" (UID: \"e5283dbb-de4c-44cb-9918-e86c990dd7c7\") " pod="service-telemetry/default-snmp-webhook-6774d8dfbc-lrhwt" Dec 12 15:36:43 crc kubenswrapper[5123]: I1212 15:36:43.749039 5123 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-t8p77\" (UniqueName: \"kubernetes.io/projected/e5283dbb-de4c-44cb-9918-e86c990dd7c7-kube-api-access-t8p77\") pod \"default-snmp-webhook-6774d8dfbc-lrhwt\" (UID: \"e5283dbb-de4c-44cb-9918-e86c990dd7c7\") " pod="service-telemetry/default-snmp-webhook-6774d8dfbc-lrhwt" Dec 12 15:36:43 crc kubenswrapper[5123]: I1212 15:36:43.779132 5123 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/default-snmp-webhook-6774d8dfbc-lrhwt" Dec 12 15:36:44 crc kubenswrapper[5123]: I1212 15:36:44.237121 5123 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/default-snmp-webhook-6774d8dfbc-lrhwt"] Dec 12 15:36:44 crc kubenswrapper[5123]: W1212 15:36:44.237173 5123 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode5283dbb_de4c_44cb_9918_e86c990dd7c7.slice/crio-ce84400df6d0fcdd717dfbe074f93a586e03005cba7b7a30f378641793aa6492 WatchSource:0}: Error finding container ce84400df6d0fcdd717dfbe074f93a586e03005cba7b7a30f378641793aa6492: Status 404 returned error can't find the container with id ce84400df6d0fcdd717dfbe074f93a586e03005cba7b7a30f378641793aa6492 Dec 12 15:36:45 crc kubenswrapper[5123]: I1212 15:36:45.187396 5123 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-snmp-webhook-6774d8dfbc-lrhwt" event={"ID":"e5283dbb-de4c-44cb-9918-e86c990dd7c7","Type":"ContainerStarted","Data":"ce84400df6d0fcdd717dfbe074f93a586e03005cba7b7a30f378641793aa6492"} Dec 12 15:36:47 crc kubenswrapper[5123]: I1212 15:36:47.205887 5123 generic.go:358] "Generic (PLEG): container finished" podID="1a417a0f-c2cc-4dac-a05a-9e00b69fe8cc" containerID="e7201e85ca6d7ecb050c5241443cdb02351c3267ab7e11ae2f9eec51cfd57cf2" exitCode=0 Dec 12 15:36:47 crc kubenswrapper[5123]: I1212 15:36:47.206005 5123 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/prometheus-default-0" event={"ID":"1a417a0f-c2cc-4dac-a05a-9e00b69fe8cc","Type":"ContainerDied","Data":"e7201e85ca6d7ecb050c5241443cdb02351c3267ab7e11ae2f9eec51cfd57cf2"} Dec 12 15:36:49 crc kubenswrapper[5123]: I1212 15:36:49.206702 5123 ???:1] "http: TLS handshake error from 192.168.126.11:42258: no serving certificate available for the kubelet" Dec 12 15:36:51 crc kubenswrapper[5123]: I1212 15:36:51.870808 5123 kubelet.go:2537] "SyncLoop ADD" 
source="api" pods=["service-telemetry/alertmanager-default-0"]
Dec 12 15:36:51 crc kubenswrapper[5123]: I1212 15:36:51.877794 5123 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/alertmanager-default-0"
Dec 12 15:36:52 crc kubenswrapper[5123]: I1212 15:36:52.070033 5123 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"alertmanager-default-cluster-tls-config\""
Dec 12 15:36:52 crc kubenswrapper[5123]: I1212 15:36:52.070084 5123 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"alertmanager-default-generated\""
Dec 12 15:36:52 crc kubenswrapper[5123]: I1212 15:36:52.070144 5123 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"alertmanager-default-web-config\""
Dec 12 15:36:52 crc kubenswrapper[5123]: I1212 15:36:52.070364 5123 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"default-alertmanager-proxy-tls\""
Dec 12 15:36:52 crc kubenswrapper[5123]: I1212 15:36:52.070544 5123 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"alertmanager-default-tls-assets-0\""
Dec 12 15:36:52 crc kubenswrapper[5123]: I1212 15:36:52.070556 5123 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"alertmanager-stf-dockercfg-q8n4t\""
Dec 12 15:36:52 crc kubenswrapper[5123]: I1212 15:36:52.087474 5123 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/alertmanager-default-0"]
Dec 12 15:36:52 crc kubenswrapper[5123]: I1212 15:36:52.172396 5123 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/39698c37-d56a-4795-a000-b3ac9ba16d50-web-config\") pod \"alertmanager-default-0\" (UID: \"39698c37-d56a-4795-a000-b3ac9ba16d50\") " pod="service-telemetry/alertmanager-default-0"
Dec 12 15:36:52 crc kubenswrapper[5123]: I1212 15:36:52.172483 5123 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cluster-tls-config\" (UniqueName: \"kubernetes.io/secret/39698c37-d56a-4795-a000-b3ac9ba16d50-cluster-tls-config\") pod \"alertmanager-default-0\" (UID: \"39698c37-d56a-4795-a000-b3ac9ba16d50\") " pod="service-telemetry/alertmanager-default-0"
Dec 12 15:36:52 crc kubenswrapper[5123]: I1212 15:36:52.172792 5123 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/39698c37-d56a-4795-a000-b3ac9ba16d50-config-volume\") pod \"alertmanager-default-0\" (UID: \"39698c37-d56a-4795-a000-b3ac9ba16d50\") " pod="service-telemetry/alertmanager-default-0"
Dec 12 15:36:52 crc kubenswrapper[5123]: I1212 15:36:52.172872 5123 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-default-session-secret\" (UniqueName: \"kubernetes.io/secret/39698c37-d56a-4795-a000-b3ac9ba16d50-secret-default-session-secret\") pod \"alertmanager-default-0\" (UID: \"39698c37-d56a-4795-a000-b3ac9ba16d50\") " pod="service-telemetry/alertmanager-default-0"
Dec 12 15:36:52 crc kubenswrapper[5123]: I1212 15:36:52.172955 5123 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/39698c37-d56a-4795-a000-b3ac9ba16d50-tls-assets\") pod \"alertmanager-default-0\" (UID: \"39698c37-d56a-4795-a000-b3ac9ba16d50\") " pod="service-telemetry/alertmanager-default-0"
Dec 12 15:36:52 crc kubenswrapper[5123]: I1212 15:36:52.173132 5123 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-default-alertmanager-proxy-tls\" (UniqueName: \"kubernetes.io/secret/39698c37-d56a-4795-a000-b3ac9ba16d50-secret-default-alertmanager-proxy-tls\") pod \"alertmanager-default-0\" (UID: \"39698c37-d56a-4795-a000-b3ac9ba16d50\") " pod="service-telemetry/alertmanager-default-0"
Dec 12 15:36:52 crc kubenswrapper[5123]: I1212 15:36:52.173175 5123 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/39698c37-d56a-4795-a000-b3ac9ba16d50-config-out\") pod \"alertmanager-default-0\" (UID: \"39698c37-d56a-4795-a000-b3ac9ba16d50\") " pod="service-telemetry/alertmanager-default-0"
Dec 12 15:36:52 crc kubenswrapper[5123]: I1212 15:36:52.173199 5123 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-fa355fd7-aade-4d5b-a7d9-b257610f3770\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-fa355fd7-aade-4d5b-a7d9-b257610f3770\") pod \"alertmanager-default-0\" (UID: \"39698c37-d56a-4795-a000-b3ac9ba16d50\") " pod="service-telemetry/alertmanager-default-0"
Dec 12 15:36:52 crc kubenswrapper[5123]: I1212 15:36:52.173337 5123 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t2bth\" (UniqueName: \"kubernetes.io/projected/39698c37-d56a-4795-a000-b3ac9ba16d50-kube-api-access-t2bth\") pod \"alertmanager-default-0\" (UID: \"39698c37-d56a-4795-a000-b3ac9ba16d50\") " pod="service-telemetry/alertmanager-default-0"
Dec 12 15:36:52 crc kubenswrapper[5123]: I1212 15:36:52.274766 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/39698c37-d56a-4795-a000-b3ac9ba16d50-web-config\") pod \"alertmanager-default-0\" (UID: \"39698c37-d56a-4795-a000-b3ac9ba16d50\") " pod="service-telemetry/alertmanager-default-0"
Dec 12 15:36:52 crc kubenswrapper[5123]: I1212 15:36:52.274821 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cluster-tls-config\" (UniqueName: \"kubernetes.io/secret/39698c37-d56a-4795-a000-b3ac9ba16d50-cluster-tls-config\") pod \"alertmanager-default-0\" (UID: \"39698c37-d56a-4795-a000-b3ac9ba16d50\") " pod="service-telemetry/alertmanager-default-0"
Dec 12 15:36:52 crc kubenswrapper[5123]: I1212 15:36:52.274878 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/39698c37-d56a-4795-a000-b3ac9ba16d50-config-volume\") pod \"alertmanager-default-0\" (UID: \"39698c37-d56a-4795-a000-b3ac9ba16d50\") " pod="service-telemetry/alertmanager-default-0"
Dec 12 15:36:52 crc kubenswrapper[5123]: I1212 15:36:52.274896 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"secret-default-session-secret\" (UniqueName: \"kubernetes.io/secret/39698c37-d56a-4795-a000-b3ac9ba16d50-secret-default-session-secret\") pod \"alertmanager-default-0\" (UID: \"39698c37-d56a-4795-a000-b3ac9ba16d50\") " pod="service-telemetry/alertmanager-default-0"
Dec 12 15:36:52 crc kubenswrapper[5123]: I1212 15:36:52.274913 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/39698c37-d56a-4795-a000-b3ac9ba16d50-tls-assets\") pod \"alertmanager-default-0\" (UID: \"39698c37-d56a-4795-a000-b3ac9ba16d50\") " pod="service-telemetry/alertmanager-default-0"
Dec 12 15:36:52 crc kubenswrapper[5123]: I1212 15:36:52.274957 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"secret-default-alertmanager-proxy-tls\" (UniqueName: \"kubernetes.io/secret/39698c37-d56a-4795-a000-b3ac9ba16d50-secret-default-alertmanager-proxy-tls\") pod \"alertmanager-default-0\" (UID: \"39698c37-d56a-4795-a000-b3ac9ba16d50\") " pod="service-telemetry/alertmanager-default-0"
Dec 12 15:36:52 crc kubenswrapper[5123]: E1212 15:36:52.275080 5123 secret.go:189] Couldn't get secret service-telemetry/default-alertmanager-proxy-tls: secret "default-alertmanager-proxy-tls" not found
Dec 12 15:36:52 crc kubenswrapper[5123]: E1212 15:36:52.275161 5123 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/39698c37-d56a-4795-a000-b3ac9ba16d50-secret-default-alertmanager-proxy-tls podName:39698c37-d56a-4795-a000-b3ac9ba16d50 nodeName:}" failed. No retries permitted until 2025-12-12 15:36:52.775140868 +0000 UTC m=+1041.585093379 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "secret-default-alertmanager-proxy-tls" (UniqueName: "kubernetes.io/secret/39698c37-d56a-4795-a000-b3ac9ba16d50-secret-default-alertmanager-proxy-tls") pod "alertmanager-default-0" (UID: "39698c37-d56a-4795-a000-b3ac9ba16d50") : secret "default-alertmanager-proxy-tls" not found
Dec 12 15:36:52 crc kubenswrapper[5123]: I1212 15:36:52.275177 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/39698c37-d56a-4795-a000-b3ac9ba16d50-config-out\") pod \"alertmanager-default-0\" (UID: \"39698c37-d56a-4795-a000-b3ac9ba16d50\") " pod="service-telemetry/alertmanager-default-0"
Dec 12 15:36:52 crc kubenswrapper[5123]: I1212 15:36:52.275203 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-fa355fd7-aade-4d5b-a7d9-b257610f3770\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-fa355fd7-aade-4d5b-a7d9-b257610f3770\") pod \"alertmanager-default-0\" (UID: \"39698c37-d56a-4795-a000-b3ac9ba16d50\") " pod="service-telemetry/alertmanager-default-0"
Dec 12 15:36:52 crc kubenswrapper[5123]: I1212 15:36:52.275260 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-t2bth\" (UniqueName: \"kubernetes.io/projected/39698c37-d56a-4795-a000-b3ac9ba16d50-kube-api-access-t2bth\") pod \"alertmanager-default-0\" (UID: \"39698c37-d56a-4795-a000-b3ac9ba16d50\") " pod="service-telemetry/alertmanager-default-0"
Dec 12 15:36:52 crc kubenswrapper[5123]: I1212 15:36:52.278850 5123 csi_attacher.go:373] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice...
Dec 12 15:36:52 crc kubenswrapper[5123]: I1212 15:36:52.278901 5123 operation_generator.go:557] "MountVolume.MountDevice succeeded for volume \"pvc-fa355fd7-aade-4d5b-a7d9-b257610f3770\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-fa355fd7-aade-4d5b-a7d9-b257610f3770\") pod \"alertmanager-default-0\" (UID: \"39698c37-d56a-4795-a000-b3ac9ba16d50\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/2f36ea0ca04fe511a7b61b2591b8dd1d34cfe64688d249cd87e841c8cd57e811/globalmount\"" pod="service-telemetry/alertmanager-default-0"
Dec 12 15:36:52 crc kubenswrapper[5123]: I1212 15:36:52.282430 5123 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"secret-default-session-secret\" (UniqueName: \"kubernetes.io/secret/39698c37-d56a-4795-a000-b3ac9ba16d50-secret-default-session-secret\") pod \"alertmanager-default-0\" (UID: \"39698c37-d56a-4795-a000-b3ac9ba16d50\") " pod="service-telemetry/alertmanager-default-0"
Dec 12 15:36:52 crc kubenswrapper[5123]: I1212 15:36:52.282954 5123 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cluster-tls-config\" (UniqueName: \"kubernetes.io/secret/39698c37-d56a-4795-a000-b3ac9ba16d50-cluster-tls-config\") pod \"alertmanager-default-0\" (UID: \"39698c37-d56a-4795-a000-b3ac9ba16d50\") " pod="service-telemetry/alertmanager-default-0"
Dec 12 15:36:52 crc kubenswrapper[5123]: I1212 15:36:52.283203 5123 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/39698c37-d56a-4795-a000-b3ac9ba16d50-config-out\") pod \"alertmanager-default-0\" (UID: \"39698c37-d56a-4795-a000-b3ac9ba16d50\") " pod="service-telemetry/alertmanager-default-0"
Dec 12 15:36:52 crc kubenswrapper[5123]: I1212 15:36:52.283422 5123 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/39698c37-d56a-4795-a000-b3ac9ba16d50-config-volume\") pod \"alertmanager-default-0\" (UID: \"39698c37-d56a-4795-a000-b3ac9ba16d50\") " pod="service-telemetry/alertmanager-default-0"
Dec 12 15:36:52 crc kubenswrapper[5123]: I1212 15:36:52.294127 5123 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/39698c37-d56a-4795-a000-b3ac9ba16d50-tls-assets\") pod \"alertmanager-default-0\" (UID: \"39698c37-d56a-4795-a000-b3ac9ba16d50\") " pod="service-telemetry/alertmanager-default-0"
Dec 12 15:36:52 crc kubenswrapper[5123]: I1212 15:36:52.294909 5123 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/39698c37-d56a-4795-a000-b3ac9ba16d50-web-config\") pod \"alertmanager-default-0\" (UID: \"39698c37-d56a-4795-a000-b3ac9ba16d50\") " pod="service-telemetry/alertmanager-default-0"
Dec 12 15:36:52 crc kubenswrapper[5123]: I1212 15:36:52.295594 5123 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-t2bth\" (UniqueName: \"kubernetes.io/projected/39698c37-d56a-4795-a000-b3ac9ba16d50-kube-api-access-t2bth\") pod \"alertmanager-default-0\" (UID: \"39698c37-d56a-4795-a000-b3ac9ba16d50\") " pod="service-telemetry/alertmanager-default-0"
Dec 12 15:36:52 crc kubenswrapper[5123]: I1212 15:36:52.317966 5123 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"pvc-fa355fd7-aade-4d5b-a7d9-b257610f3770\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-fa355fd7-aade-4d5b-a7d9-b257610f3770\") pod \"alertmanager-default-0\" (UID: \"39698c37-d56a-4795-a000-b3ac9ba16d50\") " pod="service-telemetry/alertmanager-default-0"
Dec 12 15:36:52 crc kubenswrapper[5123]: I1212 15:36:52.783823 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"secret-default-alertmanager-proxy-tls\" (UniqueName: \"kubernetes.io/secret/39698c37-d56a-4795-a000-b3ac9ba16d50-secret-default-alertmanager-proxy-tls\") pod \"alertmanager-default-0\" (UID: \"39698c37-d56a-4795-a000-b3ac9ba16d50\") " pod="service-telemetry/alertmanager-default-0"
Dec 12 15:36:52 crc kubenswrapper[5123]: E1212 15:36:52.784103 5123 secret.go:189] Couldn't get secret service-telemetry/default-alertmanager-proxy-tls: secret "default-alertmanager-proxy-tls" not found
Dec 12 15:36:52 crc kubenswrapper[5123]: E1212 15:36:52.784404 5123 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/39698c37-d56a-4795-a000-b3ac9ba16d50-secret-default-alertmanager-proxy-tls podName:39698c37-d56a-4795-a000-b3ac9ba16d50 nodeName:}" failed. No retries permitted until 2025-12-12 15:36:53.784360434 +0000 UTC m=+1042.594312945 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "secret-default-alertmanager-proxy-tls" (UniqueName: "kubernetes.io/secret/39698c37-d56a-4795-a000-b3ac9ba16d50-secret-default-alertmanager-proxy-tls") pod "alertmanager-default-0" (UID: "39698c37-d56a-4795-a000-b3ac9ba16d50") : secret "default-alertmanager-proxy-tls" not found
Dec 12 15:36:54 crc kubenswrapper[5123]: I1212 15:36:54.039757 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"secret-default-alertmanager-proxy-tls\" (UniqueName: \"kubernetes.io/secret/39698c37-d56a-4795-a000-b3ac9ba16d50-secret-default-alertmanager-proxy-tls\") pod \"alertmanager-default-0\" (UID: \"39698c37-d56a-4795-a000-b3ac9ba16d50\") " pod="service-telemetry/alertmanager-default-0"
Dec 12 15:36:54 crc kubenswrapper[5123]: I1212 15:36:54.054808 5123 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"secret-default-alertmanager-proxy-tls\" (UniqueName: \"kubernetes.io/secret/39698c37-d56a-4795-a000-b3ac9ba16d50-secret-default-alertmanager-proxy-tls\") pod \"alertmanager-default-0\" (UID: \"39698c37-d56a-4795-a000-b3ac9ba16d50\") " pod="service-telemetry/alertmanager-default-0"
Dec 12 15:36:54 crc kubenswrapper[5123]: I1212 15:36:54.360121 5123 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/alertmanager-default-0"
Dec 12 15:36:56 crc kubenswrapper[5123]: I1212 15:36:56.166895 5123 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/alertmanager-default-0"]
Dec 12 15:36:56 crc kubenswrapper[5123]: I1212 15:36:56.514993 5123 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-snmp-webhook-6774d8dfbc-lrhwt" event={"ID":"e5283dbb-de4c-44cb-9918-e86c990dd7c7","Type":"ContainerStarted","Data":"b1218c5f2f5ee239a898b7c2af32b78726eaa882a341dbcabcc5c2a3c177ca92"}
Dec 12 15:36:56 crc kubenswrapper[5123]: I1212 15:36:56.521131 5123 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/alertmanager-default-0" event={"ID":"39698c37-d56a-4795-a000-b3ac9ba16d50","Type":"ContainerStarted","Data":"7642161be99464dca471b9ff7ba6a345000c9abd59f3cde3b0c6cf3be76988a2"}
Dec 12 15:36:56 crc kubenswrapper[5123]: I1212 15:36:56.542841 5123 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/default-snmp-webhook-6774d8dfbc-lrhwt" podStartSLOduration=2.01370709 podStartE2EDuration="13.542797551s" podCreationTimestamp="2025-12-12 15:36:43 +0000 UTC" firstStartedPulling="2025-12-12 15:36:44.240195833 +0000 UTC m=+1033.050148344" lastFinishedPulling="2025-12-12 15:36:55.769286294 +0000 UTC m=+1044.579238805" observedRunningTime="2025-12-12 15:36:56.541587833 +0000 UTC m=+1045.351540344" watchObservedRunningTime="2025-12-12 15:36:56.542797551 +0000 UTC m=+1045.352750062"
Dec 12 15:37:01 crc kubenswrapper[5123]: I1212 15:37:01.045898 5123 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/alertmanager-default-0" event={"ID":"39698c37-d56a-4795-a000-b3ac9ba16d50","Type":"ContainerStarted","Data":"1f4361a51937ce5edc8a3dcbd81a6c77dcca8f88f13fcced3c2d1704637fcedf"}
Dec 12 15:37:10 crc kubenswrapper[5123]: I1212 15:37:10.628748 5123 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/prometheus-default-0" event={"ID":"1a417a0f-c2cc-4dac-a05a-9e00b69fe8cc","Type":"ContainerStarted","Data":"aca6ac39ec9ebe5917848cf074264494b4cbd8ab851e33d842e0a6a7279eeb8a"}
Dec 12 15:37:11 crc kubenswrapper[5123]: I1212 15:37:11.755995 5123 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/default-cloud1-coll-meter-smartgateway-787645d794-k5f65"]
Dec 12 15:37:11 crc kubenswrapper[5123]: I1212 15:37:11.805535 5123 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/default-cloud1-coll-meter-smartgateway-787645d794-k5f65"]
Dec 12 15:37:11 crc kubenswrapper[5123]: I1212 15:37:11.805736 5123 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/default-cloud1-coll-meter-smartgateway-787645d794-k5f65"
Dec 12 15:37:11 crc kubenswrapper[5123]: I1212 15:37:11.808822 5123 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"smart-gateway-session-secret\""
Dec 12 15:37:11 crc kubenswrapper[5123]: I1212 15:37:11.808868 5123 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"smart-gateway-dockercfg-jmrct\""
Dec 12 15:37:11 crc kubenswrapper[5123]: I1212 15:37:11.809550 5123 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"default-cloud1-coll-meter-proxy-tls\""
Dec 12 15:37:11 crc kubenswrapper[5123]: I1212 15:37:11.809627 5123 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"default-cloud1-coll-meter-sg-core-configmap\""
Dec 12 15:37:11 crc kubenswrapper[5123]: I1212 15:37:11.896440 5123 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2hk4w\" (UniqueName: \"kubernetes.io/projected/721dacd1-a3e2-4519-956f-566484659e0e-kube-api-access-2hk4w\") pod \"default-cloud1-coll-meter-smartgateway-787645d794-k5f65\" (UID: \"721dacd1-a3e2-4519-956f-566484659e0e\") " pod="service-telemetry/default-cloud1-coll-meter-smartgateway-787645d794-k5f65"
Dec 12 15:37:11 crc kubenswrapper[5123]: I1212 15:37:11.896798 5123 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-config\" (UniqueName: \"kubernetes.io/configmap/721dacd1-a3e2-4519-956f-566484659e0e-sg-core-config\") pod \"default-cloud1-coll-meter-smartgateway-787645d794-k5f65\" (UID: \"721dacd1-a3e2-4519-956f-566484659e0e\") " pod="service-telemetry/default-cloud1-coll-meter-smartgateway-787645d794-k5f65"
Dec 12 15:37:11 crc kubenswrapper[5123]: I1212 15:37:11.896908 5123 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-cloud1-coll-meter-proxy-tls\" (UniqueName: \"kubernetes.io/secret/721dacd1-a3e2-4519-956f-566484659e0e-default-cloud1-coll-meter-proxy-tls\") pod \"default-cloud1-coll-meter-smartgateway-787645d794-k5f65\" (UID: \"721dacd1-a3e2-4519-956f-566484659e0e\") " pod="service-telemetry/default-cloud1-coll-meter-smartgateway-787645d794-k5f65"
Dec 12 15:37:11 crc kubenswrapper[5123]: I1212 15:37:11.896946 5123 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"session-secret\" (UniqueName: \"kubernetes.io/secret/721dacd1-a3e2-4519-956f-566484659e0e-session-secret\") pod \"default-cloud1-coll-meter-smartgateway-787645d794-k5f65\" (UID: \"721dacd1-a3e2-4519-956f-566484659e0e\") " pod="service-telemetry/default-cloud1-coll-meter-smartgateway-787645d794-k5f65"
Dec 12 15:37:11 crc kubenswrapper[5123]: I1212 15:37:11.896988 5123 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/empty-dir/721dacd1-a3e2-4519-956f-566484659e0e-socket-dir\") pod \"default-cloud1-coll-meter-smartgateway-787645d794-k5f65\" (UID: \"721dacd1-a3e2-4519-956f-566484659e0e\") " pod="service-telemetry/default-cloud1-coll-meter-smartgateway-787645d794-k5f65"
Dec 12 15:37:11 crc kubenswrapper[5123]: I1212 15:37:11.998750 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"default-cloud1-coll-meter-proxy-tls\" (UniqueName: \"kubernetes.io/secret/721dacd1-a3e2-4519-956f-566484659e0e-default-cloud1-coll-meter-proxy-tls\") pod \"default-cloud1-coll-meter-smartgateway-787645d794-k5f65\" (UID: \"721dacd1-a3e2-4519-956f-566484659e0e\") " pod="service-telemetry/default-cloud1-coll-meter-smartgateway-787645d794-k5f65"
Dec 12 15:37:11 crc kubenswrapper[5123]: I1212 15:37:11.998808 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"session-secret\" (UniqueName: \"kubernetes.io/secret/721dacd1-a3e2-4519-956f-566484659e0e-session-secret\") pod \"default-cloud1-coll-meter-smartgateway-787645d794-k5f65\" (UID: \"721dacd1-a3e2-4519-956f-566484659e0e\") " pod="service-telemetry/default-cloud1-coll-meter-smartgateway-787645d794-k5f65"
Dec 12 15:37:11 crc kubenswrapper[5123]: I1212 15:37:11.998835 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/empty-dir/721dacd1-a3e2-4519-956f-566484659e0e-socket-dir\") pod \"default-cloud1-coll-meter-smartgateway-787645d794-k5f65\" (UID: \"721dacd1-a3e2-4519-956f-566484659e0e\") " pod="service-telemetry/default-cloud1-coll-meter-smartgateway-787645d794-k5f65"
Dec 12 15:37:11 crc kubenswrapper[5123]: I1212 15:37:11.999091 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-2hk4w\" (UniqueName: \"kubernetes.io/projected/721dacd1-a3e2-4519-956f-566484659e0e-kube-api-access-2hk4w\") pod \"default-cloud1-coll-meter-smartgateway-787645d794-k5f65\" (UID: \"721dacd1-a3e2-4519-956f-566484659e0e\") " pod="service-telemetry/default-cloud1-coll-meter-smartgateway-787645d794-k5f65"
Dec 12 15:37:11 crc kubenswrapper[5123]: E1212 15:37:11.999193 5123 secret.go:189] Couldn't get secret service-telemetry/default-cloud1-coll-meter-proxy-tls: secret "default-cloud1-coll-meter-proxy-tls" not found
Dec 12 15:37:11 crc kubenswrapper[5123]: E1212 15:37:11.999325 5123 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/721dacd1-a3e2-4519-956f-566484659e0e-default-cloud1-coll-meter-proxy-tls podName:721dacd1-a3e2-4519-956f-566484659e0e nodeName:}" failed. No retries permitted until 2025-12-12 15:37:12.499295001 +0000 UTC m=+1061.309247512 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "default-cloud1-coll-meter-proxy-tls" (UniqueName: "kubernetes.io/secret/721dacd1-a3e2-4519-956f-566484659e0e-default-cloud1-coll-meter-proxy-tls") pod "default-cloud1-coll-meter-smartgateway-787645d794-k5f65" (UID: "721dacd1-a3e2-4519-956f-566484659e0e") : secret "default-cloud1-coll-meter-proxy-tls" not found
Dec 12 15:37:11 crc kubenswrapper[5123]: I1212 15:37:11.999539 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"sg-core-config\" (UniqueName: \"kubernetes.io/configmap/721dacd1-a3e2-4519-956f-566484659e0e-sg-core-config\") pod \"default-cloud1-coll-meter-smartgateway-787645d794-k5f65\" (UID: \"721dacd1-a3e2-4519-956f-566484659e0e\") " pod="service-telemetry/default-cloud1-coll-meter-smartgateway-787645d794-k5f65"
Dec 12 15:37:11 crc kubenswrapper[5123]: I1212 15:37:11.999907 5123 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: \"kubernetes.io/empty-dir/721dacd1-a3e2-4519-956f-566484659e0e-socket-dir\") pod \"default-cloud1-coll-meter-smartgateway-787645d794-k5f65\" (UID: \"721dacd1-a3e2-4519-956f-566484659e0e\") " pod="service-telemetry/default-cloud1-coll-meter-smartgateway-787645d794-k5f65"
Dec 12 15:37:12 crc kubenswrapper[5123]: I1212 15:37:12.000742 5123 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"sg-core-config\" (UniqueName: \"kubernetes.io/configmap/721dacd1-a3e2-4519-956f-566484659e0e-sg-core-config\") pod \"default-cloud1-coll-meter-smartgateway-787645d794-k5f65\" (UID: \"721dacd1-a3e2-4519-956f-566484659e0e\") " pod="service-telemetry/default-cloud1-coll-meter-smartgateway-787645d794-k5f65"
Dec 12 15:37:12 crc kubenswrapper[5123]: I1212 15:37:12.008186 5123 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"session-secret\" (UniqueName: \"kubernetes.io/secret/721dacd1-a3e2-4519-956f-566484659e0e-session-secret\") pod \"default-cloud1-coll-meter-smartgateway-787645d794-k5f65\" (UID: \"721dacd1-a3e2-4519-956f-566484659e0e\") " pod="service-telemetry/default-cloud1-coll-meter-smartgateway-787645d794-k5f65"
Dec 12 15:37:12 crc kubenswrapper[5123]: I1212 15:37:12.021802 5123 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-2hk4w\" (UniqueName: \"kubernetes.io/projected/721dacd1-a3e2-4519-956f-566484659e0e-kube-api-access-2hk4w\") pod \"default-cloud1-coll-meter-smartgateway-787645d794-k5f65\" (UID: \"721dacd1-a3e2-4519-956f-566484659e0e\") " pod="service-telemetry/default-cloud1-coll-meter-smartgateway-787645d794-k5f65"
Dec 12 15:37:12 crc kubenswrapper[5123]: I1212 15:37:12.534298 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"default-cloud1-coll-meter-proxy-tls\" (UniqueName: \"kubernetes.io/secret/721dacd1-a3e2-4519-956f-566484659e0e-default-cloud1-coll-meter-proxy-tls\") pod \"default-cloud1-coll-meter-smartgateway-787645d794-k5f65\" (UID: \"721dacd1-a3e2-4519-956f-566484659e0e\") " pod="service-telemetry/default-cloud1-coll-meter-smartgateway-787645d794-k5f65"
Dec 12 15:37:12 crc kubenswrapper[5123]: E1212 15:37:12.534618 5123 secret.go:189] Couldn't get secret service-telemetry/default-cloud1-coll-meter-proxy-tls: secret "default-cloud1-coll-meter-proxy-tls" not found
Dec 12 15:37:12 crc kubenswrapper[5123]: E1212 15:37:12.534821 5123 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/721dacd1-a3e2-4519-956f-566484659e0e-default-cloud1-coll-meter-proxy-tls podName:721dacd1-a3e2-4519-956f-566484659e0e nodeName:}" failed. No retries permitted until 2025-12-12 15:37:13.534796193 +0000 UTC m=+1062.344748704 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "default-cloud1-coll-meter-proxy-tls" (UniqueName: "kubernetes.io/secret/721dacd1-a3e2-4519-956f-566484659e0e-default-cloud1-coll-meter-proxy-tls") pod "default-cloud1-coll-meter-smartgateway-787645d794-k5f65" (UID: "721dacd1-a3e2-4519-956f-566484659e0e") : secret "default-cloud1-coll-meter-proxy-tls" not found
Dec 12 15:37:12 crc kubenswrapper[5123]: I1212 15:37:12.645466 5123 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/prometheus-default-0" event={"ID":"1a417a0f-c2cc-4dac-a05a-9e00b69fe8cc","Type":"ContainerStarted","Data":"e48bf16bed4fe13cf7c07542e8860edc64b0e7518151485edc07afe3cd002178"}
Dec 12 15:37:13 crc kubenswrapper[5123]: I1212 15:37:13.555186 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"default-cloud1-coll-meter-proxy-tls\" (UniqueName: \"kubernetes.io/secret/721dacd1-a3e2-4519-956f-566484659e0e-default-cloud1-coll-meter-proxy-tls\") pod \"default-cloud1-coll-meter-smartgateway-787645d794-k5f65\" (UID: \"721dacd1-a3e2-4519-956f-566484659e0e\") " pod="service-telemetry/default-cloud1-coll-meter-smartgateway-787645d794-k5f65"
Dec 12 15:37:13 crc kubenswrapper[5123]: I1212 15:37:13.593130 5123 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"default-cloud1-coll-meter-proxy-tls\" (UniqueName: \"kubernetes.io/secret/721dacd1-a3e2-4519-956f-566484659e0e-default-cloud1-coll-meter-proxy-tls\") pod \"default-cloud1-coll-meter-smartgateway-787645d794-k5f65\" (UID: \"721dacd1-a3e2-4519-956f-566484659e0e\") " pod="service-telemetry/default-cloud1-coll-meter-smartgateway-787645d794-k5f65"
Dec 12 15:37:13 crc kubenswrapper[5123]: I1212 15:37:13.624567 5123 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/default-cloud1-coll-meter-smartgateway-787645d794-k5f65"
Dec 12 15:37:14 crc kubenswrapper[5123]: I1212 15:37:14.196733 5123 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/default-cloud1-coll-meter-smartgateway-787645d794-k5f65"]
Dec 12 15:37:14 crc kubenswrapper[5123]: I1212 15:37:14.682287 5123 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-coll-meter-smartgateway-787645d794-k5f65" event={"ID":"721dacd1-a3e2-4519-956f-566484659e0e","Type":"ContainerStarted","Data":"57cca5f58cc34697e37fce5f0f32eae8586360b03123197d618ada3cd849163a"}
Dec 12 15:37:17 crc kubenswrapper[5123]: I1212 15:37:17.432315 5123 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/default-cloud1-ceil-meter-smartgateway-545b564d9f-85xc5"]
Dec 12 15:37:17 crc kubenswrapper[5123]: I1212 15:37:17.442825 5123 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/default-cloud1-ceil-meter-smartgateway-545b564d9f-85xc5"]
Dec 12 15:37:17 crc kubenswrapper[5123]: I1212 15:37:17.443122 5123 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-545b564d9f-85xc5"
Dec 12 15:37:17 crc kubenswrapper[5123]: I1212 15:37:17.448588 5123 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"default-cloud1-ceil-meter-proxy-tls\""
Dec 12 15:37:17 crc kubenswrapper[5123]: I1212 15:37:17.449159 5123 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"default-cloud1-ceil-meter-sg-core-configmap\""
Dec 12 15:37:17 crc kubenswrapper[5123]: I1212 15:37:17.534164 5123 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w48vp\" (UniqueName: \"kubernetes.io/projected/9bfba062-362f-488e-b55f-4c32f4202fbd-kube-api-access-w48vp\") pod \"default-cloud1-ceil-meter-smartgateway-545b564d9f-85xc5\" (UID: \"9bfba062-362f-488e-b55f-4c32f4202fbd\") " pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-545b564d9f-85xc5"
Dec 12 15:37:17 crc kubenswrapper[5123]: I1212 15:37:17.534282 5123 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/empty-dir/9bfba062-362f-488e-b55f-4c32f4202fbd-socket-dir\") pod \"default-cloud1-ceil-meter-smartgateway-545b564d9f-85xc5\" (UID: \"9bfba062-362f-488e-b55f-4c32f4202fbd\") " pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-545b564d9f-85xc5"
Dec 12 15:37:17 crc kubenswrapper[5123]: I1212 15:37:17.534447 5123 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-cloud1-ceil-meter-proxy-tls\" (UniqueName: \"kubernetes.io/secret/9bfba062-362f-488e-b55f-4c32f4202fbd-default-cloud1-ceil-meter-proxy-tls\") pod \"default-cloud1-ceil-meter-smartgateway-545b564d9f-85xc5\" (UID: \"9bfba062-362f-488e-b55f-4c32f4202fbd\") " pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-545b564d9f-85xc5"
Dec 12 15:37:17 crc kubenswrapper[5123]: I1212 15:37:17.534641 5123 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"session-secret\" (UniqueName: \"kubernetes.io/secret/9bfba062-362f-488e-b55f-4c32f4202fbd-session-secret\") pod \"default-cloud1-ceil-meter-smartgateway-545b564d9f-85xc5\" (UID: \"9bfba062-362f-488e-b55f-4c32f4202fbd\") " pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-545b564d9f-85xc5"
Dec 12 15:37:17 crc kubenswrapper[5123]: I1212 15:37:17.534797 5123 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-config\" (UniqueName: \"kubernetes.io/configmap/9bfba062-362f-488e-b55f-4c32f4202fbd-sg-core-config\") pod \"default-cloud1-ceil-meter-smartgateway-545b564d9f-85xc5\" (UID: \"9bfba062-362f-488e-b55f-4c32f4202fbd\") " pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-545b564d9f-85xc5"
Dec 12 15:37:17 crc kubenswrapper[5123]: I1212 15:37:17.636704 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"sg-core-config\" (UniqueName: \"kubernetes.io/configmap/9bfba062-362f-488e-b55f-4c32f4202fbd-sg-core-config\") pod \"default-cloud1-ceil-meter-smartgateway-545b564d9f-85xc5\" (UID: \"9bfba062-362f-488e-b55f-4c32f4202fbd\") " pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-545b564d9f-85xc5"
Dec 12 15:37:17 crc kubenswrapper[5123]: I1212 15:37:17.636836 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-w48vp\" (UniqueName: \"kubernetes.io/projected/9bfba062-362f-488e-b55f-4c32f4202fbd-kube-api-access-w48vp\") pod \"default-cloud1-ceil-meter-smartgateway-545b564d9f-85xc5\" (UID: \"9bfba062-362f-488e-b55f-4c32f4202fbd\") " pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-545b564d9f-85xc5"
Dec 12 15:37:17 crc kubenswrapper[5123]: I1212 15:37:17.636883 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/empty-dir/9bfba062-362f-488e-b55f-4c32f4202fbd-socket-dir\") pod \"default-cloud1-ceil-meter-smartgateway-545b564d9f-85xc5\" (UID: \"9bfba062-362f-488e-b55f-4c32f4202fbd\") " pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-545b564d9f-85xc5"
Dec 12 15:37:17 crc kubenswrapper[5123]: I1212 15:37:17.636942 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"default-cloud1-ceil-meter-proxy-tls\" (UniqueName: \"kubernetes.io/secret/9bfba062-362f-488e-b55f-4c32f4202fbd-default-cloud1-ceil-meter-proxy-tls\") pod \"default-cloud1-ceil-meter-smartgateway-545b564d9f-85xc5\" (UID: \"9bfba062-362f-488e-b55f-4c32f4202fbd\") " pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-545b564d9f-85xc5"
Dec 12 15:37:17 crc kubenswrapper[5123]: I1212 15:37:17.636998 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"session-secret\" (UniqueName: \"kubernetes.io/secret/9bfba062-362f-488e-b55f-4c32f4202fbd-session-secret\") pod \"default-cloud1-ceil-meter-smartgateway-545b564d9f-85xc5\" (UID: \"9bfba062-362f-488e-b55f-4c32f4202fbd\") " pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-545b564d9f-85xc5"
Dec 12 15:37:17 crc kubenswrapper[5123]: I1212 15:37:17.637498 5123 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: \"kubernetes.io/empty-dir/9bfba062-362f-488e-b55f-4c32f4202fbd-socket-dir\") pod \"default-cloud1-ceil-meter-smartgateway-545b564d9f-85xc5\" (UID: \"9bfba062-362f-488e-b55f-4c32f4202fbd\") " pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-545b564d9f-85xc5"
Dec 12 15:37:17 crc kubenswrapper[5123]: E1212 15:37:17.637604 5123 secret.go:189] Couldn't get secret service-telemetry/default-cloud1-ceil-meter-proxy-tls: secret "default-cloud1-ceil-meter-proxy-tls" not found
Dec 12 15:37:17 crc kubenswrapper[5123]: E1212 15:37:17.637736 5123 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9bfba062-362f-488e-b55f-4c32f4202fbd-default-cloud1-ceil-meter-proxy-tls podName:9bfba062-362f-488e-b55f-4c32f4202fbd nodeName:}" failed. No retries permitted until 2025-12-12 15:37:18.137714638 +0000 UTC m=+1066.947667149 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "default-cloud1-ceil-meter-proxy-tls" (UniqueName: "kubernetes.io/secret/9bfba062-362f-488e-b55f-4c32f4202fbd-default-cloud1-ceil-meter-proxy-tls") pod "default-cloud1-ceil-meter-smartgateway-545b564d9f-85xc5" (UID: "9bfba062-362f-488e-b55f-4c32f4202fbd") : secret "default-cloud1-ceil-meter-proxy-tls" not found
Dec 12 15:37:17 crc kubenswrapper[5123]: I1212 15:37:17.637988 5123 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"sg-core-config\" (UniqueName: \"kubernetes.io/configmap/9bfba062-362f-488e-b55f-4c32f4202fbd-sg-core-config\") pod \"default-cloud1-ceil-meter-smartgateway-545b564d9f-85xc5\" (UID: \"9bfba062-362f-488e-b55f-4c32f4202fbd\") " pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-545b564d9f-85xc5"
Dec 12 15:37:17 crc kubenswrapper[5123]: I1212 15:37:17.650317 5123 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"session-secret\" (UniqueName: \"kubernetes.io/secret/9bfba062-362f-488e-b55f-4c32f4202fbd-session-secret\") pod \"default-cloud1-ceil-meter-smartgateway-545b564d9f-85xc5\" (UID: \"9bfba062-362f-488e-b55f-4c32f4202fbd\") " pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-545b564d9f-85xc5"
Dec 12 15:37:17 crc kubenswrapper[5123]: I1212 15:37:17.660798 5123 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-w48vp\" (UniqueName: \"kubernetes.io/projected/9bfba062-362f-488e-b55f-4c32f4202fbd-kube-api-access-w48vp\") pod \"default-cloud1-ceil-meter-smartgateway-545b564d9f-85xc5\" (UID: \"9bfba062-362f-488e-b55f-4c32f4202fbd\") " pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-545b564d9f-85xc5"
Dec 12 15:37:17 crc kubenswrapper[5123]: I1212 15:37:17.715812 5123 generic.go:358] "Generic (PLEG): container finished" podID="39698c37-d56a-4795-a000-b3ac9ba16d50" containerID="1f4361a51937ce5edc8a3dcbd81a6c77dcca8f88f13fcced3c2d1704637fcedf" exitCode=0
Dec 12 15:37:17 crc kubenswrapper[5123]: I1212 15:37:17.716001 5123 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/alertmanager-default-0" event={"ID":"39698c37-d56a-4795-a000-b3ac9ba16d50","Type":"ContainerDied","Data":"1f4361a51937ce5edc8a3dcbd81a6c77dcca8f88f13fcced3c2d1704637fcedf"}
Dec 12 15:37:18 crc kubenswrapper[5123]: I1212 15:37:18.173398 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"default-cloud1-ceil-meter-proxy-tls\" (UniqueName: \"kubernetes.io/secret/9bfba062-362f-488e-b55f-4c32f4202fbd-default-cloud1-ceil-meter-proxy-tls\") pod \"default-cloud1-ceil-meter-smartgateway-545b564d9f-85xc5\" (UID: \"9bfba062-362f-488e-b55f-4c32f4202fbd\") " pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-545b564d9f-85xc5"
Dec 12 15:37:18 crc kubenswrapper[5123]: E1212 15:37:18.173652 5123 secret.go:189] Couldn't get secret service-telemetry/default-cloud1-ceil-meter-proxy-tls: secret "default-cloud1-ceil-meter-proxy-tls" not found
Dec 12 15:37:18 crc kubenswrapper[5123]: E1212 15:37:18.173796 5123 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9bfba062-362f-488e-b55f-4c32f4202fbd-default-cloud1-ceil-meter-proxy-tls podName:9bfba062-362f-488e-b55f-4c32f4202fbd nodeName:}" failed. No retries permitted until 2025-12-12 15:37:19.173767407 +0000 UTC m=+1067.983719928 (durationBeforeRetry 1s).
Error: MountVolume.SetUp failed for volume "default-cloud1-ceil-meter-proxy-tls" (UniqueName: "kubernetes.io/secret/9bfba062-362f-488e-b55f-4c32f4202fbd-default-cloud1-ceil-meter-proxy-tls") pod "default-cloud1-ceil-meter-smartgateway-545b564d9f-85xc5" (UID: "9bfba062-362f-488e-b55f-4c32f4202fbd") : secret "default-cloud1-ceil-meter-proxy-tls" not found Dec 12 15:37:19 crc kubenswrapper[5123]: I1212 15:37:19.215388 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"default-cloud1-ceil-meter-proxy-tls\" (UniqueName: \"kubernetes.io/secret/9bfba062-362f-488e-b55f-4c32f4202fbd-default-cloud1-ceil-meter-proxy-tls\") pod \"default-cloud1-ceil-meter-smartgateway-545b564d9f-85xc5\" (UID: \"9bfba062-362f-488e-b55f-4c32f4202fbd\") " pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-545b564d9f-85xc5" Dec 12 15:37:19 crc kubenswrapper[5123]: E1212 15:37:19.215563 5123 secret.go:189] Couldn't get secret service-telemetry/default-cloud1-ceil-meter-proxy-tls: secret "default-cloud1-ceil-meter-proxy-tls" not found Dec 12 15:37:19 crc kubenswrapper[5123]: E1212 15:37:19.215635 5123 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9bfba062-362f-488e-b55f-4c32f4202fbd-default-cloud1-ceil-meter-proxy-tls podName:9bfba062-362f-488e-b55f-4c32f4202fbd nodeName:}" failed. No retries permitted until 2025-12-12 15:37:21.215618904 +0000 UTC m=+1070.025571415 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "default-cloud1-ceil-meter-proxy-tls" (UniqueName: "kubernetes.io/secret/9bfba062-362f-488e-b55f-4c32f4202fbd-default-cloud1-ceil-meter-proxy-tls") pod "default-cloud1-ceil-meter-smartgateway-545b564d9f-85xc5" (UID: "9bfba062-362f-488e-b55f-4c32f4202fbd") : secret "default-cloud1-ceil-meter-proxy-tls" not found Dec 12 15:37:21 crc kubenswrapper[5123]: I1212 15:37:21.274165 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"default-cloud1-ceil-meter-proxy-tls\" (UniqueName: \"kubernetes.io/secret/9bfba062-362f-488e-b55f-4c32f4202fbd-default-cloud1-ceil-meter-proxy-tls\") pod \"default-cloud1-ceil-meter-smartgateway-545b564d9f-85xc5\" (UID: \"9bfba062-362f-488e-b55f-4c32f4202fbd\") " pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-545b564d9f-85xc5" Dec 12 15:37:21 crc kubenswrapper[5123]: I1212 15:37:21.282835 5123 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"default-cloud1-ceil-meter-proxy-tls\" (UniqueName: \"kubernetes.io/secret/9bfba062-362f-488e-b55f-4c32f4202fbd-default-cloud1-ceil-meter-proxy-tls\") pod \"default-cloud1-ceil-meter-smartgateway-545b564d9f-85xc5\" (UID: \"9bfba062-362f-488e-b55f-4c32f4202fbd\") " pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-545b564d9f-85xc5" Dec 12 15:37:21 crc kubenswrapper[5123]: I1212 15:37:21.381497 5123 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-545b564d9f-85xc5" Dec 12 15:37:25 crc kubenswrapper[5123]: I1212 15:37:25.624360 5123 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/default-cloud1-sens-meter-smartgateway-66d5b7c5fc-9frfx"] Dec 12 15:37:27 crc kubenswrapper[5123]: I1212 15:37:27.135920 5123 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/default-cloud1-sens-meter-smartgateway-66d5b7c5fc-9frfx"] Dec 12 15:37:27 crc kubenswrapper[5123]: I1212 15:37:27.136287 5123 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/default-cloud1-sens-meter-smartgateway-66d5b7c5fc-9frfx" Dec 12 15:37:27 crc kubenswrapper[5123]: I1212 15:37:27.143599 5123 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"default-cloud1-sens-meter-sg-core-configmap\"" Dec 12 15:37:27 crc kubenswrapper[5123]: I1212 15:37:27.147630 5123 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"default-cloud1-sens-meter-proxy-tls\"" Dec 12 15:37:27 crc kubenswrapper[5123]: I1212 15:37:27.225956 5123 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-config\" (UniqueName: \"kubernetes.io/configmap/5df439b4-be33-48c1-9337-76771db0e43f-sg-core-config\") pod \"default-cloud1-sens-meter-smartgateway-66d5b7c5fc-9frfx\" (UID: \"5df439b4-be33-48c1-9337-76771db0e43f\") " pod="service-telemetry/default-cloud1-sens-meter-smartgateway-66d5b7c5fc-9frfx" Dec 12 15:37:27 crc kubenswrapper[5123]: I1212 15:37:27.226718 5123 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gzk74\" (UniqueName: \"kubernetes.io/projected/5df439b4-be33-48c1-9337-76771db0e43f-kube-api-access-gzk74\") pod \"default-cloud1-sens-meter-smartgateway-66d5b7c5fc-9frfx\" (UID: 
\"5df439b4-be33-48c1-9337-76771db0e43f\") " pod="service-telemetry/default-cloud1-sens-meter-smartgateway-66d5b7c5fc-9frfx" Dec 12 15:37:27 crc kubenswrapper[5123]: I1212 15:37:27.226851 5123 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/empty-dir/5df439b4-be33-48c1-9337-76771db0e43f-socket-dir\") pod \"default-cloud1-sens-meter-smartgateway-66d5b7c5fc-9frfx\" (UID: \"5df439b4-be33-48c1-9337-76771db0e43f\") " pod="service-telemetry/default-cloud1-sens-meter-smartgateway-66d5b7c5fc-9frfx" Dec 12 15:37:27 crc kubenswrapper[5123]: I1212 15:37:27.227079 5123 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-cloud1-sens-meter-proxy-tls\" (UniqueName: \"kubernetes.io/secret/5df439b4-be33-48c1-9337-76771db0e43f-default-cloud1-sens-meter-proxy-tls\") pod \"default-cloud1-sens-meter-smartgateway-66d5b7c5fc-9frfx\" (UID: \"5df439b4-be33-48c1-9337-76771db0e43f\") " pod="service-telemetry/default-cloud1-sens-meter-smartgateway-66d5b7c5fc-9frfx" Dec 12 15:37:27 crc kubenswrapper[5123]: I1212 15:37:27.227412 5123 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"session-secret\" (UniqueName: \"kubernetes.io/secret/5df439b4-be33-48c1-9337-76771db0e43f-session-secret\") pod \"default-cloud1-sens-meter-smartgateway-66d5b7c5fc-9frfx\" (UID: \"5df439b4-be33-48c1-9337-76771db0e43f\") " pod="service-telemetry/default-cloud1-sens-meter-smartgateway-66d5b7c5fc-9frfx" Dec 12 15:37:27 crc kubenswrapper[5123]: I1212 15:37:27.329144 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"sg-core-config\" (UniqueName: \"kubernetes.io/configmap/5df439b4-be33-48c1-9337-76771db0e43f-sg-core-config\") pod \"default-cloud1-sens-meter-smartgateway-66d5b7c5fc-9frfx\" (UID: \"5df439b4-be33-48c1-9337-76771db0e43f\") " 
pod="service-telemetry/default-cloud1-sens-meter-smartgateway-66d5b7c5fc-9frfx" Dec 12 15:37:27 crc kubenswrapper[5123]: I1212 15:37:27.329260 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-gzk74\" (UniqueName: \"kubernetes.io/projected/5df439b4-be33-48c1-9337-76771db0e43f-kube-api-access-gzk74\") pod \"default-cloud1-sens-meter-smartgateway-66d5b7c5fc-9frfx\" (UID: \"5df439b4-be33-48c1-9337-76771db0e43f\") " pod="service-telemetry/default-cloud1-sens-meter-smartgateway-66d5b7c5fc-9frfx" Dec 12 15:37:27 crc kubenswrapper[5123]: I1212 15:37:27.329310 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/empty-dir/5df439b4-be33-48c1-9337-76771db0e43f-socket-dir\") pod \"default-cloud1-sens-meter-smartgateway-66d5b7c5fc-9frfx\" (UID: \"5df439b4-be33-48c1-9337-76771db0e43f\") " pod="service-telemetry/default-cloud1-sens-meter-smartgateway-66d5b7c5fc-9frfx" Dec 12 15:37:27 crc kubenswrapper[5123]: I1212 15:37:27.329358 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"default-cloud1-sens-meter-proxy-tls\" (UniqueName: \"kubernetes.io/secret/5df439b4-be33-48c1-9337-76771db0e43f-default-cloud1-sens-meter-proxy-tls\") pod \"default-cloud1-sens-meter-smartgateway-66d5b7c5fc-9frfx\" (UID: \"5df439b4-be33-48c1-9337-76771db0e43f\") " pod="service-telemetry/default-cloud1-sens-meter-smartgateway-66d5b7c5fc-9frfx" Dec 12 15:37:27 crc kubenswrapper[5123]: I1212 15:37:27.329439 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"session-secret\" (UniqueName: \"kubernetes.io/secret/5df439b4-be33-48c1-9337-76771db0e43f-session-secret\") pod \"default-cloud1-sens-meter-smartgateway-66d5b7c5fc-9frfx\" (UID: \"5df439b4-be33-48c1-9337-76771db0e43f\") " pod="service-telemetry/default-cloud1-sens-meter-smartgateway-66d5b7c5fc-9frfx" Dec 12 15:37:27 crc kubenswrapper[5123]: I1212 15:37:27.330682 5123 
operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: \"kubernetes.io/empty-dir/5df439b4-be33-48c1-9337-76771db0e43f-socket-dir\") pod \"default-cloud1-sens-meter-smartgateway-66d5b7c5fc-9frfx\" (UID: \"5df439b4-be33-48c1-9337-76771db0e43f\") " pod="service-telemetry/default-cloud1-sens-meter-smartgateway-66d5b7c5fc-9frfx" Dec 12 15:37:27 crc kubenswrapper[5123]: E1212 15:37:27.331048 5123 secret.go:189] Couldn't get secret service-telemetry/default-cloud1-sens-meter-proxy-tls: secret "default-cloud1-sens-meter-proxy-tls" not found Dec 12 15:37:27 crc kubenswrapper[5123]: E1212 15:37:27.331183 5123 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5df439b4-be33-48c1-9337-76771db0e43f-default-cloud1-sens-meter-proxy-tls podName:5df439b4-be33-48c1-9337-76771db0e43f nodeName:}" failed. No retries permitted until 2025-12-12 15:37:27.831143798 +0000 UTC m=+1076.641096309 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "default-cloud1-sens-meter-proxy-tls" (UniqueName: "kubernetes.io/secret/5df439b4-be33-48c1-9337-76771db0e43f-default-cloud1-sens-meter-proxy-tls") pod "default-cloud1-sens-meter-smartgateway-66d5b7c5fc-9frfx" (UID: "5df439b4-be33-48c1-9337-76771db0e43f") : secret "default-cloud1-sens-meter-proxy-tls" not found Dec 12 15:37:27 crc kubenswrapper[5123]: I1212 15:37:27.331348 5123 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"sg-core-config\" (UniqueName: \"kubernetes.io/configmap/5df439b4-be33-48c1-9337-76771db0e43f-sg-core-config\") pod \"default-cloud1-sens-meter-smartgateway-66d5b7c5fc-9frfx\" (UID: \"5df439b4-be33-48c1-9337-76771db0e43f\") " pod="service-telemetry/default-cloud1-sens-meter-smartgateway-66d5b7c5fc-9frfx" Dec 12 15:37:27 crc kubenswrapper[5123]: I1212 15:37:27.341527 5123 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"session-secret\" (UniqueName: 
\"kubernetes.io/secret/5df439b4-be33-48c1-9337-76771db0e43f-session-secret\") pod \"default-cloud1-sens-meter-smartgateway-66d5b7c5fc-9frfx\" (UID: \"5df439b4-be33-48c1-9337-76771db0e43f\") " pod="service-telemetry/default-cloud1-sens-meter-smartgateway-66d5b7c5fc-9frfx" Dec 12 15:37:27 crc kubenswrapper[5123]: I1212 15:37:27.356446 5123 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-gzk74\" (UniqueName: \"kubernetes.io/projected/5df439b4-be33-48c1-9337-76771db0e43f-kube-api-access-gzk74\") pod \"default-cloud1-sens-meter-smartgateway-66d5b7c5fc-9frfx\" (UID: \"5df439b4-be33-48c1-9337-76771db0e43f\") " pod="service-telemetry/default-cloud1-sens-meter-smartgateway-66d5b7c5fc-9frfx" Dec 12 15:37:27 crc kubenswrapper[5123]: I1212 15:37:27.839091 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"default-cloud1-sens-meter-proxy-tls\" (UniqueName: \"kubernetes.io/secret/5df439b4-be33-48c1-9337-76771db0e43f-default-cloud1-sens-meter-proxy-tls\") pod \"default-cloud1-sens-meter-smartgateway-66d5b7c5fc-9frfx\" (UID: \"5df439b4-be33-48c1-9337-76771db0e43f\") " pod="service-telemetry/default-cloud1-sens-meter-smartgateway-66d5b7c5fc-9frfx" Dec 12 15:37:27 crc kubenswrapper[5123]: E1212 15:37:27.839277 5123 secret.go:189] Couldn't get secret service-telemetry/default-cloud1-sens-meter-proxy-tls: secret "default-cloud1-sens-meter-proxy-tls" not found Dec 12 15:37:27 crc kubenswrapper[5123]: E1212 15:37:27.839380 5123 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5df439b4-be33-48c1-9337-76771db0e43f-default-cloud1-sens-meter-proxy-tls podName:5df439b4-be33-48c1-9337-76771db0e43f nodeName:}" failed. No retries permitted until 2025-12-12 15:37:28.839341703 +0000 UTC m=+1077.649294214 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "default-cloud1-sens-meter-proxy-tls" (UniqueName: "kubernetes.io/secret/5df439b4-be33-48c1-9337-76771db0e43f-default-cloud1-sens-meter-proxy-tls") pod "default-cloud1-sens-meter-smartgateway-66d5b7c5fc-9frfx" (UID: "5df439b4-be33-48c1-9337-76771db0e43f") : secret "default-cloud1-sens-meter-proxy-tls" not found Dec 12 15:37:28 crc kubenswrapper[5123]: I1212 15:37:28.903173 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"default-cloud1-sens-meter-proxy-tls\" (UniqueName: \"kubernetes.io/secret/5df439b4-be33-48c1-9337-76771db0e43f-default-cloud1-sens-meter-proxy-tls\") pod \"default-cloud1-sens-meter-smartgateway-66d5b7c5fc-9frfx\" (UID: \"5df439b4-be33-48c1-9337-76771db0e43f\") " pod="service-telemetry/default-cloud1-sens-meter-smartgateway-66d5b7c5fc-9frfx" Dec 12 15:37:28 crc kubenswrapper[5123]: I1212 15:37:28.909909 5123 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"default-cloud1-sens-meter-proxy-tls\" (UniqueName: \"kubernetes.io/secret/5df439b4-be33-48c1-9337-76771db0e43f-default-cloud1-sens-meter-proxy-tls\") pod \"default-cloud1-sens-meter-smartgateway-66d5b7c5fc-9frfx\" (UID: \"5df439b4-be33-48c1-9337-76771db0e43f\") " pod="service-telemetry/default-cloud1-sens-meter-smartgateway-66d5b7c5fc-9frfx" Dec 12 15:37:28 crc kubenswrapper[5123]: I1212 15:37:28.957990 5123 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/default-cloud1-sens-meter-smartgateway-66d5b7c5fc-9frfx" Dec 12 15:37:29 crc kubenswrapper[5123]: I1212 15:37:29.601241 5123 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/default-cloud1-ceil-meter-smartgateway-545b564d9f-85xc5"] Dec 12 15:37:29 crc kubenswrapper[5123]: I1212 15:37:29.763467 5123 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/default-cloud1-sens-meter-smartgateway-66d5b7c5fc-9frfx"] Dec 12 15:37:30 crc kubenswrapper[5123]: I1212 15:37:30.196931 5123 ???:1] "http: TLS handshake error from 192.168.126.11:52824: no serving certificate available for the kubelet" Dec 12 15:37:30 crc kubenswrapper[5123]: I1212 15:37:30.531389 5123 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-sens-meter-smartgateway-66d5b7c5fc-9frfx" event={"ID":"5df439b4-be33-48c1-9337-76771db0e43f","Type":"ContainerStarted","Data":"9f3b5f7466c54336ab24bfa9a65a56dfd39c3dba527f22176cfaff66f22bfc0d"} Dec 12 15:37:30 crc kubenswrapper[5123]: I1212 15:37:30.533847 5123 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-545b564d9f-85xc5" event={"ID":"9bfba062-362f-488e-b55f-4c32f4202fbd","Type":"ContainerStarted","Data":"6be675329d9f2fa0b59c05951e4823143ee2a2884accee0da7d4046a00cd7794"} Dec 12 15:37:30 crc kubenswrapper[5123]: I1212 15:37:30.902715 5123 patch_prober.go:28] interesting pod/machine-config-daemon-cs4j6 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 12 15:37:30 crc kubenswrapper[5123]: I1212 15:37:30.903035 5123 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-cs4j6" podUID="cc1dd55e-ab31-4be3-acf1-ef01a53c1bb4" 
containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 12 15:37:34 crc kubenswrapper[5123]: I1212 15:37:34.990269 5123 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/default-cloud1-coll-event-smartgateway-64b75c6bfc-s9sd6"] Dec 12 15:37:35 crc kubenswrapper[5123]: I1212 15:37:35.613497 5123 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/default-cloud1-coll-event-smartgateway-64b75c6bfc-s9sd6"] Dec 12 15:37:35 crc kubenswrapper[5123]: I1212 15:37:35.613745 5123 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/default-cloud1-coll-event-smartgateway-64b75c6bfc-s9sd6" Dec 12 15:37:35 crc kubenswrapper[5123]: I1212 15:37:35.621068 5123 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"elasticsearch-es-cert\"" Dec 12 15:37:35 crc kubenswrapper[5123]: I1212 15:37:35.626487 5123 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"default-cloud1-coll-event-sg-core-configmap\"" Dec 12 15:37:35 crc kubenswrapper[5123]: I1212 15:37:35.693015 5123 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gtwb8\" (UniqueName: \"kubernetes.io/projected/88fdb60b-3b9c-492f-af79-a20a7a2c9cf9-kube-api-access-gtwb8\") pod \"default-cloud1-coll-event-smartgateway-64b75c6bfc-s9sd6\" (UID: \"88fdb60b-3b9c-492f-af79-a20a7a2c9cf9\") " pod="service-telemetry/default-cloud1-coll-event-smartgateway-64b75c6bfc-s9sd6" Dec 12 15:37:35 crc kubenswrapper[5123]: I1212 15:37:35.693113 5123 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-certs\" (UniqueName: \"kubernetes.io/secret/88fdb60b-3b9c-492f-af79-a20a7a2c9cf9-elastic-certs\") pod \"default-cloud1-coll-event-smartgateway-64b75c6bfc-s9sd6\" 
(UID: \"88fdb60b-3b9c-492f-af79-a20a7a2c9cf9\") " pod="service-telemetry/default-cloud1-coll-event-smartgateway-64b75c6bfc-s9sd6" Dec 12 15:37:35 crc kubenswrapper[5123]: I1212 15:37:35.693158 5123 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/empty-dir/88fdb60b-3b9c-492f-af79-a20a7a2c9cf9-socket-dir\") pod \"default-cloud1-coll-event-smartgateway-64b75c6bfc-s9sd6\" (UID: \"88fdb60b-3b9c-492f-af79-a20a7a2c9cf9\") " pod="service-telemetry/default-cloud1-coll-event-smartgateway-64b75c6bfc-s9sd6" Dec 12 15:37:35 crc kubenswrapper[5123]: I1212 15:37:35.693239 5123 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-config\" (UniqueName: \"kubernetes.io/configmap/88fdb60b-3b9c-492f-af79-a20a7a2c9cf9-sg-core-config\") pod \"default-cloud1-coll-event-smartgateway-64b75c6bfc-s9sd6\" (UID: \"88fdb60b-3b9c-492f-af79-a20a7a2c9cf9\") " pod="service-telemetry/default-cloud1-coll-event-smartgateway-64b75c6bfc-s9sd6" Dec 12 15:37:35 crc kubenswrapper[5123]: I1212 15:37:35.794679 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-gtwb8\" (UniqueName: \"kubernetes.io/projected/88fdb60b-3b9c-492f-af79-a20a7a2c9cf9-kube-api-access-gtwb8\") pod \"default-cloud1-coll-event-smartgateway-64b75c6bfc-s9sd6\" (UID: \"88fdb60b-3b9c-492f-af79-a20a7a2c9cf9\") " pod="service-telemetry/default-cloud1-coll-event-smartgateway-64b75c6bfc-s9sd6" Dec 12 15:37:35 crc kubenswrapper[5123]: I1212 15:37:35.794813 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"elastic-certs\" (UniqueName: \"kubernetes.io/secret/88fdb60b-3b9c-492f-af79-a20a7a2c9cf9-elastic-certs\") pod \"default-cloud1-coll-event-smartgateway-64b75c6bfc-s9sd6\" (UID: \"88fdb60b-3b9c-492f-af79-a20a7a2c9cf9\") " pod="service-telemetry/default-cloud1-coll-event-smartgateway-64b75c6bfc-s9sd6" Dec 12 15:37:35 crc 
kubenswrapper[5123]: I1212 15:37:35.796118 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/empty-dir/88fdb60b-3b9c-492f-af79-a20a7a2c9cf9-socket-dir\") pod \"default-cloud1-coll-event-smartgateway-64b75c6bfc-s9sd6\" (UID: \"88fdb60b-3b9c-492f-af79-a20a7a2c9cf9\") " pod="service-telemetry/default-cloud1-coll-event-smartgateway-64b75c6bfc-s9sd6" Dec 12 15:37:35 crc kubenswrapper[5123]: I1212 15:37:35.796315 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"sg-core-config\" (UniqueName: \"kubernetes.io/configmap/88fdb60b-3b9c-492f-af79-a20a7a2c9cf9-sg-core-config\") pod \"default-cloud1-coll-event-smartgateway-64b75c6bfc-s9sd6\" (UID: \"88fdb60b-3b9c-492f-af79-a20a7a2c9cf9\") " pod="service-telemetry/default-cloud1-coll-event-smartgateway-64b75c6bfc-s9sd6" Dec 12 15:37:35 crc kubenswrapper[5123]: I1212 15:37:35.797635 5123 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"sg-core-config\" (UniqueName: \"kubernetes.io/configmap/88fdb60b-3b9c-492f-af79-a20a7a2c9cf9-sg-core-config\") pod \"default-cloud1-coll-event-smartgateway-64b75c6bfc-s9sd6\" (UID: \"88fdb60b-3b9c-492f-af79-a20a7a2c9cf9\") " pod="service-telemetry/default-cloud1-coll-event-smartgateway-64b75c6bfc-s9sd6" Dec 12 15:37:35 crc kubenswrapper[5123]: I1212 15:37:35.798006 5123 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: \"kubernetes.io/empty-dir/88fdb60b-3b9c-492f-af79-a20a7a2c9cf9-socket-dir\") pod \"default-cloud1-coll-event-smartgateway-64b75c6bfc-s9sd6\" (UID: \"88fdb60b-3b9c-492f-af79-a20a7a2c9cf9\") " pod="service-telemetry/default-cloud1-coll-event-smartgateway-64b75c6bfc-s9sd6" Dec 12 15:37:35 crc kubenswrapper[5123]: I1212 15:37:35.814140 5123 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"elastic-certs\" (UniqueName: \"kubernetes.io/secret/88fdb60b-3b9c-492f-af79-a20a7a2c9cf9-elastic-certs\") pod 
\"default-cloud1-coll-event-smartgateway-64b75c6bfc-s9sd6\" (UID: \"88fdb60b-3b9c-492f-af79-a20a7a2c9cf9\") " pod="service-telemetry/default-cloud1-coll-event-smartgateway-64b75c6bfc-s9sd6" Dec 12 15:37:35 crc kubenswrapper[5123]: I1212 15:37:35.817696 5123 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-gtwb8\" (UniqueName: \"kubernetes.io/projected/88fdb60b-3b9c-492f-af79-a20a7a2c9cf9-kube-api-access-gtwb8\") pod \"default-cloud1-coll-event-smartgateway-64b75c6bfc-s9sd6\" (UID: \"88fdb60b-3b9c-492f-af79-a20a7a2c9cf9\") " pod="service-telemetry/default-cloud1-coll-event-smartgateway-64b75c6bfc-s9sd6" Dec 12 15:37:35 crc kubenswrapper[5123]: I1212 15:37:35.952838 5123 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/default-cloud1-coll-event-smartgateway-64b75c6bfc-s9sd6" Dec 12 15:37:36 crc kubenswrapper[5123]: I1212 15:37:36.601385 5123 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/default-cloud1-coll-event-smartgateway-64b75c6bfc-s9sd6"] Dec 12 15:37:36 crc kubenswrapper[5123]: W1212 15:37:36.614915 5123 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod88fdb60b_3b9c_492f_af79_a20a7a2c9cf9.slice/crio-c5219b59cb0b57ff15b54bbc24bdb0659cd9e1ed3cbd076b04d5acbd1700db32 WatchSource:0}: Error finding container c5219b59cb0b57ff15b54bbc24bdb0659cd9e1ed3cbd076b04d5acbd1700db32: Status 404 returned error can't find the container with id c5219b59cb0b57ff15b54bbc24bdb0659cd9e1ed3cbd076b04d5acbd1700db32 Dec 12 15:37:37 crc kubenswrapper[5123]: I1212 15:37:37.046134 5123 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/default-cloud1-ceil-event-smartgateway-75f7dbb489-s6mq6"] Dec 12 15:37:37 crc kubenswrapper[5123]: I1212 15:37:37.549162 5123 kubelet.go:2544] "SyncLoop UPDATE" source="api" 
pods=["service-telemetry/default-cloud1-ceil-event-smartgateway-75f7dbb489-s6mq6"] Dec 12 15:37:37 crc kubenswrapper[5123]: I1212 15:37:37.549444 5123 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/default-cloud1-ceil-event-smartgateway-75f7dbb489-s6mq6" Dec 12 15:37:37 crc kubenswrapper[5123]: I1212 15:37:37.556165 5123 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"default-cloud1-ceil-event-sg-core-configmap\"" Dec 12 15:37:37 crc kubenswrapper[5123]: I1212 15:37:37.611979 5123 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/prometheus-default-0" event={"ID":"1a417a0f-c2cc-4dac-a05a-9e00b69fe8cc","Type":"ContainerStarted","Data":"021d9ab3855b1d9fa193cc13767aa89eb17fb162223811baa65a99e8bf3a5db1"} Dec 12 15:37:37 crc kubenswrapper[5123]: I1212 15:37:37.621512 5123 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-coll-meter-smartgateway-787645d794-k5f65" event={"ID":"721dacd1-a3e2-4519-956f-566484659e0e","Type":"ContainerStarted","Data":"d424a800b63305662fee00a8896ac1cf1ace7572c42fb96bfeb20851bfa584c5"} Dec 12 15:37:37 crc kubenswrapper[5123]: I1212 15:37:37.627448 5123 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-sens-meter-smartgateway-66d5b7c5fc-9frfx" event={"ID":"5df439b4-be33-48c1-9337-76771db0e43f","Type":"ContainerStarted","Data":"78c2c393eb3803bfc3a935495d1b7e947610d40c80fa6f4fec901ef2b8493e43"} Dec 12 15:37:37 crc kubenswrapper[5123]: I1212 15:37:37.633533 5123 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-coll-event-smartgateway-64b75c6bfc-s9sd6" event={"ID":"88fdb60b-3b9c-492f-af79-a20a7a2c9cf9","Type":"ContainerStarted","Data":"c5219b59cb0b57ff15b54bbc24bdb0659cd9e1ed3cbd076b04d5acbd1700db32"} Dec 12 15:37:37 crc kubenswrapper[5123]: I1212 15:37:37.637562 5123 kubelet.go:2658] "SyncLoop (probe)" 
probe="readiness" status="not ready" pod="service-telemetry/prometheus-default-0" Dec 12 15:37:37 crc kubenswrapper[5123]: I1212 15:37:37.649818 5123 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/prometheus-default-0" podStartSLOduration=7.3970830450000005 podStartE2EDuration="1m8.649787534s" podCreationTimestamp="2025-12-12 15:36:29 +0000 UTC" firstStartedPulling="2025-12-12 15:36:33.014420392 +0000 UTC m=+1021.824372903" lastFinishedPulling="2025-12-12 15:37:34.267124881 +0000 UTC m=+1083.077077392" observedRunningTime="2025-12-12 15:37:37.642598445 +0000 UTC m=+1086.452550986" watchObservedRunningTime="2025-12-12 15:37:37.649787534 +0000 UTC m=+1086.459740045" Dec 12 15:37:37 crc kubenswrapper[5123]: I1212 15:37:37.723022 5123 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lgb2d\" (UniqueName: \"kubernetes.io/projected/c5c4f1a6-6160-4336-a241-db8aaa2bfc37-kube-api-access-lgb2d\") pod \"default-cloud1-ceil-event-smartgateway-75f7dbb489-s6mq6\" (UID: \"c5c4f1a6-6160-4336-a241-db8aaa2bfc37\") " pod="service-telemetry/default-cloud1-ceil-event-smartgateway-75f7dbb489-s6mq6" Dec 12 15:37:37 crc kubenswrapper[5123]: I1212 15:37:37.723249 5123 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-certs\" (UniqueName: \"kubernetes.io/secret/c5c4f1a6-6160-4336-a241-db8aaa2bfc37-elastic-certs\") pod \"default-cloud1-ceil-event-smartgateway-75f7dbb489-s6mq6\" (UID: \"c5c4f1a6-6160-4336-a241-db8aaa2bfc37\") " pod="service-telemetry/default-cloud1-ceil-event-smartgateway-75f7dbb489-s6mq6" Dec 12 15:37:37 crc kubenswrapper[5123]: I1212 15:37:37.723284 5123 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/empty-dir/c5c4f1a6-6160-4336-a241-db8aaa2bfc37-socket-dir\") pod \"default-cloud1-ceil-event-smartgateway-75f7dbb489-s6mq6\" 
(UID: \"c5c4f1a6-6160-4336-a241-db8aaa2bfc37\") " pod="service-telemetry/default-cloud1-ceil-event-smartgateway-75f7dbb489-s6mq6" Dec 12 15:37:37 crc kubenswrapper[5123]: I1212 15:37:37.723332 5123 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-config\" (UniqueName: \"kubernetes.io/configmap/c5c4f1a6-6160-4336-a241-db8aaa2bfc37-sg-core-config\") pod \"default-cloud1-ceil-event-smartgateway-75f7dbb489-s6mq6\" (UID: \"c5c4f1a6-6160-4336-a241-db8aaa2bfc37\") " pod="service-telemetry/default-cloud1-ceil-event-smartgateway-75f7dbb489-s6mq6" Dec 12 15:37:37 crc kubenswrapper[5123]: I1212 15:37:37.824829 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"elastic-certs\" (UniqueName: \"kubernetes.io/secret/c5c4f1a6-6160-4336-a241-db8aaa2bfc37-elastic-certs\") pod \"default-cloud1-ceil-event-smartgateway-75f7dbb489-s6mq6\" (UID: \"c5c4f1a6-6160-4336-a241-db8aaa2bfc37\") " pod="service-telemetry/default-cloud1-ceil-event-smartgateway-75f7dbb489-s6mq6" Dec 12 15:37:37 crc kubenswrapper[5123]: I1212 15:37:37.824893 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/empty-dir/c5c4f1a6-6160-4336-a241-db8aaa2bfc37-socket-dir\") pod \"default-cloud1-ceil-event-smartgateway-75f7dbb489-s6mq6\" (UID: \"c5c4f1a6-6160-4336-a241-db8aaa2bfc37\") " pod="service-telemetry/default-cloud1-ceil-event-smartgateway-75f7dbb489-s6mq6" Dec 12 15:37:37 crc kubenswrapper[5123]: I1212 15:37:37.824939 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"sg-core-config\" (UniqueName: \"kubernetes.io/configmap/c5c4f1a6-6160-4336-a241-db8aaa2bfc37-sg-core-config\") pod \"default-cloud1-ceil-event-smartgateway-75f7dbb489-s6mq6\" (UID: \"c5c4f1a6-6160-4336-a241-db8aaa2bfc37\") " pod="service-telemetry/default-cloud1-ceil-event-smartgateway-75f7dbb489-s6mq6" Dec 12 15:37:37 crc kubenswrapper[5123]: I1212 
15:37:37.825047 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-lgb2d\" (UniqueName: \"kubernetes.io/projected/c5c4f1a6-6160-4336-a241-db8aaa2bfc37-kube-api-access-lgb2d\") pod \"default-cloud1-ceil-event-smartgateway-75f7dbb489-s6mq6\" (UID: \"c5c4f1a6-6160-4336-a241-db8aaa2bfc37\") " pod="service-telemetry/default-cloud1-ceil-event-smartgateway-75f7dbb489-s6mq6" Dec 12 15:37:37 crc kubenswrapper[5123]: I1212 15:37:37.826664 5123 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"sg-core-config\" (UniqueName: \"kubernetes.io/configmap/c5c4f1a6-6160-4336-a241-db8aaa2bfc37-sg-core-config\") pod \"default-cloud1-ceil-event-smartgateway-75f7dbb489-s6mq6\" (UID: \"c5c4f1a6-6160-4336-a241-db8aaa2bfc37\") " pod="service-telemetry/default-cloud1-ceil-event-smartgateway-75f7dbb489-s6mq6" Dec 12 15:37:37 crc kubenswrapper[5123]: I1212 15:37:37.827373 5123 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: \"kubernetes.io/empty-dir/c5c4f1a6-6160-4336-a241-db8aaa2bfc37-socket-dir\") pod \"default-cloud1-ceil-event-smartgateway-75f7dbb489-s6mq6\" (UID: \"c5c4f1a6-6160-4336-a241-db8aaa2bfc37\") " pod="service-telemetry/default-cloud1-ceil-event-smartgateway-75f7dbb489-s6mq6" Dec 12 15:37:37 crc kubenswrapper[5123]: I1212 15:37:37.832550 5123 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"elastic-certs\" (UniqueName: \"kubernetes.io/secret/c5c4f1a6-6160-4336-a241-db8aaa2bfc37-elastic-certs\") pod \"default-cloud1-ceil-event-smartgateway-75f7dbb489-s6mq6\" (UID: \"c5c4f1a6-6160-4336-a241-db8aaa2bfc37\") " pod="service-telemetry/default-cloud1-ceil-event-smartgateway-75f7dbb489-s6mq6" Dec 12 15:37:37 crc kubenswrapper[5123]: I1212 15:37:37.847875 5123 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-lgb2d\" (UniqueName: \"kubernetes.io/projected/c5c4f1a6-6160-4336-a241-db8aaa2bfc37-kube-api-access-lgb2d\") 
pod \"default-cloud1-ceil-event-smartgateway-75f7dbb489-s6mq6\" (UID: \"c5c4f1a6-6160-4336-a241-db8aaa2bfc37\") " pod="service-telemetry/default-cloud1-ceil-event-smartgateway-75f7dbb489-s6mq6" Dec 12 15:37:37 crc kubenswrapper[5123]: I1212 15:37:37.886597 5123 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/default-cloud1-ceil-event-smartgateway-75f7dbb489-s6mq6" Dec 12 15:37:38 crc kubenswrapper[5123]: I1212 15:37:38.166776 5123 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/default-cloud1-ceil-event-smartgateway-75f7dbb489-s6mq6"] Dec 12 15:37:38 crc kubenswrapper[5123]: W1212 15:37:38.173296 5123 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc5c4f1a6_6160_4336_a241_db8aaa2bfc37.slice/crio-16855eb19ca974c3b743a9298ea6a50d932b25734ffaf4a039ad3c39343da72b WatchSource:0}: Error finding container 16855eb19ca974c3b743a9298ea6a50d932b25734ffaf4a039ad3c39343da72b: Status 404 returned error can't find the container with id 16855eb19ca974c3b743a9298ea6a50d932b25734ffaf4a039ad3c39343da72b Dec 12 15:37:38 crc kubenswrapper[5123]: I1212 15:37:38.647504 5123 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-ceil-event-smartgateway-75f7dbb489-s6mq6" event={"ID":"c5c4f1a6-6160-4336-a241-db8aaa2bfc37","Type":"ContainerStarted","Data":"16855eb19ca974c3b743a9298ea6a50d932b25734ffaf4a039ad3c39343da72b"} Dec 12 15:37:39 crc kubenswrapper[5123]: I1212 15:37:39.663541 5123 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/alertmanager-default-0" event={"ID":"39698c37-d56a-4795-a000-b3ac9ba16d50","Type":"ContainerStarted","Data":"2895843a218624c7454d0288c53825db780bfa85c5f31b07691259f75dcb8194"} Dec 12 15:37:41 crc kubenswrapper[5123]: I1212 15:37:41.695102 5123 kubelet.go:2569] "SyncLoop (PLEG): event for pod" 
pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-545b564d9f-85xc5" event={"ID":"9bfba062-362f-488e-b55f-4c32f4202fbd","Type":"ContainerStarted","Data":"baf8c0b7fec51ba3a573fd92d9dbaa74ba9bd6b1f6e06a43f80250634e3c2c40"} Dec 12 15:37:43 crc kubenswrapper[5123]: I1212 15:37:43.724040 5123 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/alertmanager-default-0" event={"ID":"39698c37-d56a-4795-a000-b3ac9ba16d50","Type":"ContainerStarted","Data":"125d7913c44254501fcf048c55db1fda3b379d7921e03fdb8d0ed3f6044d305c"} Dec 12 15:37:47 crc kubenswrapper[5123]: I1212 15:37:47.636229 5123 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="service-telemetry/prometheus-default-0" Dec 12 15:37:47 crc kubenswrapper[5123]: I1212 15:37:47.684392 5123 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="service-telemetry/prometheus-default-0" Dec 12 15:37:47 crc kubenswrapper[5123]: I1212 15:37:47.799691 5123 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="service-telemetry/prometheus-default-0" Dec 12 15:37:48 crc kubenswrapper[5123]: I1212 15:37:48.770544 5123 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-sens-meter-smartgateway-66d5b7c5fc-9frfx" event={"ID":"5df439b4-be33-48c1-9337-76771db0e43f","Type":"ContainerStarted","Data":"ff00c6b65d71d2402b21856da250431189b0ee2081f4c9392e65c88e435893f5"} Dec 12 15:37:48 crc kubenswrapper[5123]: I1212 15:37:48.772238 5123 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-coll-event-smartgateway-64b75c6bfc-s9sd6" event={"ID":"88fdb60b-3b9c-492f-af79-a20a7a2c9cf9","Type":"ContainerStarted","Data":"3482cc696719cd3d8434837799ed11330cc2872ea7c026576cf52890531071a8"} Dec 12 15:37:48 crc kubenswrapper[5123]: I1212 15:37:48.774246 5123 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-545b564d9f-85xc5" 
event={"ID":"9bfba062-362f-488e-b55f-4c32f4202fbd","Type":"ContainerStarted","Data":"4b693feca7fed22cf10da5c233fc58167c667ec36c3d9d702dbff840b4de9a2c"} Dec 12 15:37:48 crc kubenswrapper[5123]: I1212 15:37:48.777556 5123 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/alertmanager-default-0" event={"ID":"39698c37-d56a-4795-a000-b3ac9ba16d50","Type":"ContainerStarted","Data":"8f4d80e0c0f7225466958587c6343b4312702fac0e562c8ccad968dc40b08c7d"} Dec 12 15:37:48 crc kubenswrapper[5123]: I1212 15:37:48.781251 5123 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-ceil-event-smartgateway-75f7dbb489-s6mq6" event={"ID":"c5c4f1a6-6160-4336-a241-db8aaa2bfc37","Type":"ContainerStarted","Data":"eb5362a9f548b28751f3dfff57514d3e209f18582cf82989e9f95780a7d49397"} Dec 12 15:37:48 crc kubenswrapper[5123]: I1212 15:37:48.783645 5123 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-coll-meter-smartgateway-787645d794-k5f65" event={"ID":"721dacd1-a3e2-4519-956f-566484659e0e","Type":"ContainerStarted","Data":"cdf0c5cf14b406a79156f985a52d45115e643f08cf50906a1e4678bac49ce895"} Dec 12 15:37:48 crc kubenswrapper[5123]: I1212 15:37:48.809313 5123 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/alertmanager-default-0" podStartSLOduration=28.581177426 podStartE2EDuration="58.809281256s" podCreationTimestamp="2025-12-12 15:36:50 +0000 UTC" firstStartedPulling="2025-12-12 15:37:17.718480435 +0000 UTC m=+1066.528432946" lastFinishedPulling="2025-12-12 15:37:47.946584265 +0000 UTC m=+1096.756536776" observedRunningTime="2025-12-12 15:37:48.803705529 +0000 UTC m=+1097.613658060" watchObservedRunningTime="2025-12-12 15:37:48.809281256 +0000 UTC m=+1097.619233757" Dec 12 15:37:51 crc kubenswrapper[5123]: I1212 15:37:51.291810 5123 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["service-telemetry/default-interconnect-55bf8d5cb-cdkq7"] Dec 12 15:37:51 crc 
kubenswrapper[5123]: I1212 15:37:51.292588 5123 kuberuntime_container.go:858] "Killing container with a grace period" pod="service-telemetry/default-interconnect-55bf8d5cb-cdkq7" podUID="1409556c-b0dc-4a13-803c-aa74f048c7a2" containerName="default-interconnect" containerID="cri-o://ce52f4548cc859977688e4ed3d183c58dbc2319fe85bbea75ab9d50ba7a8fc19" gracePeriod=30 Dec 12 15:37:52 crc kubenswrapper[5123]: I1212 15:37:52.845693 5123 generic.go:358] "Generic (PLEG): container finished" podID="9bfba062-362f-488e-b55f-4c32f4202fbd" containerID="4b693feca7fed22cf10da5c233fc58167c667ec36c3d9d702dbff840b4de9a2c" exitCode=0 Dec 12 15:37:52 crc kubenswrapper[5123]: I1212 15:37:52.845787 5123 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-545b564d9f-85xc5" event={"ID":"9bfba062-362f-488e-b55f-4c32f4202fbd","Type":"ContainerDied","Data":"4b693feca7fed22cf10da5c233fc58167c667ec36c3d9d702dbff840b4de9a2c"} Dec 12 15:37:52 crc kubenswrapper[5123]: I1212 15:37:52.847861 5123 generic.go:358] "Generic (PLEG): container finished" podID="1409556c-b0dc-4a13-803c-aa74f048c7a2" containerID="ce52f4548cc859977688e4ed3d183c58dbc2319fe85bbea75ab9d50ba7a8fc19" exitCode=0 Dec 12 15:37:52 crc kubenswrapper[5123]: I1212 15:37:52.847914 5123 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-interconnect-55bf8d5cb-cdkq7" event={"ID":"1409556c-b0dc-4a13-803c-aa74f048c7a2","Type":"ContainerDied","Data":"ce52f4548cc859977688e4ed3d183c58dbc2319fe85bbea75ab9d50ba7a8fc19"} Dec 12 15:37:52 crc kubenswrapper[5123]: I1212 15:37:52.850329 5123 generic.go:358] "Generic (PLEG): container finished" podID="c5c4f1a6-6160-4336-a241-db8aaa2bfc37" containerID="eb5362a9f548b28751f3dfff57514d3e209f18582cf82989e9f95780a7d49397" exitCode=0 Dec 12 15:37:52 crc kubenswrapper[5123]: I1212 15:37:52.850428 5123 kubelet.go:2569] "SyncLoop (PLEG): event for pod" 
pod="service-telemetry/default-cloud1-ceil-event-smartgateway-75f7dbb489-s6mq6" event={"ID":"c5c4f1a6-6160-4336-a241-db8aaa2bfc37","Type":"ContainerDied","Data":"eb5362a9f548b28751f3dfff57514d3e209f18582cf82989e9f95780a7d49397"} Dec 12 15:37:52 crc kubenswrapper[5123]: I1212 15:37:52.853641 5123 generic.go:358] "Generic (PLEG): container finished" podID="721dacd1-a3e2-4519-956f-566484659e0e" containerID="cdf0c5cf14b406a79156f985a52d45115e643f08cf50906a1e4678bac49ce895" exitCode=0 Dec 12 15:37:52 crc kubenswrapper[5123]: I1212 15:37:52.853718 5123 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-coll-meter-smartgateway-787645d794-k5f65" event={"ID":"721dacd1-a3e2-4519-956f-566484659e0e","Type":"ContainerDied","Data":"cdf0c5cf14b406a79156f985a52d45115e643f08cf50906a1e4678bac49ce895"} Dec 12 15:37:52 crc kubenswrapper[5123]: I1212 15:37:52.855805 5123 generic.go:358] "Generic (PLEG): container finished" podID="5df439b4-be33-48c1-9337-76771db0e43f" containerID="ff00c6b65d71d2402b21856da250431189b0ee2081f4c9392e65c88e435893f5" exitCode=0 Dec 12 15:37:52 crc kubenswrapper[5123]: I1212 15:37:52.855827 5123 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-sens-meter-smartgateway-66d5b7c5fc-9frfx" event={"ID":"5df439b4-be33-48c1-9337-76771db0e43f","Type":"ContainerDied","Data":"ff00c6b65d71d2402b21856da250431189b0ee2081f4c9392e65c88e435893f5"} Dec 12 15:37:52 crc kubenswrapper[5123]: I1212 15:37:52.858195 5123 generic.go:358] "Generic (PLEG): container finished" podID="88fdb60b-3b9c-492f-af79-a20a7a2c9cf9" containerID="3482cc696719cd3d8434837799ed11330cc2872ea7c026576cf52890531071a8" exitCode=0 Dec 12 15:37:52 crc kubenswrapper[5123]: I1212 15:37:52.858268 5123 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-coll-event-smartgateway-64b75c6bfc-s9sd6" 
event={"ID":"88fdb60b-3b9c-492f-af79-a20a7a2c9cf9","Type":"ContainerDied","Data":"3482cc696719cd3d8434837799ed11330cc2872ea7c026576cf52890531071a8"} Dec 12 15:37:53 crc kubenswrapper[5123]: I1212 15:37:53.978747 5123 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/default-interconnect-55bf8d5cb-cdkq7" Dec 12 15:37:54 crc kubenswrapper[5123]: I1212 15:37:54.034579 5123 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"default-interconnect-openstack-ca\" (UniqueName: \"kubernetes.io/secret/1409556c-b0dc-4a13-803c-aa74f048c7a2-default-interconnect-openstack-ca\") pod \"1409556c-b0dc-4a13-803c-aa74f048c7a2\" (UID: \"1409556c-b0dc-4a13-803c-aa74f048c7a2\") " Dec 12 15:37:54 crc kubenswrapper[5123]: I1212 15:37:54.034642 5123 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"sasl-users\" (UniqueName: \"kubernetes.io/secret/1409556c-b0dc-4a13-803c-aa74f048c7a2-sasl-users\") pod \"1409556c-b0dc-4a13-803c-aa74f048c7a2\" (UID: \"1409556c-b0dc-4a13-803c-aa74f048c7a2\") " Dec 12 15:37:54 crc kubenswrapper[5123]: I1212 15:37:54.034716 5123 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"default-interconnect-inter-router-ca\" (UniqueName: \"kubernetes.io/secret/1409556c-b0dc-4a13-803c-aa74f048c7a2-default-interconnect-inter-router-ca\") pod \"1409556c-b0dc-4a13-803c-aa74f048c7a2\" (UID: \"1409556c-b0dc-4a13-803c-aa74f048c7a2\") " Dec 12 15:37:54 crc kubenswrapper[5123]: I1212 15:37:54.034751 5123 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"default-interconnect-openstack-credentials\" (UniqueName: \"kubernetes.io/secret/1409556c-b0dc-4a13-803c-aa74f048c7a2-default-interconnect-openstack-credentials\") pod \"1409556c-b0dc-4a13-803c-aa74f048c7a2\" (UID: \"1409556c-b0dc-4a13-803c-aa74f048c7a2\") " Dec 12 15:37:54 crc kubenswrapper[5123]: I1212 15:37:54.034783 5123 reconciler_common.go:162] 
"operationExecutor.UnmountVolume started for volume \"kube-api-access-gtds5\" (UniqueName: \"kubernetes.io/projected/1409556c-b0dc-4a13-803c-aa74f048c7a2-kube-api-access-gtds5\") pod \"1409556c-b0dc-4a13-803c-aa74f048c7a2\" (UID: \"1409556c-b0dc-4a13-803c-aa74f048c7a2\") " Dec 12 15:37:54 crc kubenswrapper[5123]: I1212 15:37:54.034822 5123 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"default-interconnect-inter-router-credentials\" (UniqueName: \"kubernetes.io/secret/1409556c-b0dc-4a13-803c-aa74f048c7a2-default-interconnect-inter-router-credentials\") pod \"1409556c-b0dc-4a13-803c-aa74f048c7a2\" (UID: \"1409556c-b0dc-4a13-803c-aa74f048c7a2\") " Dec 12 15:37:54 crc kubenswrapper[5123]: I1212 15:37:54.034849 5123 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"sasl-config\" (UniqueName: \"kubernetes.io/configmap/1409556c-b0dc-4a13-803c-aa74f048c7a2-sasl-config\") pod \"1409556c-b0dc-4a13-803c-aa74f048c7a2\" (UID: \"1409556c-b0dc-4a13-803c-aa74f048c7a2\") " Dec 12 15:37:54 crc kubenswrapper[5123]: I1212 15:37:54.036346 5123 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1409556c-b0dc-4a13-803c-aa74f048c7a2-sasl-config" (OuterVolumeSpecName: "sasl-config") pod "1409556c-b0dc-4a13-803c-aa74f048c7a2" (UID: "1409556c-b0dc-4a13-803c-aa74f048c7a2"). InnerVolumeSpecName "sasl-config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 12 15:37:54 crc kubenswrapper[5123]: I1212 15:37:54.044878 5123 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1409556c-b0dc-4a13-803c-aa74f048c7a2-default-interconnect-inter-router-ca" (OuterVolumeSpecName: "default-interconnect-inter-router-ca") pod "1409556c-b0dc-4a13-803c-aa74f048c7a2" (UID: "1409556c-b0dc-4a13-803c-aa74f048c7a2"). InnerVolumeSpecName "default-interconnect-inter-router-ca". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 12 15:37:54 crc kubenswrapper[5123]: I1212 15:37:54.052120 5123 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/default-interconnect-55bf8d5cb-64vfp"] Dec 12 15:37:54 crc kubenswrapper[5123]: I1212 15:37:54.053178 5123 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="1409556c-b0dc-4a13-803c-aa74f048c7a2" containerName="default-interconnect" Dec 12 15:37:54 crc kubenswrapper[5123]: I1212 15:37:54.053213 5123 state_mem.go:107] "Deleted CPUSet assignment" podUID="1409556c-b0dc-4a13-803c-aa74f048c7a2" containerName="default-interconnect" Dec 12 15:37:54 crc kubenswrapper[5123]: I1212 15:37:54.053470 5123 memory_manager.go:356] "RemoveStaleState removing state" podUID="1409556c-b0dc-4a13-803c-aa74f048c7a2" containerName="default-interconnect" Dec 12 15:37:54 crc kubenswrapper[5123]: I1212 15:37:54.057060 5123 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1409556c-b0dc-4a13-803c-aa74f048c7a2-kube-api-access-gtds5" (OuterVolumeSpecName: "kube-api-access-gtds5") pod "1409556c-b0dc-4a13-803c-aa74f048c7a2" (UID: "1409556c-b0dc-4a13-803c-aa74f048c7a2"). InnerVolumeSpecName "kube-api-access-gtds5". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 12 15:37:54 crc kubenswrapper[5123]: I1212 15:37:54.066774 5123 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1409556c-b0dc-4a13-803c-aa74f048c7a2-default-interconnect-openstack-ca" (OuterVolumeSpecName: "default-interconnect-openstack-ca") pod "1409556c-b0dc-4a13-803c-aa74f048c7a2" (UID: "1409556c-b0dc-4a13-803c-aa74f048c7a2"). InnerVolumeSpecName "default-interconnect-openstack-ca". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 12 15:37:54 crc kubenswrapper[5123]: I1212 15:37:54.069034 5123 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/default-interconnect-55bf8d5cb-64vfp" Dec 12 15:37:54 crc kubenswrapper[5123]: I1212 15:37:54.073648 5123 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/default-interconnect-55bf8d5cb-64vfp"] Dec 12 15:37:54 crc kubenswrapper[5123]: I1212 15:37:54.075183 5123 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1409556c-b0dc-4a13-803c-aa74f048c7a2-default-interconnect-openstack-credentials" (OuterVolumeSpecName: "default-interconnect-openstack-credentials") pod "1409556c-b0dc-4a13-803c-aa74f048c7a2" (UID: "1409556c-b0dc-4a13-803c-aa74f048c7a2"). InnerVolumeSpecName "default-interconnect-openstack-credentials". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 12 15:37:54 crc kubenswrapper[5123]: I1212 15:37:54.076303 5123 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1409556c-b0dc-4a13-803c-aa74f048c7a2-default-interconnect-inter-router-credentials" (OuterVolumeSpecName: "default-interconnect-inter-router-credentials") pod "1409556c-b0dc-4a13-803c-aa74f048c7a2" (UID: "1409556c-b0dc-4a13-803c-aa74f048c7a2"). InnerVolumeSpecName "default-interconnect-inter-router-credentials". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 12 15:37:54 crc kubenswrapper[5123]: I1212 15:37:54.076437 5123 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1409556c-b0dc-4a13-803c-aa74f048c7a2-sasl-users" (OuterVolumeSpecName: "sasl-users") pod "1409556c-b0dc-4a13-803c-aa74f048c7a2" (UID: "1409556c-b0dc-4a13-803c-aa74f048c7a2"). InnerVolumeSpecName "sasl-users". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 12 15:37:54 crc kubenswrapper[5123]: I1212 15:37:54.137199 5123 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rvwxl\" (UniqueName: \"kubernetes.io/projected/dc068209-e8bf-4d2e-a370-3dec1bba5284-kube-api-access-rvwxl\") pod \"default-interconnect-55bf8d5cb-64vfp\" (UID: \"dc068209-e8bf-4d2e-a370-3dec1bba5284\") " pod="service-telemetry/default-interconnect-55bf8d5cb-64vfp" Dec 12 15:37:54 crc kubenswrapper[5123]: I1212 15:37:54.137319 5123 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-interconnect-inter-router-ca\" (UniqueName: \"kubernetes.io/secret/dc068209-e8bf-4d2e-a370-3dec1bba5284-default-interconnect-inter-router-ca\") pod \"default-interconnect-55bf8d5cb-64vfp\" (UID: \"dc068209-e8bf-4d2e-a370-3dec1bba5284\") " pod="service-telemetry/default-interconnect-55bf8d5cb-64vfp" Dec 12 15:37:54 crc kubenswrapper[5123]: I1212 15:37:54.137367 5123 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sasl-users\" (UniqueName: \"kubernetes.io/secret/dc068209-e8bf-4d2e-a370-3dec1bba5284-sasl-users\") pod \"default-interconnect-55bf8d5cb-64vfp\" (UID: \"dc068209-e8bf-4d2e-a370-3dec1bba5284\") " pod="service-telemetry/default-interconnect-55bf8d5cb-64vfp" Dec 12 15:37:54 crc kubenswrapper[5123]: I1212 15:37:54.137472 5123 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-interconnect-openstack-credentials\" (UniqueName: \"kubernetes.io/secret/dc068209-e8bf-4d2e-a370-3dec1bba5284-default-interconnect-openstack-credentials\") pod \"default-interconnect-55bf8d5cb-64vfp\" (UID: \"dc068209-e8bf-4d2e-a370-3dec1bba5284\") " pod="service-telemetry/default-interconnect-55bf8d5cb-64vfp" Dec 12 15:37:54 crc kubenswrapper[5123]: I1212 15:37:54.137534 5123 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"sasl-config\" (UniqueName: \"kubernetes.io/configmap/dc068209-e8bf-4d2e-a370-3dec1bba5284-sasl-config\") pod \"default-interconnect-55bf8d5cb-64vfp\" (UID: \"dc068209-e8bf-4d2e-a370-3dec1bba5284\") " pod="service-telemetry/default-interconnect-55bf8d5cb-64vfp" Dec 12 15:37:54 crc kubenswrapper[5123]: I1212 15:37:54.137610 5123 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-interconnect-openstack-ca\" (UniqueName: \"kubernetes.io/secret/dc068209-e8bf-4d2e-a370-3dec1bba5284-default-interconnect-openstack-ca\") pod \"default-interconnect-55bf8d5cb-64vfp\" (UID: \"dc068209-e8bf-4d2e-a370-3dec1bba5284\") " pod="service-telemetry/default-interconnect-55bf8d5cb-64vfp" Dec 12 15:37:54 crc kubenswrapper[5123]: I1212 15:37:54.137662 5123 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-interconnect-inter-router-credentials\" (UniqueName: \"kubernetes.io/secret/dc068209-e8bf-4d2e-a370-3dec1bba5284-default-interconnect-inter-router-credentials\") pod \"default-interconnect-55bf8d5cb-64vfp\" (UID: \"dc068209-e8bf-4d2e-a370-3dec1bba5284\") " pod="service-telemetry/default-interconnect-55bf8d5cb-64vfp" Dec 12 15:37:54 crc kubenswrapper[5123]: I1212 15:37:54.137792 5123 reconciler_common.go:299] "Volume detached for volume \"default-interconnect-inter-router-ca\" (UniqueName: \"kubernetes.io/secret/1409556c-b0dc-4a13-803c-aa74f048c7a2-default-interconnect-inter-router-ca\") on node \"crc\" DevicePath \"\"" Dec 12 15:37:54 crc kubenswrapper[5123]: I1212 15:37:54.137819 5123 reconciler_common.go:299] "Volume detached for volume \"default-interconnect-openstack-credentials\" (UniqueName: \"kubernetes.io/secret/1409556c-b0dc-4a13-803c-aa74f048c7a2-default-interconnect-openstack-credentials\") on node \"crc\" DevicePath \"\"" Dec 12 15:37:54 crc kubenswrapper[5123]: I1212 15:37:54.137842 5123 
reconciler_common.go:299] "Volume detached for volume \"kube-api-access-gtds5\" (UniqueName: \"kubernetes.io/projected/1409556c-b0dc-4a13-803c-aa74f048c7a2-kube-api-access-gtds5\") on node \"crc\" DevicePath \"\"" Dec 12 15:37:54 crc kubenswrapper[5123]: I1212 15:37:54.137864 5123 reconciler_common.go:299] "Volume detached for volume \"default-interconnect-inter-router-credentials\" (UniqueName: \"kubernetes.io/secret/1409556c-b0dc-4a13-803c-aa74f048c7a2-default-interconnect-inter-router-credentials\") on node \"crc\" DevicePath \"\"" Dec 12 15:37:54 crc kubenswrapper[5123]: I1212 15:37:54.137889 5123 reconciler_common.go:299] "Volume detached for volume \"sasl-config\" (UniqueName: \"kubernetes.io/configmap/1409556c-b0dc-4a13-803c-aa74f048c7a2-sasl-config\") on node \"crc\" DevicePath \"\"" Dec 12 15:37:54 crc kubenswrapper[5123]: I1212 15:37:54.137911 5123 reconciler_common.go:299] "Volume detached for volume \"default-interconnect-openstack-ca\" (UniqueName: \"kubernetes.io/secret/1409556c-b0dc-4a13-803c-aa74f048c7a2-default-interconnect-openstack-ca\") on node \"crc\" DevicePath \"\"" Dec 12 15:37:54 crc kubenswrapper[5123]: I1212 15:37:54.137933 5123 reconciler_common.go:299] "Volume detached for volume \"sasl-users\" (UniqueName: \"kubernetes.io/secret/1409556c-b0dc-4a13-803c-aa74f048c7a2-sasl-users\") on node \"crc\" DevicePath \"\"" Dec 12 15:37:54 crc kubenswrapper[5123]: I1212 15:37:54.615446 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"default-interconnect-openstack-credentials\" (UniqueName: \"kubernetes.io/secret/dc068209-e8bf-4d2e-a370-3dec1bba5284-default-interconnect-openstack-credentials\") pod \"default-interconnect-55bf8d5cb-64vfp\" (UID: \"dc068209-e8bf-4d2e-a370-3dec1bba5284\") " pod="service-telemetry/default-interconnect-55bf8d5cb-64vfp" Dec 12 15:37:54 crc kubenswrapper[5123]: I1212 15:37:54.615506 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"sasl-config\" 
(UniqueName: \"kubernetes.io/configmap/dc068209-e8bf-4d2e-a370-3dec1bba5284-sasl-config\") pod \"default-interconnect-55bf8d5cb-64vfp\" (UID: \"dc068209-e8bf-4d2e-a370-3dec1bba5284\") " pod="service-telemetry/default-interconnect-55bf8d5cb-64vfp" Dec 12 15:37:54 crc kubenswrapper[5123]: I1212 15:37:54.615544 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"default-interconnect-openstack-ca\" (UniqueName: \"kubernetes.io/secret/dc068209-e8bf-4d2e-a370-3dec1bba5284-default-interconnect-openstack-ca\") pod \"default-interconnect-55bf8d5cb-64vfp\" (UID: \"dc068209-e8bf-4d2e-a370-3dec1bba5284\") " pod="service-telemetry/default-interconnect-55bf8d5cb-64vfp" Dec 12 15:37:54 crc kubenswrapper[5123]: I1212 15:37:54.615586 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"default-interconnect-inter-router-credentials\" (UniqueName: \"kubernetes.io/secret/dc068209-e8bf-4d2e-a370-3dec1bba5284-default-interconnect-inter-router-credentials\") pod \"default-interconnect-55bf8d5cb-64vfp\" (UID: \"dc068209-e8bf-4d2e-a370-3dec1bba5284\") " pod="service-telemetry/default-interconnect-55bf8d5cb-64vfp" Dec 12 15:37:54 crc kubenswrapper[5123]: I1212 15:37:54.615642 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-rvwxl\" (UniqueName: \"kubernetes.io/projected/dc068209-e8bf-4d2e-a370-3dec1bba5284-kube-api-access-rvwxl\") pod \"default-interconnect-55bf8d5cb-64vfp\" (UID: \"dc068209-e8bf-4d2e-a370-3dec1bba5284\") " pod="service-telemetry/default-interconnect-55bf8d5cb-64vfp" Dec 12 15:37:54 crc kubenswrapper[5123]: I1212 15:37:54.615676 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"default-interconnect-inter-router-ca\" (UniqueName: \"kubernetes.io/secret/dc068209-e8bf-4d2e-a370-3dec1bba5284-default-interconnect-inter-router-ca\") pod \"default-interconnect-55bf8d5cb-64vfp\" (UID: \"dc068209-e8bf-4d2e-a370-3dec1bba5284\") " 
pod="service-telemetry/default-interconnect-55bf8d5cb-64vfp" Dec 12 15:37:54 crc kubenswrapper[5123]: I1212 15:37:54.615692 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"sasl-users\" (UniqueName: \"kubernetes.io/secret/dc068209-e8bf-4d2e-a370-3dec1bba5284-sasl-users\") pod \"default-interconnect-55bf8d5cb-64vfp\" (UID: \"dc068209-e8bf-4d2e-a370-3dec1bba5284\") " pod="service-telemetry/default-interconnect-55bf8d5cb-64vfp" Dec 12 15:37:54 crc kubenswrapper[5123]: I1212 15:37:54.619254 5123 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"sasl-config\" (UniqueName: \"kubernetes.io/configmap/dc068209-e8bf-4d2e-a370-3dec1bba5284-sasl-config\") pod \"default-interconnect-55bf8d5cb-64vfp\" (UID: \"dc068209-e8bf-4d2e-a370-3dec1bba5284\") " pod="service-telemetry/default-interconnect-55bf8d5cb-64vfp" Dec 12 15:37:54 crc kubenswrapper[5123]: I1212 15:37:54.620119 5123 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"sasl-users\" (UniqueName: \"kubernetes.io/secret/dc068209-e8bf-4d2e-a370-3dec1bba5284-sasl-users\") pod \"default-interconnect-55bf8d5cb-64vfp\" (UID: \"dc068209-e8bf-4d2e-a370-3dec1bba5284\") " pod="service-telemetry/default-interconnect-55bf8d5cb-64vfp" Dec 12 15:37:54 crc kubenswrapper[5123]: I1212 15:37:54.626179 5123 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"default-interconnect-openstack-credentials\" (UniqueName: \"kubernetes.io/secret/dc068209-e8bf-4d2e-a370-3dec1bba5284-default-interconnect-openstack-credentials\") pod \"default-interconnect-55bf8d5cb-64vfp\" (UID: \"dc068209-e8bf-4d2e-a370-3dec1bba5284\") " pod="service-telemetry/default-interconnect-55bf8d5cb-64vfp" Dec 12 15:37:54 crc kubenswrapper[5123]: I1212 15:37:54.627400 5123 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"default-interconnect-inter-router-ca\" (UniqueName: 
\"kubernetes.io/secret/dc068209-e8bf-4d2e-a370-3dec1bba5284-default-interconnect-inter-router-ca\") pod \"default-interconnect-55bf8d5cb-64vfp\" (UID: \"dc068209-e8bf-4d2e-a370-3dec1bba5284\") " pod="service-telemetry/default-interconnect-55bf8d5cb-64vfp" Dec 12 15:37:54 crc kubenswrapper[5123]: I1212 15:37:54.634144 5123 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"default-interconnect-openstack-ca\" (UniqueName: \"kubernetes.io/secret/dc068209-e8bf-4d2e-a370-3dec1bba5284-default-interconnect-openstack-ca\") pod \"default-interconnect-55bf8d5cb-64vfp\" (UID: \"dc068209-e8bf-4d2e-a370-3dec1bba5284\") " pod="service-telemetry/default-interconnect-55bf8d5cb-64vfp" Dec 12 15:37:54 crc kubenswrapper[5123]: I1212 15:37:54.644646 5123 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"default-interconnect-inter-router-credentials\" (UniqueName: \"kubernetes.io/secret/dc068209-e8bf-4d2e-a370-3dec1bba5284-default-interconnect-inter-router-credentials\") pod \"default-interconnect-55bf8d5cb-64vfp\" (UID: \"dc068209-e8bf-4d2e-a370-3dec1bba5284\") " pod="service-telemetry/default-interconnect-55bf8d5cb-64vfp" Dec 12 15:37:54 crc kubenswrapper[5123]: I1212 15:37:54.652032 5123 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-rvwxl\" (UniqueName: \"kubernetes.io/projected/dc068209-e8bf-4d2e-a370-3dec1bba5284-kube-api-access-rvwxl\") pod \"default-interconnect-55bf8d5cb-64vfp\" (UID: \"dc068209-e8bf-4d2e-a370-3dec1bba5284\") " pod="service-telemetry/default-interconnect-55bf8d5cb-64vfp" Dec 12 15:37:54 crc kubenswrapper[5123]: I1212 15:37:54.697618 5123 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/default-interconnect-55bf8d5cb-64vfp" Dec 12 15:37:54 crc kubenswrapper[5123]: I1212 15:37:54.880265 5123 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-ceil-event-smartgateway-75f7dbb489-s6mq6" event={"ID":"c5c4f1a6-6160-4336-a241-db8aaa2bfc37","Type":"ContainerStarted","Data":"c6a9bbf8ce7f796e0d0afb64ce16eddca17eb6f193c4017108a03587de998808"} Dec 12 15:37:54 crc kubenswrapper[5123]: I1212 15:37:54.880810 5123 scope.go:117] "RemoveContainer" containerID="eb5362a9f548b28751f3dfff57514d3e209f18582cf82989e9f95780a7d49397" Dec 12 15:37:54 crc kubenswrapper[5123]: I1212 15:37:54.883647 5123 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-coll-meter-smartgateway-787645d794-k5f65" event={"ID":"721dacd1-a3e2-4519-956f-566484659e0e","Type":"ContainerStarted","Data":"22788e2fa6e1859157da4f9bc940984960d9bf38db996f8be77e83d1cfc18943"} Dec 12 15:37:54 crc kubenswrapper[5123]: I1212 15:37:54.884187 5123 scope.go:117] "RemoveContainer" containerID="cdf0c5cf14b406a79156f985a52d45115e643f08cf50906a1e4678bac49ce895" Dec 12 15:37:54 crc kubenswrapper[5123]: I1212 15:37:54.894746 5123 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-sens-meter-smartgateway-66d5b7c5fc-9frfx" event={"ID":"5df439b4-be33-48c1-9337-76771db0e43f","Type":"ContainerStarted","Data":"1b79a0c811f9f0393b5c8adf2276485c8cf3b378097b8a0e712e1356b9f04b73"} Dec 12 15:37:54 crc kubenswrapper[5123]: I1212 15:37:54.895642 5123 scope.go:117] "RemoveContainer" containerID="ff00c6b65d71d2402b21856da250431189b0ee2081f4c9392e65c88e435893f5" Dec 12 15:37:54 crc kubenswrapper[5123]: I1212 15:37:54.906201 5123 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-coll-event-smartgateway-64b75c6bfc-s9sd6" 
event={"ID":"88fdb60b-3b9c-492f-af79-a20a7a2c9cf9","Type":"ContainerStarted","Data":"c76c28e2078e19b5703f3c2f49a557b809c3d497fb55d94bff50d0a6d016f9f6"} Dec 12 15:37:54 crc kubenswrapper[5123]: I1212 15:37:54.906963 5123 scope.go:117] "RemoveContainer" containerID="3482cc696719cd3d8434837799ed11330cc2872ea7c026576cf52890531071a8" Dec 12 15:37:54 crc kubenswrapper[5123]: I1212 15:37:54.912026 5123 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-545b564d9f-85xc5" event={"ID":"9bfba062-362f-488e-b55f-4c32f4202fbd","Type":"ContainerStarted","Data":"797333f5559d49318a2be6d5221f5f8f584edfbc00d4358d97e3869e039eb92a"} Dec 12 15:37:54 crc kubenswrapper[5123]: I1212 15:37:54.913589 5123 scope.go:117] "RemoveContainer" containerID="4b693feca7fed22cf10da5c233fc58167c667ec36c3d9d702dbff840b4de9a2c" Dec 12 15:37:54 crc kubenswrapper[5123]: I1212 15:37:54.914758 5123 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-interconnect-55bf8d5cb-cdkq7" event={"ID":"1409556c-b0dc-4a13-803c-aa74f048c7a2","Type":"ContainerDied","Data":"27cc93e95759c5f4dae19a679f87c2fe8de663aa5921311d468c538f541ea954"} Dec 12 15:37:54 crc kubenswrapper[5123]: I1212 15:37:54.914814 5123 scope.go:117] "RemoveContainer" containerID="ce52f4548cc859977688e4ed3d183c58dbc2319fe85bbea75ab9d50ba7a8fc19" Dec 12 15:37:54 crc kubenswrapper[5123]: I1212 15:37:54.914958 5123 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/default-interconnect-55bf8d5cb-cdkq7" Dec 12 15:37:54 crc kubenswrapper[5123]: I1212 15:37:54.961671 5123 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/default-interconnect-55bf8d5cb-64vfp"] Dec 12 15:37:54 crc kubenswrapper[5123]: W1212 15:37:54.966889 5123 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poddc068209_e8bf_4d2e_a370_3dec1bba5284.slice/crio-1252f2453274a04f3173a0b1b6728b22871f417eeca4388607c9ad832a0a35eb WatchSource:0}: Error finding container 1252f2453274a04f3173a0b1b6728b22871f417eeca4388607c9ad832a0a35eb: Status 404 returned error can't find the container with id 1252f2453274a04f3173a0b1b6728b22871f417eeca4388607c9ad832a0a35eb Dec 12 15:37:54 crc kubenswrapper[5123]: I1212 15:37:54.999620 5123 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["service-telemetry/default-interconnect-55bf8d5cb-cdkq7"] Dec 12 15:37:55 crc kubenswrapper[5123]: I1212 15:37:55.009290 5123 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["service-telemetry/default-interconnect-55bf8d5cb-cdkq7"] Dec 12 15:37:55 crc kubenswrapper[5123]: I1212 15:37:55.652671 5123 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1409556c-b0dc-4a13-803c-aa74f048c7a2" path="/var/lib/kubelet/pods/1409556c-b0dc-4a13-803c-aa74f048c7a2/volumes" Dec 12 15:37:55 crc kubenswrapper[5123]: I1212 15:37:55.924724 5123 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-545b564d9f-85xc5" event={"ID":"9bfba062-362f-488e-b55f-4c32f4202fbd","Type":"ContainerStarted","Data":"3cdda28e649b6a3800e82bbabed0f75002af28c59a74927a6218452bea791b5e"} Dec 12 15:37:55 crc kubenswrapper[5123]: I1212 15:37:55.927964 5123 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-ceil-event-smartgateway-75f7dbb489-s6mq6" 
event={"ID":"c5c4f1a6-6160-4336-a241-db8aaa2bfc37","Type":"ContainerStarted","Data":"86cfa5c30361cbd03e2e58487d7f6db8c5d1633af4f82c1baaf24940c9ca1931"} Dec 12 15:37:55 crc kubenswrapper[5123]: I1212 15:37:55.929013 5123 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-interconnect-55bf8d5cb-64vfp" event={"ID":"dc068209-e8bf-4d2e-a370-3dec1bba5284","Type":"ContainerStarted","Data":"f49e295643ff762d002733ee4b6944ca35249e88ed083ea17addda214bd65470"} Dec 12 15:37:55 crc kubenswrapper[5123]: I1212 15:37:55.929161 5123 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-interconnect-55bf8d5cb-64vfp" event={"ID":"dc068209-e8bf-4d2e-a370-3dec1bba5284","Type":"ContainerStarted","Data":"1252f2453274a04f3173a0b1b6728b22871f417eeca4388607c9ad832a0a35eb"} Dec 12 15:37:55 crc kubenswrapper[5123]: I1212 15:37:55.931346 5123 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-coll-meter-smartgateway-787645d794-k5f65" event={"ID":"721dacd1-a3e2-4519-956f-566484659e0e","Type":"ContainerStarted","Data":"b87f0e010d14531249b51cfbd1f7231c8fd34fb8532d47442e8dc7d548222de1"} Dec 12 15:37:55 crc kubenswrapper[5123]: I1212 15:37:55.934218 5123 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-sens-meter-smartgateway-66d5b7c5fc-9frfx" event={"ID":"5df439b4-be33-48c1-9337-76771db0e43f","Type":"ContainerStarted","Data":"9f40f19b6f1d7ed4aac5a4936b57d849733880ef333bdc6d0bd125dea9251ee9"} Dec 12 15:37:55 crc kubenswrapper[5123]: I1212 15:37:55.936507 5123 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-coll-event-smartgateway-64b75c6bfc-s9sd6" event={"ID":"88fdb60b-3b9c-492f-af79-a20a7a2c9cf9","Type":"ContainerStarted","Data":"6543fb78b011a27694e9023e862e4cd4b692f352b5520bf4ca22c096b8831fc2"} Dec 12 15:37:55 crc kubenswrapper[5123]: I1212 15:37:55.949036 5123 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-545b564d9f-85xc5" podStartSLOduration=13.350889655 podStartE2EDuration="38.949016145s" podCreationTimestamp="2025-12-12 15:37:17 +0000 UTC" firstStartedPulling="2025-12-12 15:37:29.842822006 +0000 UTC m=+1078.652774517" lastFinishedPulling="2025-12-12 15:37:55.440948496 +0000 UTC m=+1104.250901007" observedRunningTime="2025-12-12 15:37:55.946668111 +0000 UTC m=+1104.756620612" watchObservedRunningTime="2025-12-12 15:37:55.949016145 +0000 UTC m=+1104.758968666" Dec 12 15:37:55 crc kubenswrapper[5123]: I1212 15:37:55.990044 5123 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/default-cloud1-coll-meter-smartgateway-787645d794-k5f65" podStartSLOduration=4.002773082 podStartE2EDuration="44.990018988s" podCreationTimestamp="2025-12-12 15:37:11 +0000 UTC" firstStartedPulling="2025-12-12 15:37:14.378906691 +0000 UTC m=+1063.188859202" lastFinishedPulling="2025-12-12 15:37:55.366152597 +0000 UTC m=+1104.176105108" observedRunningTime="2025-12-12 15:37:55.973481202 +0000 UTC m=+1104.783433733" watchObservedRunningTime="2025-12-12 15:37:55.990018988 +0000 UTC m=+1104.799971489" Dec 12 15:37:55 crc kubenswrapper[5123]: I1212 15:37:55.999878 5123 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/default-cloud1-ceil-event-smartgateway-75f7dbb489-s6mq6" podStartSLOduration=2.8530054 podStartE2EDuration="19.999857681s" podCreationTimestamp="2025-12-12 15:37:36 +0000 UTC" firstStartedPulling="2025-12-12 15:37:38.175292048 +0000 UTC m=+1086.985244559" lastFinishedPulling="2025-12-12 15:37:55.322144329 +0000 UTC m=+1104.132096840" observedRunningTime="2025-12-12 15:37:55.997011191 +0000 UTC m=+1104.806963692" watchObservedRunningTime="2025-12-12 15:37:55.999857681 +0000 UTC m=+1104.809810192" Dec 12 15:37:56 crc kubenswrapper[5123]: I1212 15:37:56.023879 5123 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="service-telemetry/default-cloud1-sens-meter-smartgateway-66d5b7c5fc-9frfx" podStartSLOduration=5.427652366 podStartE2EDuration="31.023858044s" podCreationTimestamp="2025-12-12 15:37:25 +0000 UTC" firstStartedPulling="2025-12-12 15:37:29.84136557 +0000 UTC m=+1078.651318081" lastFinishedPulling="2025-12-12 15:37:55.437571248 +0000 UTC m=+1104.247523759" observedRunningTime="2025-12-12 15:37:56.017849114 +0000 UTC m=+1104.827801665" watchObservedRunningTime="2025-12-12 15:37:56.023858044 +0000 UTC m=+1104.833810555" Dec 12 15:37:56 crc kubenswrapper[5123]: I1212 15:37:56.045714 5123 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/default-cloud1-coll-event-smartgateway-64b75c6bfc-s9sd6" podStartSLOduration=3.123471995 podStartE2EDuration="22.045692899s" podCreationTimestamp="2025-12-12 15:37:34 +0000 UTC" firstStartedPulling="2025-12-12 15:37:36.617437299 +0000 UTC m=+1085.427389810" lastFinishedPulling="2025-12-12 15:37:55.539658203 +0000 UTC m=+1104.349610714" observedRunningTime="2025-12-12 15:37:56.043828949 +0000 UTC m=+1104.853781470" watchObservedRunningTime="2025-12-12 15:37:56.045692899 +0000 UTC m=+1104.855645410" Dec 12 15:37:56 crc kubenswrapper[5123]: I1212 15:37:56.071374 5123 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/default-interconnect-55bf8d5cb-64vfp" podStartSLOduration=5.071350053 podStartE2EDuration="5.071350053s" podCreationTimestamp="2025-12-12 15:37:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 15:37:56.070507247 +0000 UTC m=+1104.880459768" watchObservedRunningTime="2025-12-12 15:37:56.071350053 +0000 UTC m=+1104.881302564" Dec 12 15:37:56 crc kubenswrapper[5123]: I1212 15:37:56.946797 5123 generic.go:358] "Generic (PLEG): container finished" podID="c5c4f1a6-6160-4336-a241-db8aaa2bfc37" 
containerID="86cfa5c30361cbd03e2e58487d7f6db8c5d1633af4f82c1baaf24940c9ca1931" exitCode=0 Dec 12 15:37:56 crc kubenswrapper[5123]: I1212 15:37:56.946899 5123 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-ceil-event-smartgateway-75f7dbb489-s6mq6" event={"ID":"c5c4f1a6-6160-4336-a241-db8aaa2bfc37","Type":"ContainerDied","Data":"86cfa5c30361cbd03e2e58487d7f6db8c5d1633af4f82c1baaf24940c9ca1931"} Dec 12 15:37:56 crc kubenswrapper[5123]: I1212 15:37:56.946977 5123 scope.go:117] "RemoveContainer" containerID="eb5362a9f548b28751f3dfff57514d3e209f18582cf82989e9f95780a7d49397" Dec 12 15:37:56 crc kubenswrapper[5123]: I1212 15:37:56.947597 5123 scope.go:117] "RemoveContainer" containerID="86cfa5c30361cbd03e2e58487d7f6db8c5d1633af4f82c1baaf24940c9ca1931" Dec 12 15:37:56 crc kubenswrapper[5123]: E1212 15:37:56.948176 5123 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"bridge\" with CrashLoopBackOff: \"back-off 10s restarting failed container=bridge pod=default-cloud1-ceil-event-smartgateway-75f7dbb489-s6mq6_service-telemetry(c5c4f1a6-6160-4336-a241-db8aaa2bfc37)\"" pod="service-telemetry/default-cloud1-ceil-event-smartgateway-75f7dbb489-s6mq6" podUID="c5c4f1a6-6160-4336-a241-db8aaa2bfc37" Dec 12 15:37:56 crc kubenswrapper[5123]: I1212 15:37:56.950520 5123 generic.go:358] "Generic (PLEG): container finished" podID="721dacd1-a3e2-4519-956f-566484659e0e" containerID="b87f0e010d14531249b51cfbd1f7231c8fd34fb8532d47442e8dc7d548222de1" exitCode=0 Dec 12 15:37:56 crc kubenswrapper[5123]: I1212 15:37:56.950646 5123 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-coll-meter-smartgateway-787645d794-k5f65" event={"ID":"721dacd1-a3e2-4519-956f-566484659e0e","Type":"ContainerDied","Data":"b87f0e010d14531249b51cfbd1f7231c8fd34fb8532d47442e8dc7d548222de1"} Dec 12 15:37:56 crc kubenswrapper[5123]: I1212 15:37:56.951139 5123 scope.go:117] "RemoveContainer" 
containerID="b87f0e010d14531249b51cfbd1f7231c8fd34fb8532d47442e8dc7d548222de1" Dec 12 15:37:56 crc kubenswrapper[5123]: E1212 15:37:56.951521 5123 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"bridge\" with CrashLoopBackOff: \"back-off 10s restarting failed container=bridge pod=default-cloud1-coll-meter-smartgateway-787645d794-k5f65_service-telemetry(721dacd1-a3e2-4519-956f-566484659e0e)\"" pod="service-telemetry/default-cloud1-coll-meter-smartgateway-787645d794-k5f65" podUID="721dacd1-a3e2-4519-956f-566484659e0e" Dec 12 15:37:56 crc kubenswrapper[5123]: I1212 15:37:56.955408 5123 generic.go:358] "Generic (PLEG): container finished" podID="5df439b4-be33-48c1-9337-76771db0e43f" containerID="9f40f19b6f1d7ed4aac5a4936b57d849733880ef333bdc6d0bd125dea9251ee9" exitCode=0 Dec 12 15:37:56 crc kubenswrapper[5123]: I1212 15:37:56.955553 5123 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-sens-meter-smartgateway-66d5b7c5fc-9frfx" event={"ID":"5df439b4-be33-48c1-9337-76771db0e43f","Type":"ContainerDied","Data":"9f40f19b6f1d7ed4aac5a4936b57d849733880ef333bdc6d0bd125dea9251ee9"} Dec 12 15:37:56 crc kubenswrapper[5123]: I1212 15:37:56.956321 5123 scope.go:117] "RemoveContainer" containerID="9f40f19b6f1d7ed4aac5a4936b57d849733880ef333bdc6d0bd125dea9251ee9" Dec 12 15:37:56 crc kubenswrapper[5123]: E1212 15:37:56.956829 5123 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"bridge\" with CrashLoopBackOff: \"back-off 10s restarting failed container=bridge pod=default-cloud1-sens-meter-smartgateway-66d5b7c5fc-9frfx_service-telemetry(5df439b4-be33-48c1-9337-76771db0e43f)\"" pod="service-telemetry/default-cloud1-sens-meter-smartgateway-66d5b7c5fc-9frfx" podUID="5df439b4-be33-48c1-9337-76771db0e43f" Dec 12 15:37:56 crc kubenswrapper[5123]: I1212 15:37:56.958836 5123 generic.go:358] "Generic (PLEG): container finished" podID="9bfba062-362f-488e-b55f-4c32f4202fbd" 
containerID="3cdda28e649b6a3800e82bbabed0f75002af28c59a74927a6218452bea791b5e" exitCode=0 Dec 12 15:37:56 crc kubenswrapper[5123]: I1212 15:37:56.959088 5123 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-545b564d9f-85xc5" event={"ID":"9bfba062-362f-488e-b55f-4c32f4202fbd","Type":"ContainerDied","Data":"3cdda28e649b6a3800e82bbabed0f75002af28c59a74927a6218452bea791b5e"} Dec 12 15:37:56 crc kubenswrapper[5123]: I1212 15:37:56.960527 5123 scope.go:117] "RemoveContainer" containerID="3cdda28e649b6a3800e82bbabed0f75002af28c59a74927a6218452bea791b5e" Dec 12 15:37:56 crc kubenswrapper[5123]: E1212 15:37:56.960860 5123 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"bridge\" with CrashLoopBackOff: \"back-off 10s restarting failed container=bridge pod=default-cloud1-ceil-meter-smartgateway-545b564d9f-85xc5_service-telemetry(9bfba062-362f-488e-b55f-4c32f4202fbd)\"" pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-545b564d9f-85xc5" podUID="9bfba062-362f-488e-b55f-4c32f4202fbd" Dec 12 15:37:57 crc kubenswrapper[5123]: I1212 15:37:57.543630 5123 scope.go:117] "RemoveContainer" containerID="cdf0c5cf14b406a79156f985a52d45115e643f08cf50906a1e4678bac49ce895" Dec 12 15:37:57 crc kubenswrapper[5123]: I1212 15:37:57.594833 5123 scope.go:117] "RemoveContainer" containerID="ff00c6b65d71d2402b21856da250431189b0ee2081f4c9392e65c88e435893f5" Dec 12 15:37:57 crc kubenswrapper[5123]: I1212 15:37:57.639242 5123 scope.go:117] "RemoveContainer" containerID="4b693feca7fed22cf10da5c233fc58167c667ec36c3d9d702dbff840b4de9a2c" Dec 12 15:37:57 crc kubenswrapper[5123]: I1212 15:37:57.990341 5123 scope.go:117] "RemoveContainer" containerID="86cfa5c30361cbd03e2e58487d7f6db8c5d1633af4f82c1baaf24940c9ca1931" Dec 12 15:37:57 crc kubenswrapper[5123]: E1212 15:37:57.990681 5123 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"bridge\" with 
CrashLoopBackOff: \"back-off 10s restarting failed container=bridge pod=default-cloud1-ceil-event-smartgateway-75f7dbb489-s6mq6_service-telemetry(c5c4f1a6-6160-4336-a241-db8aaa2bfc37)\"" pod="service-telemetry/default-cloud1-ceil-event-smartgateway-75f7dbb489-s6mq6" podUID="c5c4f1a6-6160-4336-a241-db8aaa2bfc37" Dec 12 15:37:57 crc kubenswrapper[5123]: I1212 15:37:57.997354 5123 scope.go:117] "RemoveContainer" containerID="b87f0e010d14531249b51cfbd1f7231c8fd34fb8532d47442e8dc7d548222de1" Dec 12 15:37:57 crc kubenswrapper[5123]: E1212 15:37:57.997831 5123 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"bridge\" with CrashLoopBackOff: \"back-off 10s restarting failed container=bridge pod=default-cloud1-coll-meter-smartgateway-787645d794-k5f65_service-telemetry(721dacd1-a3e2-4519-956f-566484659e0e)\"" pod="service-telemetry/default-cloud1-coll-meter-smartgateway-787645d794-k5f65" podUID="721dacd1-a3e2-4519-956f-566484659e0e" Dec 12 15:37:58 crc kubenswrapper[5123]: I1212 15:37:58.002754 5123 scope.go:117] "RemoveContainer" containerID="9f40f19b6f1d7ed4aac5a4936b57d849733880ef333bdc6d0bd125dea9251ee9" Dec 12 15:37:58 crc kubenswrapper[5123]: E1212 15:37:58.003163 5123 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"bridge\" with CrashLoopBackOff: \"back-off 10s restarting failed container=bridge pod=default-cloud1-sens-meter-smartgateway-66d5b7c5fc-9frfx_service-telemetry(5df439b4-be33-48c1-9337-76771db0e43f)\"" pod="service-telemetry/default-cloud1-sens-meter-smartgateway-66d5b7c5fc-9frfx" podUID="5df439b4-be33-48c1-9337-76771db0e43f" Dec 12 15:37:58 crc kubenswrapper[5123]: I1212 15:37:58.005042 5123 generic.go:358] "Generic (PLEG): container finished" podID="88fdb60b-3b9c-492f-af79-a20a7a2c9cf9" containerID="6543fb78b011a27694e9023e862e4cd4b692f352b5520bf4ca22c096b8831fc2" exitCode=0 Dec 12 15:37:58 crc kubenswrapper[5123]: I1212 15:37:58.005292 5123 kubelet.go:2569] "SyncLoop 
(PLEG): event for pod" pod="service-telemetry/default-cloud1-coll-event-smartgateway-64b75c6bfc-s9sd6" event={"ID":"88fdb60b-3b9c-492f-af79-a20a7a2c9cf9","Type":"ContainerDied","Data":"6543fb78b011a27694e9023e862e4cd4b692f352b5520bf4ca22c096b8831fc2"} Dec 12 15:37:58 crc kubenswrapper[5123]: I1212 15:37:58.005339 5123 scope.go:117] "RemoveContainer" containerID="3482cc696719cd3d8434837799ed11330cc2872ea7c026576cf52890531071a8" Dec 12 15:37:58 crc kubenswrapper[5123]: I1212 15:37:58.005952 5123 scope.go:117] "RemoveContainer" containerID="6543fb78b011a27694e9023e862e4cd4b692f352b5520bf4ca22c096b8831fc2" Dec 12 15:37:58 crc kubenswrapper[5123]: E1212 15:37:58.006462 5123 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"bridge\" with CrashLoopBackOff: \"back-off 10s restarting failed container=bridge pod=default-cloud1-coll-event-smartgateway-64b75c6bfc-s9sd6_service-telemetry(88fdb60b-3b9c-492f-af79-a20a7a2c9cf9)\"" pod="service-telemetry/default-cloud1-coll-event-smartgateway-64b75c6bfc-s9sd6" podUID="88fdb60b-3b9c-492f-af79-a20a7a2c9cf9" Dec 12 15:37:58 crc kubenswrapper[5123]: I1212 15:37:58.011509 5123 scope.go:117] "RemoveContainer" containerID="3cdda28e649b6a3800e82bbabed0f75002af28c59a74927a6218452bea791b5e" Dec 12 15:37:58 crc kubenswrapper[5123]: E1212 15:37:58.011903 5123 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"bridge\" with CrashLoopBackOff: \"back-off 10s restarting failed container=bridge pod=default-cloud1-ceil-meter-smartgateway-545b564d9f-85xc5_service-telemetry(9bfba062-362f-488e-b55f-4c32f4202fbd)\"" pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-545b564d9f-85xc5" podUID="9bfba062-362f-488e-b55f-4c32f4202fbd" Dec 12 15:38:00 crc kubenswrapper[5123]: I1212 15:38:00.902206 5123 patch_prober.go:28] interesting pod/machine-config-daemon-cs4j6 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure 
output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 12 15:38:00 crc kubenswrapper[5123]: I1212 15:38:00.902823 5123 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-cs4j6" podUID="cc1dd55e-ab31-4be3-acf1-ef01a53c1bb4" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 12 15:38:09 crc kubenswrapper[5123]: I1212 15:38:09.640926 5123 scope.go:117] "RemoveContainer" containerID="86cfa5c30361cbd03e2e58487d7f6db8c5d1633af4f82c1baaf24940c9ca1931" Dec 12 15:38:09 crc kubenswrapper[5123]: I1212 15:38:09.641818 5123 scope.go:117] "RemoveContainer" containerID="6543fb78b011a27694e9023e862e4cd4b692f352b5520bf4ca22c096b8831fc2" Dec 12 15:38:10 crc kubenswrapper[5123]: I1212 15:38:10.641836 5123 scope.go:117] "RemoveContainer" containerID="b87f0e010d14531249b51cfbd1f7231c8fd34fb8532d47442e8dc7d548222de1" Dec 12 15:38:11 crc kubenswrapper[5123]: I1212 15:38:11.479970 5123 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-coll-event-smartgateway-64b75c6bfc-s9sd6" event={"ID":"88fdb60b-3b9c-492f-af79-a20a7a2c9cf9","Type":"ContainerStarted","Data":"f93be56b3df7ed3adaec20fb7a524e256d0df862457246486b373d66fac6808f"} Dec 12 15:38:11 crc kubenswrapper[5123]: I1212 15:38:11.484294 5123 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-ceil-event-smartgateway-75f7dbb489-s6mq6" event={"ID":"c5c4f1a6-6160-4336-a241-db8aaa2bfc37","Type":"ContainerStarted","Data":"9e250f244da5fe855b5bc590ef6a1e2462fc26931be827a30a9bd12f85f97f4b"} Dec 12 15:38:11 crc kubenswrapper[5123]: I1212 15:38:11.488764 5123 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-coll-meter-smartgateway-787645d794-k5f65" 
event={"ID":"721dacd1-a3e2-4519-956f-566484659e0e","Type":"ContainerStarted","Data":"e9f64daa503953f173030e01d95bafacad87ff1106f458d1df4dc3704ee76ad0"} Dec 12 15:38:13 crc kubenswrapper[5123]: I1212 15:38:13.640300 5123 scope.go:117] "RemoveContainer" containerID="9f40f19b6f1d7ed4aac5a4936b57d849733880ef333bdc6d0bd125dea9251ee9" Dec 12 15:38:13 crc kubenswrapper[5123]: I1212 15:38:13.641969 5123 scope.go:117] "RemoveContainer" containerID="3cdda28e649b6a3800e82bbabed0f75002af28c59a74927a6218452bea791b5e" Dec 12 15:38:15 crc kubenswrapper[5123]: I1212 15:38:15.705125 5123 ???:1] "http: TLS handshake error from 192.168.126.11:37510: no serving certificate available for the kubelet" Dec 12 15:38:16 crc kubenswrapper[5123]: I1212 15:38:16.125404 5123 ???:1] "http: TLS handshake error from 192.168.126.11:44946: no serving certificate available for the kubelet" Dec 12 15:38:16 crc kubenswrapper[5123]: I1212 15:38:16.732110 5123 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-sens-meter-smartgateway-66d5b7c5fc-9frfx" event={"ID":"5df439b4-be33-48c1-9337-76771db0e43f","Type":"ContainerStarted","Data":"57741d99152bd100cca1fc89079234238ad9da65a3fb37d2518409c07511bd2c"} Dec 12 15:38:16 crc kubenswrapper[5123]: I1212 15:38:16.737939 5123 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-545b564d9f-85xc5" event={"ID":"9bfba062-362f-488e-b55f-4c32f4202fbd","Type":"ContainerStarted","Data":"5979e7678c2a5996f698d6c14e06abc31cfb91e336bf8bdf9d3e365f4ba4179c"} Dec 12 15:38:30 crc kubenswrapper[5123]: I1212 15:38:30.902634 5123 patch_prober.go:28] interesting pod/machine-config-daemon-cs4j6 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 12 15:38:30 crc kubenswrapper[5123]: I1212 15:38:30.903403 5123 
prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-cs4j6" podUID="cc1dd55e-ab31-4be3-acf1-ef01a53c1bb4" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 12 15:38:30 crc kubenswrapper[5123]: I1212 15:38:30.903523 5123 kubelet.go:2658] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-cs4j6" Dec 12 15:38:30 crc kubenswrapper[5123]: I1212 15:38:30.904443 5123 kuberuntime_manager.go:1107] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"c2cf9081a67059ac5a079b8f43fd2aed11cbd262496baea709c4ede2e91cdc0e"} pod="openshift-machine-config-operator/machine-config-daemon-cs4j6" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 12 15:38:30 crc kubenswrapper[5123]: I1212 15:38:30.904500 5123 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-cs4j6" podUID="cc1dd55e-ab31-4be3-acf1-ef01a53c1bb4" containerName="machine-config-daemon" containerID="cri-o://c2cf9081a67059ac5a079b8f43fd2aed11cbd262496baea709c4ede2e91cdc0e" gracePeriod=600 Dec 12 15:38:31 crc kubenswrapper[5123]: I1212 15:38:31.452911 5123 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-cs4j6" event={"ID":"cc1dd55e-ab31-4be3-acf1-ef01a53c1bb4","Type":"ContainerDied","Data":"c2cf9081a67059ac5a079b8f43fd2aed11cbd262496baea709c4ede2e91cdc0e"} Dec 12 15:38:31 crc kubenswrapper[5123]: I1212 15:38:31.453013 5123 scope.go:117] "RemoveContainer" containerID="b8b31bee9a490187d699071ec78132456a8a603d815d3195aabc642b4b346b89" Dec 12 15:38:31 crc kubenswrapper[5123]: I1212 15:38:31.452842 5123 generic.go:358] "Generic (PLEG): container finished" 
podID="cc1dd55e-ab31-4be3-acf1-ef01a53c1bb4" containerID="c2cf9081a67059ac5a079b8f43fd2aed11cbd262496baea709c4ede2e91cdc0e" exitCode=0 Dec 12 15:38:32 crc kubenswrapper[5123]: I1212 15:38:32.465212 5123 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-cs4j6" event={"ID":"cc1dd55e-ab31-4be3-acf1-ef01a53c1bb4","Type":"ContainerStarted","Data":"83cd9799bca9d398afc04a6802de0b3b4e904da201b7f8be51bf382c9e373922"} Dec 12 15:38:42 crc kubenswrapper[5123]: I1212 15:38:42.226063 5123 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-must-gather-g7l9q/must-gather-n767c"] Dec 12 15:38:42 crc kubenswrapper[5123]: I1212 15:38:42.234661 5123 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-g7l9q/must-gather-n767c" Dec 12 15:38:42 crc kubenswrapper[5123]: I1212 15:38:42.237079 5123 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-must-gather-g7l9q\"/\"openshift-service-ca.crt\"" Dec 12 15:38:42 crc kubenswrapper[5123]: I1212 15:38:42.242171 5123 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-must-gather-g7l9q\"/\"default-dockercfg-f8cn9\"" Dec 12 15:38:42 crc kubenswrapper[5123]: I1212 15:38:42.242302 5123 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-must-gather-g7l9q\"/\"kube-root-ca.crt\"" Dec 12 15:38:42 crc kubenswrapper[5123]: I1212 15:38:42.249794 5123 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-g7l9q/must-gather-n767c"] Dec 12 15:38:42 crc kubenswrapper[5123]: I1212 15:38:42.546969 5123 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/3c6003a4-c201-4867-bba6-0a8d4c275b40-must-gather-output\") pod \"must-gather-n767c\" (UID: \"3c6003a4-c201-4867-bba6-0a8d4c275b40\") " 
pod="openshift-must-gather-g7l9q/must-gather-n767c" Dec 12 15:38:42 crc kubenswrapper[5123]: I1212 15:38:42.547038 5123 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8sk8f\" (UniqueName: \"kubernetes.io/projected/3c6003a4-c201-4867-bba6-0a8d4c275b40-kube-api-access-8sk8f\") pod \"must-gather-n767c\" (UID: \"3c6003a4-c201-4867-bba6-0a8d4c275b40\") " pod="openshift-must-gather-g7l9q/must-gather-n767c" Dec 12 15:38:42 crc kubenswrapper[5123]: I1212 15:38:42.648096 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/3c6003a4-c201-4867-bba6-0a8d4c275b40-must-gather-output\") pod \"must-gather-n767c\" (UID: \"3c6003a4-c201-4867-bba6-0a8d4c275b40\") " pod="openshift-must-gather-g7l9q/must-gather-n767c" Dec 12 15:38:42 crc kubenswrapper[5123]: I1212 15:38:42.648145 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-8sk8f\" (UniqueName: \"kubernetes.io/projected/3c6003a4-c201-4867-bba6-0a8d4c275b40-kube-api-access-8sk8f\") pod \"must-gather-n767c\" (UID: \"3c6003a4-c201-4867-bba6-0a8d4c275b40\") " pod="openshift-must-gather-g7l9q/must-gather-n767c" Dec 12 15:38:42 crc kubenswrapper[5123]: I1212 15:38:42.648552 5123 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/3c6003a4-c201-4867-bba6-0a8d4c275b40-must-gather-output\") pod \"must-gather-n767c\" (UID: \"3c6003a4-c201-4867-bba6-0a8d4c275b40\") " pod="openshift-must-gather-g7l9q/must-gather-n767c" Dec 12 15:38:42 crc kubenswrapper[5123]: I1212 15:38:42.684966 5123 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-8sk8f\" (UniqueName: \"kubernetes.io/projected/3c6003a4-c201-4867-bba6-0a8d4c275b40-kube-api-access-8sk8f\") pod \"must-gather-n767c\" (UID: \"3c6003a4-c201-4867-bba6-0a8d4c275b40\") " 
pod="openshift-must-gather-g7l9q/must-gather-n767c" Dec 12 15:38:42 crc kubenswrapper[5123]: I1212 15:38:42.857033 5123 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-g7l9q/must-gather-n767c" Dec 12 15:38:43 crc kubenswrapper[5123]: I1212 15:38:43.218604 5123 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-g7l9q/must-gather-n767c"] Dec 12 15:38:43 crc kubenswrapper[5123]: I1212 15:38:43.550985 5123 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-g7l9q/must-gather-n767c" event={"ID":"3c6003a4-c201-4867-bba6-0a8d4c275b40","Type":"ContainerStarted","Data":"aab06fa8586bfbecf1379e8e85943781f76230da7094e9136dcf9d2b7e2785bb"} Dec 12 15:38:51 crc kubenswrapper[5123]: I1212 15:38:51.063964 5123 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/infrawatch-operators-jvfhs"] Dec 12 15:38:51 crc kubenswrapper[5123]: I1212 15:38:51.198241 5123 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/infrawatch-operators-jvfhs"] Dec 12 15:38:51 crc kubenswrapper[5123]: I1212 15:38:51.198509 5123 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/infrawatch-operators-jvfhs"
Dec 12 15:38:51 crc kubenswrapper[5123]: I1212 15:38:51.377633 5123 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-67dpq\" (UniqueName: \"kubernetes.io/projected/b50e14fa-5079-4c0c-968d-2fdf4f42b633-kube-api-access-67dpq\") pod \"infrawatch-operators-jvfhs\" (UID: \"b50e14fa-5079-4c0c-968d-2fdf4f42b633\") " pod="service-telemetry/infrawatch-operators-jvfhs"
Dec 12 15:38:51 crc kubenswrapper[5123]: I1212 15:38:51.479834 5123 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-67dpq\" (UniqueName: \"kubernetes.io/projected/b50e14fa-5079-4c0c-968d-2fdf4f42b633-kube-api-access-67dpq\") pod \"infrawatch-operators-jvfhs\" (UID: \"b50e14fa-5079-4c0c-968d-2fdf4f42b633\") " pod="service-telemetry/infrawatch-operators-jvfhs"
Dec 12 15:38:51 crc kubenswrapper[5123]: I1212 15:38:51.511763 5123 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-67dpq\" (UniqueName: \"kubernetes.io/projected/b50e14fa-5079-4c0c-968d-2fdf4f42b633-kube-api-access-67dpq\") pod \"infrawatch-operators-jvfhs\" (UID: \"b50e14fa-5079-4c0c-968d-2fdf4f42b633\") " pod="service-telemetry/infrawatch-operators-jvfhs"
Dec 12 15:38:51 crc kubenswrapper[5123]: I1212 15:38:51.515890 5123 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/infrawatch-operators-jvfhs"
Dec 12 15:38:51 crc kubenswrapper[5123]: I1212 15:38:51.808034 5123 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-g7l9q/must-gather-n767c" event={"ID":"3c6003a4-c201-4867-bba6-0a8d4c275b40","Type":"ContainerStarted","Data":"f770c6ceac1ea7d93be38233c074f022b1c2d1528fb9f993e774ba61970748ad"}
Dec 12 15:38:51 crc kubenswrapper[5123]: I1212 15:38:51.808454 5123 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-g7l9q/must-gather-n767c" event={"ID":"3c6003a4-c201-4867-bba6-0a8d4c275b40","Type":"ContainerStarted","Data":"0e56db2ba140453666facd1f117d2129a0f0ca66714258f7eb17fff373a75bf5"}
Dec 12 15:38:51 crc kubenswrapper[5123]: I1212 15:38:51.828124 5123 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-g7l9q/must-gather-n767c" podStartSLOduration=2.49624084 podStartE2EDuration="9.828079876s" podCreationTimestamp="2025-12-12 15:38:42 +0000 UTC" firstStartedPulling="2025-12-12 15:38:43.229852958 +0000 UTC m=+1152.039805469" lastFinishedPulling="2025-12-12 15:38:50.561691994 +0000 UTC m=+1159.371644505" observedRunningTime="2025-12-12 15:38:51.825119321 +0000 UTC m=+1160.635071852" watchObservedRunningTime="2025-12-12 15:38:51.828079876 +0000 UTC m=+1160.638032387"
Dec 12 15:38:51 crc kubenswrapper[5123]: I1212 15:38:51.986272 5123 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/infrawatch-operators-jvfhs"]
Dec 12 15:38:52 crc kubenswrapper[5123]: W1212 15:38:52.000128 5123 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb50e14fa_5079_4c0c_968d_2fdf4f42b633.slice/crio-06bbba53cfd01ade721878271a05f23c1dfbc2a73348673774e9962b86a8de08 WatchSource:0}: Error finding container 06bbba53cfd01ade721878271a05f23c1dfbc2a73348673774e9962b86a8de08: Status 404 returned error can't find the container with id 06bbba53cfd01ade721878271a05f23c1dfbc2a73348673774e9962b86a8de08
Dec 12 15:38:52 crc kubenswrapper[5123]: I1212 15:38:52.157081 5123 ???:1] "http: TLS handshake error from 192.168.126.11:38604: no serving certificate available for the kubelet"
Dec 12 15:38:52 crc kubenswrapper[5123]: I1212 15:38:52.216839 5123 ???:1] "http: TLS handshake error from 192.168.126.11:38606: no serving certificate available for the kubelet"
Dec 12 15:38:52 crc kubenswrapper[5123]: I1212 15:38:52.819323 5123 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/infrawatch-operators-jvfhs" event={"ID":"b50e14fa-5079-4c0c-968d-2fdf4f42b633","Type":"ContainerStarted","Data":"06bbba53cfd01ade721878271a05f23c1dfbc2a73348673774e9962b86a8de08"}
Dec 12 15:38:53 crc kubenswrapper[5123]: I1212 15:38:53.828809 5123 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/infrawatch-operators-jvfhs" event={"ID":"b50e14fa-5079-4c0c-968d-2fdf4f42b633","Type":"ContainerStarted","Data":"0117434eb97ebcf706348253aced063e0679ae25d9d72995cf01210f86e60f80"}
Dec 12 15:38:53 crc kubenswrapper[5123]: I1212 15:38:53.852213 5123 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/infrawatch-operators-jvfhs" podStartSLOduration=2.174787581 podStartE2EDuration="2.852194857s" podCreationTimestamp="2025-12-12 15:38:51 +0000 UTC" firstStartedPulling="2025-12-12 15:38:52.002260413 +0000 UTC m=+1160.812212924" lastFinishedPulling="2025-12-12 15:38:52.679667689 +0000 UTC m=+1161.489620200" observedRunningTime="2025-12-12 15:38:53.847131305 +0000 UTC m=+1162.657083836" watchObservedRunningTime="2025-12-12 15:38:53.852194857 +0000 UTC m=+1162.662147368"
Dec 12 15:39:01 crc kubenswrapper[5123]: I1212 15:39:01.516261 5123 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="service-telemetry/infrawatch-operators-jvfhs"
Dec 12 15:39:01 crc kubenswrapper[5123]: I1212 15:39:01.517144 5123 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="service-telemetry/infrawatch-operators-jvfhs"
Dec 12 15:39:01 crc kubenswrapper[5123]: I1212 15:39:01.560494 5123 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="service-telemetry/infrawatch-operators-jvfhs"
Dec 12 15:39:01 crc kubenswrapper[5123]: I1212 15:39:01.983740 5123 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="service-telemetry/infrawatch-operators-jvfhs"
Dec 12 15:39:02 crc kubenswrapper[5123]: I1212 15:39:02.118895 5123 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["service-telemetry/infrawatch-operators-jvfhs"]
Dec 12 15:39:03 crc kubenswrapper[5123]: I1212 15:39:03.967352 5123 kuberuntime_container.go:858] "Killing container with a grace period" pod="service-telemetry/infrawatch-operators-jvfhs" podUID="b50e14fa-5079-4c0c-968d-2fdf4f42b633" containerName="registry-server" containerID="cri-o://0117434eb97ebcf706348253aced063e0679ae25d9d72995cf01210f86e60f80" gracePeriod=2
Dec 12 15:39:04 crc kubenswrapper[5123]: I1212 15:39:04.978917 5123 generic.go:358] "Generic (PLEG): container finished" podID="b50e14fa-5079-4c0c-968d-2fdf4f42b633" containerID="0117434eb97ebcf706348253aced063e0679ae25d9d72995cf01210f86e60f80" exitCode=0
Dec 12 15:39:04 crc kubenswrapper[5123]: I1212 15:39:04.979613 5123 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/infrawatch-operators-jvfhs" event={"ID":"b50e14fa-5079-4c0c-968d-2fdf4f42b633","Type":"ContainerDied","Data":"0117434eb97ebcf706348253aced063e0679ae25d9d72995cf01210f86e60f80"}
Dec 12 15:39:05 crc kubenswrapper[5123]: I1212 15:39:05.037634 5123 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/infrawatch-operators-jvfhs"
Dec 12 15:39:05 crc kubenswrapper[5123]: I1212 15:39:05.137064 5123 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-67dpq\" (UniqueName: \"kubernetes.io/projected/b50e14fa-5079-4c0c-968d-2fdf4f42b633-kube-api-access-67dpq\") pod \"b50e14fa-5079-4c0c-968d-2fdf4f42b633\" (UID: \"b50e14fa-5079-4c0c-968d-2fdf4f42b633\") "
Dec 12 15:39:05 crc kubenswrapper[5123]: I1212 15:39:05.144592 5123 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b50e14fa-5079-4c0c-968d-2fdf4f42b633-kube-api-access-67dpq" (OuterVolumeSpecName: "kube-api-access-67dpq") pod "b50e14fa-5079-4c0c-968d-2fdf4f42b633" (UID: "b50e14fa-5079-4c0c-968d-2fdf4f42b633"). InnerVolumeSpecName "kube-api-access-67dpq". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Dec 12 15:39:05 crc kubenswrapper[5123]: I1212 15:39:05.238979 5123 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-67dpq\" (UniqueName: \"kubernetes.io/projected/b50e14fa-5079-4c0c-968d-2fdf4f42b633-kube-api-access-67dpq\") on node \"crc\" DevicePath \"\""
Dec 12 15:39:05 crc kubenswrapper[5123]: I1212 15:39:05.988786 5123 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/infrawatch-operators-jvfhs" event={"ID":"b50e14fa-5079-4c0c-968d-2fdf4f42b633","Type":"ContainerDied","Data":"06bbba53cfd01ade721878271a05f23c1dfbc2a73348673774e9962b86a8de08"}
Dec 12 15:39:05 crc kubenswrapper[5123]: I1212 15:39:05.988877 5123 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/infrawatch-operators-jvfhs"
Dec 12 15:39:05 crc kubenswrapper[5123]: I1212 15:39:05.989155 5123 scope.go:117] "RemoveContainer" containerID="0117434eb97ebcf706348253aced063e0679ae25d9d72995cf01210f86e60f80"
Dec 12 15:39:06 crc kubenswrapper[5123]: I1212 15:39:06.014423 5123 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["service-telemetry/infrawatch-operators-jvfhs"]
Dec 12 15:39:06 crc kubenswrapper[5123]: I1212 15:39:06.018975 5123 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["service-telemetry/infrawatch-operators-jvfhs"]
Dec 12 15:39:07 crc kubenswrapper[5123]: I1212 15:39:07.661394 5123 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b50e14fa-5079-4c0c-968d-2fdf4f42b633" path="/var/lib/kubelet/pods/b50e14fa-5079-4c0c-968d-2fdf4f42b633/volumes"
Dec 12 15:39:33 crc kubenswrapper[5123]: I1212 15:39:33.261776 5123 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-config-operator_openshift-config-operator-5777786469-9j9pt_2c1e4fb9-bde9-46df-8ac0-c0b457ca767f/openshift-config-operator/0.log"
Dec 12 15:39:33 crc kubenswrapper[5123]: I1212 15:39:33.261921 5123 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-config-operator_openshift-config-operator-5777786469-9j9pt_2c1e4fb9-bde9-46df-8ac0-c0b457ca767f/openshift-config-operator/0.log"
Dec 12 15:39:33 crc kubenswrapper[5123]: I1212 15:39:33.273717 5123 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-27rm2_3ef15793-fa49-4c37-a355-d4573977e301/kube-multus/0.log"
Dec 12 15:39:33 crc kubenswrapper[5123]: I1212 15:39:33.274241 5123 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-27rm2_3ef15793-fa49-4c37-a355-d4573977e301/kube-multus/0.log"
Dec 12 15:39:33 crc kubenswrapper[5123]: I1212 15:39:33.282053 5123 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_9f0bc7fcb0822a2c13eb2d22cd8c0641/kube-controller-manager/0.log"
Dec 12 15:39:33 crc kubenswrapper[5123]: I1212 15:39:33.282379 5123 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_9f0bc7fcb0822a2c13eb2d22cd8c0641/kube-controller-manager/0.log"
Dec 12 15:39:41 crc kubenswrapper[5123]: I1212 15:39:41.798013 5123 ???:1] "http: TLS handshake error from 192.168.126.11:40100: no serving certificate available for the kubelet"
Dec 12 15:39:41 crc kubenswrapper[5123]: I1212 15:39:41.952521 5123 ???:1] "http: TLS handshake error from 192.168.126.11:40104: no serving certificate available for the kubelet"
Dec 12 15:39:41 crc kubenswrapper[5123]: I1212 15:39:41.983149 5123 ???:1] "http: TLS handshake error from 192.168.126.11:40120: no serving certificate available for the kubelet"
Dec 12 15:39:56 crc kubenswrapper[5123]: I1212 15:39:56.773146 5123 ???:1] "http: TLS handshake error from 192.168.126.11:52108: no serving certificate available for the kubelet"
Dec 12 15:39:56 crc kubenswrapper[5123]: I1212 15:39:56.953383 5123 ???:1] "http: TLS handshake error from 192.168.126.11:52120: no serving certificate available for the kubelet"
Dec 12 15:39:57 crc kubenswrapper[5123]: I1212 15:39:57.014374 5123 ???:1] "http: TLS handshake error from 192.168.126.11:52134: no serving certificate available for the kubelet"
Dec 12 15:40:16 crc kubenswrapper[5123]: I1212 15:40:16.827389 5123 ???:1] "http: TLS handshake error from 192.168.126.11:44868: no serving certificate available for the kubelet"
Dec 12 15:40:17 crc kubenswrapper[5123]: I1212 15:40:17.058032 5123 ???:1] "http: TLS handshake error from 192.168.126.11:44882: no serving certificate available for the kubelet"
Dec 12 15:40:17 crc kubenswrapper[5123]: I1212 15:40:17.062818 5123 ???:1] "http: TLS handshake error from 192.168.126.11:44896: no serving certificate available for the kubelet"
Dec 12 15:40:17 crc kubenswrapper[5123]: I1212 15:40:17.095358 5123 ???:1] "http: TLS handshake error from 192.168.126.11:44898: no serving certificate available for the kubelet"
Dec 12 15:40:17 crc kubenswrapper[5123]: I1212 15:40:17.439130 5123 ???:1] "http: TLS handshake error from 192.168.126.11:44908: no serving certificate available for the kubelet"
Dec 12 15:40:17 crc kubenswrapper[5123]: I1212 15:40:17.456678 5123 ???:1] "http: TLS handshake error from 192.168.126.11:44912: no serving certificate available for the kubelet"
Dec 12 15:40:17 crc kubenswrapper[5123]: I1212 15:40:17.514089 5123 ???:1] "http: TLS handshake error from 192.168.126.11:44926: no serving certificate available for the kubelet"
Dec 12 15:40:17 crc kubenswrapper[5123]: I1212 15:40:17.671554 5123 ???:1] "http: TLS handshake error from 192.168.126.11:44930: no serving certificate available for the kubelet"
Dec 12 15:40:17 crc kubenswrapper[5123]: I1212 15:40:17.983664 5123 ???:1] "http: TLS handshake error from 192.168.126.11:44940: no serving certificate available for the kubelet"
Dec 12 15:40:17 crc kubenswrapper[5123]: I1212 15:40:17.999548 5123 ???:1] "http: TLS handshake error from 192.168.126.11:44956: no serving certificate available for the kubelet"
Dec 12 15:40:18 crc kubenswrapper[5123]: I1212 15:40:18.000535 5123 ???:1] "http: TLS handshake error from 192.168.126.11:44958: no serving certificate available for the kubelet"
Dec 12 15:40:18 crc kubenswrapper[5123]: I1212 15:40:18.216112 5123 ???:1] "http: TLS handshake error from 192.168.126.11:44978: no serving certificate available for the kubelet"
Dec 12 15:40:18 crc kubenswrapper[5123]: I1212 15:40:18.216824 5123 ???:1] "http: TLS handshake error from 192.168.126.11:44970: no serving certificate available for the kubelet"
Dec 12 15:40:18 crc kubenswrapper[5123]: I1212 15:40:18.238445 5123 ???:1] "http: TLS handshake error from 192.168.126.11:44988: no serving certificate available for the kubelet"
Dec 12 15:40:18 crc kubenswrapper[5123]: I1212 15:40:18.421066 5123 ???:1] "http: TLS handshake error from 192.168.126.11:45004: no serving certificate available for the kubelet"
Dec 12 15:40:18 crc kubenswrapper[5123]: I1212 15:40:18.664082 5123 ???:1] "http: TLS handshake error from 192.168.126.11:45016: no serving certificate available for the kubelet"
Dec 12 15:40:18 crc kubenswrapper[5123]: I1212 15:40:18.683303 5123 ???:1] "http: TLS handshake error from 192.168.126.11:45024: no serving certificate available for the kubelet"
Dec 12 15:40:18 crc kubenswrapper[5123]: I1212 15:40:18.701152 5123 ???:1] "http: TLS handshake error from 192.168.126.11:45040: no serving certificate available for the kubelet"
Dec 12 15:40:18 crc kubenswrapper[5123]: I1212 15:40:18.879073 5123 ???:1] "http: TLS handshake error from 192.168.126.11:45056: no serving certificate available for the kubelet"
Dec 12 15:40:18 crc kubenswrapper[5123]: I1212 15:40:18.918235 5123 ???:1] "http: TLS handshake error from 192.168.126.11:45082: no serving certificate available for the kubelet"
Dec 12 15:40:18 crc kubenswrapper[5123]: I1212 15:40:18.918904 5123 ???:1] "http: TLS handshake error from 192.168.126.11:45066: no serving certificate available for the kubelet"
Dec 12 15:40:19 crc kubenswrapper[5123]: I1212 15:40:19.189881 5123 ???:1] "http: TLS handshake error from 192.168.126.11:45086: no serving certificate available for the kubelet"
Dec 12 15:40:19 crc kubenswrapper[5123]: I1212 15:40:19.404436 5123 ???:1] "http: TLS handshake error from 192.168.126.11:45096: no serving certificate available for the kubelet"
Dec 12 15:40:19 crc kubenswrapper[5123]: I1212 15:40:19.404910 5123 ???:1] "http: TLS handshake error from 192.168.126.11:45108: no serving certificate available for the kubelet"
Dec 12 15:40:19 crc kubenswrapper[5123]: I1212 15:40:19.412949 5123 ???:1] "http: TLS handshake error from 192.168.126.11:45124: no serving certificate available for the kubelet"
Dec 12 15:40:19 crc kubenswrapper[5123]: I1212 15:40:19.812813 5123 ???:1] "http: TLS handshake error from 192.168.126.11:45136: no serving certificate available for the kubelet"
Dec 12 15:40:19 crc kubenswrapper[5123]: I1212 15:40:19.819152 5123 ???:1] "http: TLS handshake error from 192.168.126.11:45148: no serving certificate available for the kubelet"
Dec 12 15:40:19 crc kubenswrapper[5123]: I1212 15:40:19.880190 5123 ???:1] "http: TLS handshake error from 192.168.126.11:45160: no serving certificate available for the kubelet"
Dec 12 15:40:20 crc kubenswrapper[5123]: I1212 15:40:20.036645 5123 ???:1] "http: TLS handshake error from 192.168.126.11:45174: no serving certificate available for the kubelet"
Dec 12 15:40:20 crc kubenswrapper[5123]: I1212 15:40:20.381695 5123 ???:1] "http: TLS handshake error from 192.168.126.11:45180: no serving certificate available for the kubelet"
Dec 12 15:40:20 crc kubenswrapper[5123]: I1212 15:40:20.384744 5123 ???:1] "http: TLS handshake error from 192.168.126.11:45196: no serving certificate available for the kubelet"
Dec 12 15:40:20 crc kubenswrapper[5123]: I1212 15:40:20.415528 5123 ???:1] "http: TLS handshake error from 192.168.126.11:45198: no serving certificate available for the kubelet"
Dec 12 15:40:20 crc kubenswrapper[5123]: I1212 15:40:20.598089 5123 ???:1] "http: TLS handshake error from 192.168.126.11:45212: no serving certificate available for the kubelet"
Dec 12 15:40:20 crc kubenswrapper[5123]: I1212 15:40:20.605892 5123 ???:1] "http: TLS handshake error from 192.168.126.11:45220: no serving certificate available for the kubelet"
Dec 12 15:40:20 crc kubenswrapper[5123]: I1212 15:40:20.689828 5123 ???:1] "http: TLS handshake error from 192.168.126.11:45236: no serving certificate available for the kubelet"
Dec 12 15:40:20 crc kubenswrapper[5123]: I1212 15:40:20.691413 5123 ???:1] "http: TLS handshake error from 192.168.126.11:45238: no serving certificate available for the kubelet"
Dec 12 15:40:20 crc kubenswrapper[5123]: I1212 15:40:20.845417 5123 ???:1] "http: TLS handshake error from 192.168.126.11:45254: no serving certificate available for the kubelet"
Dec 12 15:40:20 crc kubenswrapper[5123]: I1212 15:40:20.864518 5123 ???:1] "http: TLS handshake error from 192.168.126.11:45256: no serving certificate available for the kubelet"
Dec 12 15:40:20 crc kubenswrapper[5123]: I1212 15:40:20.875907 5123 ???:1] "http: TLS handshake error from 192.168.126.11:45260: no serving certificate available for the kubelet"
Dec 12 15:40:21 crc kubenswrapper[5123]: I1212 15:40:21.162416 5123 ???:1] "http: TLS handshake error from 192.168.126.11:45274: no serving certificate available for the kubelet"
Dec 12 15:40:21 crc kubenswrapper[5123]: I1212 15:40:21.163301 5123 ???:1] "http: TLS handshake error from 192.168.126.11:45288: no serving certificate available for the kubelet"
Dec 12 15:40:21 crc kubenswrapper[5123]: I1212 15:40:21.194306 5123 ???:1] "http: TLS handshake error from 192.168.126.11:45290: no serving certificate available for the kubelet"
Dec 12 15:40:21 crc kubenswrapper[5123]: I1212 15:40:21.365084 5123 ???:1] "http: TLS handshake error from 192.168.126.11:45304: no serving certificate available for the kubelet"
Dec 12 15:40:21 crc kubenswrapper[5123]: I1212 15:40:21.477911 5123 ???:1] "http: TLS handshake error from 192.168.126.11:45310: no serving certificate available for the kubelet"
Dec 12 15:40:21 crc kubenswrapper[5123]: I1212 15:40:21.684265 5123 ???:1] "http: TLS handshake error from 192.168.126.11:45320: no serving certificate available for the kubelet"
Dec 12 15:40:21 crc kubenswrapper[5123]: I1212 15:40:21.692520 5123 ???:1] "http: TLS handshake error from 192.168.126.11:45332: no serving certificate available for the kubelet"
Dec 12 15:40:21 crc kubenswrapper[5123]: I1212 15:40:21.743361 5123 ???:1] "http: TLS handshake error from 192.168.126.11:45346: no serving certificate available for the kubelet"
Dec 12 15:40:21 crc kubenswrapper[5123]: I1212 15:40:21.912198 5123 ???:1] "http: TLS handshake error from 192.168.126.11:45352: no serving certificate available for the kubelet"
Dec 12 15:40:21 crc kubenswrapper[5123]: I1212 15:40:21.929227 5123 ???:1] "http: TLS handshake error from 192.168.126.11:45354: no serving certificate available for the kubelet"
Dec 12 15:40:21 crc kubenswrapper[5123]: I1212 15:40:21.950428 5123 ???:1] "http: TLS handshake error from 192.168.126.11:45364: no serving certificate available for the kubelet"
Dec 12 15:40:34 crc kubenswrapper[5123]: I1212 15:40:34.315025 5123 ???:1] "http: TLS handshake error from 192.168.126.11:45508: no serving certificate available for the kubelet"
Dec 12 15:40:34 crc kubenswrapper[5123]: I1212 15:40:34.493465 5123 ???:1] "http: TLS handshake error from 192.168.126.11:45520: no serving certificate available for the kubelet"
Dec 12 15:40:34 crc kubenswrapper[5123]: I1212 15:40:34.513974 5123 ???:1] "http: TLS handshake error from 192.168.126.11:45532: no serving certificate available for the kubelet"
Dec 12 15:40:34 crc kubenswrapper[5123]: I1212 15:40:34.657956 5123 ???:1] "http: TLS handshake error from 192.168.126.11:45540: no serving certificate available for the kubelet"
Dec 12 15:40:34 crc kubenswrapper[5123]: I1212 15:40:34.689888 5123 ???:1] "http: TLS handshake error from 192.168.126.11:45550: no serving certificate available for the kubelet"
Dec 12 15:40:59 crc kubenswrapper[5123]: I1212 15:40:59.152378 5123 ???:1] "http: TLS handshake error from 192.168.126.11:42508: no serving certificate available for the kubelet"
Dec 12 15:41:00 crc kubenswrapper[5123]: I1212 15:41:00.902823 5123 patch_prober.go:28] interesting pod/machine-config-daemon-cs4j6 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Dec 12 15:41:00 crc kubenswrapper[5123]: I1212 15:41:00.903359 5123 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-cs4j6" podUID="cc1dd55e-ab31-4be3-acf1-ef01a53c1bb4" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Dec 12 15:41:22 crc kubenswrapper[5123]: I1212 15:41:22.976649 5123 generic.go:358] "Generic (PLEG): container finished" podID="3c6003a4-c201-4867-bba6-0a8d4c275b40" containerID="0e56db2ba140453666facd1f117d2129a0f0ca66714258f7eb17fff373a75bf5" exitCode=0
Dec 12 15:41:22 crc kubenswrapper[5123]: I1212 15:41:22.976763 5123 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-g7l9q/must-gather-n767c" event={"ID":"3c6003a4-c201-4867-bba6-0a8d4c275b40","Type":"ContainerDied","Data":"0e56db2ba140453666facd1f117d2129a0f0ca66714258f7eb17fff373a75bf5"}
Dec 12 15:41:22 crc kubenswrapper[5123]: I1212 15:41:22.978096 5123 scope.go:117] "RemoveContainer" containerID="0e56db2ba140453666facd1f117d2129a0f0ca66714258f7eb17fff373a75bf5"
Dec 12 15:41:30 crc kubenswrapper[5123]: I1212 15:41:30.902021 5123 patch_prober.go:28] interesting pod/machine-config-daemon-cs4j6 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Dec 12 15:41:30 crc kubenswrapper[5123]: I1212 15:41:30.902626 5123 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-cs4j6" podUID="cc1dd55e-ab31-4be3-acf1-ef01a53c1bb4" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Dec 12 15:41:32 crc kubenswrapper[5123]: I1212 15:41:32.231364 5123 ???:1] "http: TLS handshake error from 192.168.126.11:42040: no serving certificate available for the kubelet"
Dec 12 15:41:32 crc kubenswrapper[5123]: I1212 15:41:32.362997 5123 ???:1] "http: TLS handshake error from 192.168.126.11:42044: no serving certificate available for the kubelet"
Dec 12 15:41:32 crc kubenswrapper[5123]: I1212 15:41:32.375001 5123 ???:1] "http: TLS handshake error from 192.168.126.11:42052: no serving certificate available for the kubelet"
Dec 12 15:41:32 crc kubenswrapper[5123]: I1212 15:41:32.401602 5123 ???:1] "http: TLS handshake error from 192.168.126.11:42068: no serving certificate available for the kubelet"
Dec 12 15:41:32 crc kubenswrapper[5123]: I1212 15:41:32.413436 5123 ???:1] "http: TLS handshake error from 192.168.126.11:42072: no serving certificate available for the kubelet"
Dec 12 15:41:32 crc kubenswrapper[5123]: I1212 15:41:32.428596 5123 ???:1] "http: TLS handshake error from 192.168.126.11:42076: no serving certificate available for the kubelet"
Dec 12 15:41:32 crc kubenswrapper[5123]: I1212 15:41:32.441312 5123 ???:1] "http: TLS handshake error from 192.168.126.11:42080: no serving certificate available for the kubelet"
Dec 12 15:41:32 crc kubenswrapper[5123]: I1212 15:41:32.455018 5123 ???:1] "http: TLS handshake error from 192.168.126.11:42096: no serving certificate available for the kubelet"
Dec 12 15:41:32 crc kubenswrapper[5123]: I1212 15:41:32.466413 5123 ???:1] "http: TLS handshake error from 192.168.126.11:42100: no serving certificate available for the kubelet"
Dec 12 15:41:32 crc kubenswrapper[5123]: I1212 15:41:32.581097 5123 ???:1] "http: TLS handshake error from 192.168.126.11:42108: no serving certificate available for the kubelet"
Dec 12 15:41:32 crc kubenswrapper[5123]: I1212 15:41:32.592932 5123 ???:1] "http: TLS handshake error from 192.168.126.11:42114: no serving certificate available for the kubelet"
Dec 12 15:41:32 crc kubenswrapper[5123]: I1212 15:41:32.616496 5123 ???:1] "http: TLS handshake error from 192.168.126.11:42124: no serving certificate available for the kubelet"
Dec 12 15:41:32 crc kubenswrapper[5123]: I1212 15:41:32.630736 5123 ???:1] "http: TLS handshake error from 192.168.126.11:42140: no serving certificate available for the kubelet"
Dec 12 15:41:32 crc kubenswrapper[5123]: I1212 15:41:32.651540 5123 ???:1] "http: TLS handshake error from 192.168.126.11:42144: no serving certificate available for the kubelet"
Dec 12 15:41:32 crc kubenswrapper[5123]: I1212 15:41:32.664316 5123 ???:1] "http: TLS handshake error from 192.168.126.11:42150: no serving certificate available for the kubelet"
Dec 12 15:41:32 crc kubenswrapper[5123]: I1212 15:41:32.684053 5123 ???:1] "http: TLS handshake error from 192.168.126.11:42158: no serving certificate available for the kubelet"
Dec 12 15:41:32 crc kubenswrapper[5123]: I1212 15:41:32.695534 5123 ???:1] "http: TLS handshake error from 192.168.126.11:42162: no serving certificate available for the kubelet"
Dec 12 15:41:36 crc kubenswrapper[5123]: I1212 15:41:36.034681 5123 ???:1] "http: TLS handshake error from 192.168.126.11:42170: no serving certificate available for the kubelet"
Dec 12 15:41:37 crc kubenswrapper[5123]: I1212 15:41:37.745501 5123 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-g7l9q/must-gather-n767c"]
Dec 12 15:41:37 crc kubenswrapper[5123]: I1212 15:41:37.746602 5123 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-must-gather-g7l9q/must-gather-n767c" podUID="3c6003a4-c201-4867-bba6-0a8d4c275b40" containerName="copy" containerID="cri-o://f770c6ceac1ea7d93be38233c074f022b1c2d1528fb9f993e774ba61970748ad" gracePeriod=2
Dec 12 15:41:37 crc kubenswrapper[5123]: I1212 15:41:37.752019 5123 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-g7l9q/must-gather-n767c"]
Dec 12 15:41:37 crc kubenswrapper[5123]: I1212 15:41:37.777818 5123 status_manager.go:895] "Failed to get status for pod" podUID="3c6003a4-c201-4867-bba6-0a8d4c275b40" pod="openshift-must-gather-g7l9q/must-gather-n767c" err="pods \"must-gather-n767c\" is forbidden: User \"system:node:crc\" cannot get resource \"pods\" in API group \"\" in the namespace \"openshift-must-gather-g7l9q\": no relationship found between node 'crc' and this object"
Dec 12 15:41:38 crc kubenswrapper[5123]: I1212 15:41:38.213497 5123 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-g7l9q_must-gather-n767c_3c6003a4-c201-4867-bba6-0a8d4c275b40/copy/0.log"
Dec 12 15:41:38 crc kubenswrapper[5123]: I1212 15:41:38.214211 5123 generic.go:358] "Generic (PLEG): container finished" podID="3c6003a4-c201-4867-bba6-0a8d4c275b40" containerID="f770c6ceac1ea7d93be38233c074f022b1c2d1528fb9f993e774ba61970748ad" exitCode=143
Dec 12 15:41:38 crc kubenswrapper[5123]: I1212 15:41:38.387608 5123 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-g7l9q_must-gather-n767c_3c6003a4-c201-4867-bba6-0a8d4c275b40/copy/0.log"
Dec 12 15:41:38 crc kubenswrapper[5123]: I1212 15:41:38.388394 5123 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-g7l9q/must-gather-n767c"
Dec 12 15:41:38 crc kubenswrapper[5123]: I1212 15:41:38.390080 5123 status_manager.go:895] "Failed to get status for pod" podUID="3c6003a4-c201-4867-bba6-0a8d4c275b40" pod="openshift-must-gather-g7l9q/must-gather-n767c" err="pods \"must-gather-n767c\" is forbidden: User \"system:node:crc\" cannot get resource \"pods\" in API group \"\" in the namespace \"openshift-must-gather-g7l9q\": no relationship found between node 'crc' and this object"
Dec 12 15:41:38 crc kubenswrapper[5123]: I1212 15:41:38.482666 5123 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8sk8f\" (UniqueName: \"kubernetes.io/projected/3c6003a4-c201-4867-bba6-0a8d4c275b40-kube-api-access-8sk8f\") pod \"3c6003a4-c201-4867-bba6-0a8d4c275b40\" (UID: \"3c6003a4-c201-4867-bba6-0a8d4c275b40\") "
Dec 12 15:41:38 crc kubenswrapper[5123]: I1212 15:41:38.482953 5123 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/3c6003a4-c201-4867-bba6-0a8d4c275b40-must-gather-output\") pod \"3c6003a4-c201-4867-bba6-0a8d4c275b40\" (UID: \"3c6003a4-c201-4867-bba6-0a8d4c275b40\") "
Dec 12 15:41:38 crc kubenswrapper[5123]: I1212 15:41:38.490622 5123 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3c6003a4-c201-4867-bba6-0a8d4c275b40-kube-api-access-8sk8f" (OuterVolumeSpecName: "kube-api-access-8sk8f") pod "3c6003a4-c201-4867-bba6-0a8d4c275b40" (UID: "3c6003a4-c201-4867-bba6-0a8d4c275b40"). InnerVolumeSpecName "kube-api-access-8sk8f". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Dec 12 15:41:38 crc kubenswrapper[5123]: I1212 15:41:38.544015 5123 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3c6003a4-c201-4867-bba6-0a8d4c275b40-must-gather-output" (OuterVolumeSpecName: "must-gather-output") pod "3c6003a4-c201-4867-bba6-0a8d4c275b40" (UID: "3c6003a4-c201-4867-bba6-0a8d4c275b40"). InnerVolumeSpecName "must-gather-output". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Dec 12 15:41:38 crc kubenswrapper[5123]: I1212 15:41:38.585171 5123 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-8sk8f\" (UniqueName: \"kubernetes.io/projected/3c6003a4-c201-4867-bba6-0a8d4c275b40-kube-api-access-8sk8f\") on node \"crc\" DevicePath \"\""
Dec 12 15:41:38 crc kubenswrapper[5123]: I1212 15:41:38.585238 5123 reconciler_common.go:299] "Volume detached for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/3c6003a4-c201-4867-bba6-0a8d4c275b40-must-gather-output\") on node \"crc\" DevicePath \"\""
Dec 12 15:41:39 crc kubenswrapper[5123]: I1212 15:41:39.225731 5123 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-g7l9q_must-gather-n767c_3c6003a4-c201-4867-bba6-0a8d4c275b40/copy/0.log"
Dec 12 15:41:39 crc kubenswrapper[5123]: I1212 15:41:39.227347 5123 scope.go:117] "RemoveContainer" containerID="f770c6ceac1ea7d93be38233c074f022b1c2d1528fb9f993e774ba61970748ad"
Dec 12 15:41:39 crc kubenswrapper[5123]: I1212 15:41:39.227503 5123 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-g7l9q/must-gather-n767c"
Dec 12 15:41:39 crc kubenswrapper[5123]: I1212 15:41:39.236497 5123 status_manager.go:895] "Failed to get status for pod" podUID="3c6003a4-c201-4867-bba6-0a8d4c275b40" pod="openshift-must-gather-g7l9q/must-gather-n767c" err="pods \"must-gather-n767c\" is forbidden: User \"system:node:crc\" cannot get resource \"pods\" in API group \"\" in the namespace \"openshift-must-gather-g7l9q\": no relationship found between node 'crc' and this object"
Dec 12 15:41:39 crc kubenswrapper[5123]: I1212 15:41:39.262121 5123 status_manager.go:895] "Failed to get status for pod" podUID="3c6003a4-c201-4867-bba6-0a8d4c275b40" pod="openshift-must-gather-g7l9q/must-gather-n767c" err="pods \"must-gather-n767c\" is forbidden: User \"system:node:crc\" cannot get resource \"pods\" in API group \"\" in the namespace \"openshift-must-gather-g7l9q\": no relationship found between node 'crc' and this object"
Dec 12 15:41:39 crc kubenswrapper[5123]: I1212 15:41:39.272474 5123 scope.go:117] "RemoveContainer" containerID="0e56db2ba140453666facd1f117d2129a0f0ca66714258f7eb17fff373a75bf5"
Dec 12 15:41:39 crc kubenswrapper[5123]: I1212 15:41:39.650390 5123 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3c6003a4-c201-4867-bba6-0a8d4c275b40" path="/var/lib/kubelet/pods/3c6003a4-c201-4867-bba6-0a8d4c275b40/volumes"
Dec 12 15:42:00 crc kubenswrapper[5123]: I1212 15:42:00.902254 5123 patch_prober.go:28] interesting pod/machine-config-daemon-cs4j6 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Dec 12 15:42:00 crc kubenswrapper[5123]: I1212 15:42:00.903013 5123 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-cs4j6" podUID="cc1dd55e-ab31-4be3-acf1-ef01a53c1bb4" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Dec 12 15:42:00 crc kubenswrapper[5123]: I1212 15:42:00.903102 5123 kubelet.go:2658] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-cs4j6"
Dec 12 15:42:00 crc kubenswrapper[5123]: I1212 15:42:00.904028 5123 kuberuntime_manager.go:1107] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"83cd9799bca9d398afc04a6802de0b3b4e904da201b7f8be51bf382c9e373922"} pod="openshift-machine-config-operator/machine-config-daemon-cs4j6" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Dec 12 15:42:00 crc kubenswrapper[5123]: I1212 15:42:00.904159 5123 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-cs4j6" podUID="cc1dd55e-ab31-4be3-acf1-ef01a53c1bb4" containerName="machine-config-daemon" containerID="cri-o://83cd9799bca9d398afc04a6802de0b3b4e904da201b7f8be51bf382c9e373922" gracePeriod=600
Dec 12 15:42:01 crc kubenswrapper[5123]: I1212 15:42:01.131778 5123 provider.go:93] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Dec 12 15:42:01 crc kubenswrapper[5123]: I1212 15:42:01.475720 5123 generic.go:358] "Generic (PLEG): container finished" podID="cc1dd55e-ab31-4be3-acf1-ef01a53c1bb4" containerID="83cd9799bca9d398afc04a6802de0b3b4e904da201b7f8be51bf382c9e373922" exitCode=0
Dec 12 15:42:01 crc kubenswrapper[5123]: I1212 15:42:01.476596 5123 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-cs4j6" event={"ID":"cc1dd55e-ab31-4be3-acf1-ef01a53c1bb4","Type":"ContainerDied","Data":"83cd9799bca9d398afc04a6802de0b3b4e904da201b7f8be51bf382c9e373922"}
Dec 12 15:42:01 crc kubenswrapper[5123]: I1212 15:42:01.476636 5123 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-cs4j6" event={"ID":"cc1dd55e-ab31-4be3-acf1-ef01a53c1bb4","Type":"ContainerStarted","Data":"377b355d960f2b4126d023a093b3577cdb2eacff3c00f07bb1b8d7809d4d60d3"}
Dec 12 15:42:01 crc kubenswrapper[5123]: I1212 15:42:01.476656 5123 scope.go:117] "RemoveContainer" containerID="c2cf9081a67059ac5a079b8f43fd2aed11cbd262496baea709c4ede2e91cdc0e"